00:00:00.000 Started by upstream project "autotest-per-patch" build number 130929 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.065 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/vfio-user-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.066 The recommended git tool is: git 00:00:00.066 using credential 00000000-0000-0000-0000-000000000002 00:00:00.068 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/vfio-user-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.112 Fetching changes from the remote Git repository 00:00:00.114 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.169 Using shallow fetch with depth 1 00:00:00.169 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.169 > git --version # timeout=10 00:00:00.219 > git --version # 'git version 2.39.2' 00:00:00.219 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.246 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.246 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.854 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.866 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.877 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:05.877 > git config core.sparsecheckout # timeout=10 00:00:05.888 > git read-tree -mu HEAD # timeout=10 00:00:05.905 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:05.922 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:05.922 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:06.080 [Pipeline] Start of Pipeline 00:00:06.095 [Pipeline] library 00:00:06.096 Loading library shm_lib@master 00:00:06.097 Library shm_lib@master is cached. Copying from home. 00:00:06.115 [Pipeline] node 00:00:06.132 Running on WFP4 in /var/jenkins/workspace/vfio-user-phy-autotest 00:00:06.133 [Pipeline] { 00:00:06.140 [Pipeline] catchError 00:00:06.141 [Pipeline] { 00:00:06.148 [Pipeline] wrap 00:00:06.153 [Pipeline] { 00:00:06.161 [Pipeline] stage 00:00:06.163 [Pipeline] { (Prologue) 00:00:06.367 [Pipeline] sh 00:00:06.648 + logger -p user.info -t JENKINS-CI 00:00:06.664 [Pipeline] echo 00:00:06.666 Node: WFP4 00:00:06.673 [Pipeline] sh 00:00:06.968 [Pipeline] setCustomBuildProperty 00:00:06.976 [Pipeline] echo 00:00:06.978 Cleanup processes 00:00:06.982 [Pipeline] sh 00:00:07.262 + sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk 00:00:07.262 1908611 sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk 00:00:07.273 [Pipeline] sh 00:00:07.553 ++ sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk 00:00:07.553 ++ grep -v 'sudo pgrep' 00:00:07.553 ++ awk '{print $1}' 00:00:07.553 + sudo kill -9 00:00:07.553 + true 00:00:07.565 [Pipeline] cleanWs 00:00:07.574 [WS-CLEANUP] Deleting project workspace... 00:00:07.574 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.580 [WS-CLEANUP] done 00:00:07.583 [Pipeline] setCustomBuildProperty 00:00:07.596 [Pipeline] sh 00:00:07.874 + sudo git config --global --replace-all safe.directory '*' 00:00:07.966 [Pipeline] httpRequest 00:00:08.404 [Pipeline] echo 00:00:08.407 Sorcerer 10.211.164.101 is alive 00:00:08.415 [Pipeline] retry 00:00:08.416 [Pipeline] { 00:00:08.429 [Pipeline] httpRequest 00:00:08.434 HttpMethod: GET 00:00:08.434 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.434 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.436 Response Code: HTTP/1.1 200 OK 00:00:08.436 Success: Status code 200 is in the accepted range: 200,404 00:00:08.437 Saving response body to /var/jenkins/workspace/vfio-user-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:09.626 [Pipeline] } 00:00:09.643 [Pipeline] // retry 00:00:09.650 [Pipeline] sh 00:00:09.941 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:09.953 [Pipeline] httpRequest 00:00:10.312 [Pipeline] echo 00:00:10.314 Sorcerer 10.211.164.101 is alive 00:00:10.323 [Pipeline] retry 00:00:10.325 [Pipeline] { 00:00:10.340 [Pipeline] httpRequest 00:00:10.344 HttpMethod: GET 00:00:10.345 URL: http://10.211.164.101/packages/spdk_6101e4048d5400f2ba64e4378da28dc592756098.tar.gz 00:00:10.345 Sending request to url: http://10.211.164.101/packages/spdk_6101e4048d5400f2ba64e4378da28dc592756098.tar.gz 00:00:10.369 Response Code: HTTP/1.1 200 OK 00:00:10.369 Success: Status code 200 is in the accepted range: 200,404 00:00:10.369 Saving response body to /var/jenkins/workspace/vfio-user-phy-autotest/spdk_6101e4048d5400f2ba64e4378da28dc592756098.tar.gz 00:01:13.275 [Pipeline] } 00:01:13.300 [Pipeline] // retry 00:01:13.351 [Pipeline] sh 00:01:13.638 + tar --no-same-owner -xf spdk_6101e4048d5400f2ba64e4378da28dc592756098.tar.gz 00:01:16.185 [Pipeline] sh 00:01:16.467 + git -C spdk log --oneline -n5 00:01:16.467 6101e4048 vhost: defer the g_fini_cb after called 00:01:16.467 92108e0a2 fsdev/aio: add support for null IOs 00:01:16.467 dcdab59d3 lib/reduce: Check return code of read superblock 00:01:16.467 95d9d27f7 bdev/nvme: controller failover/multipath doc change 00:01:16.467 f366dac4a bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create() 00:01:16.478 [Pipeline] } 00:01:16.492 [Pipeline] // stage 00:01:16.501 [Pipeline] stage 00:01:16.504 [Pipeline] { (Prepare) 00:01:16.526 [Pipeline] writeFile 00:01:16.542 [Pipeline] sh 00:01:16.826 + logger -p user.info -t JENKINS-CI 00:01:16.838 [Pipeline] sh 00:01:17.121 + logger -p user.info -t JENKINS-CI 00:01:17.133 [Pipeline] sh 00:01:17.417 + cat autorun-spdk.conf 00:01:17.417 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:17.417 SPDK_TEST_VFIOUSER_QEMU=1 00:01:17.417 SPDK_RUN_ASAN=1 00:01:17.417 SPDK_RUN_UBSAN=1 00:01:17.417 SPDK_TEST_SMA=1 00:01:17.424 RUN_NIGHTLY=0 00:01:17.429 [Pipeline] readFile 00:01:17.452 [Pipeline] copyArtifacts 00:01:20.357 Copied 1 artifact from "qemu-vfio" build number 34 00:01:20.362 [Pipeline] sh 00:01:20.678 + tar xf qemu-vfio.tar.gz 00:01:22.602 [Pipeline] copyArtifacts 00:01:22.622 Copied 1 artifact from "vagrant-build-vhost" build number 6 00:01:22.626 [Pipeline] sh 00:01:22.921 + sudo mkdir -p /var/spdk/dependencies/vhost 00:01:22.931 [Pipeline] sh 00:01:23.205 + cd /var/spdk/dependencies/vhost 00:01:23.205 + md5sum --quiet -c /var/jenkins/workspace/vfio-user-phy-autotest/spdk_test_image.qcow2.gz.md5 00:01:25.752 [Pipeline] withEnv 
00:01:25.754 [Pipeline] { 00:01:25.769 [Pipeline] sh 00:01:26.046 + set -ex 00:01:26.046 + [[ -f /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf ]] 00:01:26.046 + source /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf 00:01:26.046 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.046 ++ SPDK_TEST_VFIOUSER_QEMU=1 00:01:26.046 ++ SPDK_RUN_ASAN=1 00:01:26.046 ++ SPDK_RUN_UBSAN=1 00:01:26.046 ++ SPDK_TEST_SMA=1 00:01:26.046 ++ RUN_NIGHTLY=0 00:01:26.046 + case $SPDK_TEST_NVMF_NICS in 00:01:26.046 + DRIVERS= 00:01:26.046 + [[ -n '' ]] 00:01:26.046 + exit 0 00:01:26.055 [Pipeline] } 00:01:26.071 [Pipeline] // withEnv 00:01:26.076 [Pipeline] } 00:01:26.090 [Pipeline] // stage 00:01:26.100 [Pipeline] catchError 00:01:26.102 [Pipeline] { 00:01:26.116 [Pipeline] timeout 00:01:26.116 Timeout set to expire in 35 min 00:01:26.118 [Pipeline] { 00:01:26.132 [Pipeline] stage 00:01:26.134 [Pipeline] { (Tests) 00:01:26.148 [Pipeline] sh 00:01:26.430 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/vfio-user-phy-autotest 00:01:26.430 ++ readlink -f /var/jenkins/workspace/vfio-user-phy-autotest 00:01:26.430 + DIR_ROOT=/var/jenkins/workspace/vfio-user-phy-autotest 00:01:26.430 + [[ -n /var/jenkins/workspace/vfio-user-phy-autotest ]] 00:01:26.430 + DIR_SPDK=/var/jenkins/workspace/vfio-user-phy-autotest/spdk 00:01:26.430 + DIR_OUTPUT=/var/jenkins/workspace/vfio-user-phy-autotest/output 00:01:26.430 + [[ -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk ]] 00:01:26.430 + [[ ! -d /var/jenkins/workspace/vfio-user-phy-autotest/output ]] 00:01:26.430 + mkdir -p /var/jenkins/workspace/vfio-user-phy-autotest/output 00:01:26.430 + [[ -d /var/jenkins/workspace/vfio-user-phy-autotest/output ]] 00:01:26.430 + [[ vfio-user-phy-autotest == pkgdep-* ]] 00:01:26.430 + cd /var/jenkins/workspace/vfio-user-phy-autotest 00:01:26.430 + source /etc/os-release 00:01:26.430 ++ NAME='Fedora Linux' 00:01:26.430 ++ VERSION='39 (Cloud Edition)' 00:01:26.430 ++ ID=fedora 00:01:26.430 ++ VERSION_ID=39 00:01:26.430 ++ VERSION_CODENAME= 00:01:26.430 ++ PLATFORM_ID=platform:f39 00:01:26.430 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:26.430 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:26.430 ++ LOGO=fedora-logo-icon 00:01:26.430 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:26.430 ++ HOME_URL=https://fedoraproject.org/ 00:01:26.430 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:26.430 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:26.430 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:26.430 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:26.430 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:26.430 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:26.430 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:26.430 ++ SUPPORT_END=2024-11-12 00:01:26.430 ++ VARIANT='Cloud Edition' 00:01:26.430 ++ VARIANT_ID=cloud 00:01:26.430 + uname -a 00:01:26.430 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:01:26.430 + sudo /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh status 00:01:28.983 Hugepages 00:01:28.983 node hugesize free / total 00:01:28.983 node0 1048576kB 0 / 0 00:01:28.983 node0 2048kB 0 / 0 00:01:28.983 node1 1048576kB 0 / 0 00:01:28.983 node1 2048kB 0 / 0 00:01:28.983 00:01:28.983 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:28.983 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:28.983 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:28.983 I/OAT 
0000:00:04.2 8086 2021 0 ioatdma - - 00:01:28.983 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:28.983 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:28.983 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:28.983 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:28.983 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:28.983 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:28.983 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:28.983 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:28.983 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:28.983 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:28.983 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:28.983 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:28.983 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:28.983 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:28.983 + rm -f /tmp/spdk-ld-path 00:01:28.983 + source autorun-spdk.conf 00:01:28.983 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.983 ++ SPDK_TEST_VFIOUSER_QEMU=1 00:01:28.983 ++ SPDK_RUN_ASAN=1 00:01:28.983 ++ SPDK_RUN_UBSAN=1 00:01:28.983 ++ SPDK_TEST_SMA=1 00:01:28.983 ++ RUN_NIGHTLY=0 00:01:28.983 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:28.983 + [[ -n '' ]] 00:01:28.983 + sudo git config --global --add safe.directory /var/jenkins/workspace/vfio-user-phy-autotest/spdk 00:01:28.983 + for M in /var/spdk/build-*-manifest.txt 00:01:28.983 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:28.983 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/ 00:01:28.983 + for M in /var/spdk/build-*-manifest.txt 00:01:28.983 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:28.983 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/ 00:01:28.983 + for M in /var/spdk/build-*-manifest.txt 00:01:28.983 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:28.983 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/vfio-user-phy-autotest/output/ 00:01:28.983 ++ uname 00:01:28.983 + [[ Linux == \L\i\n\u\x ]] 00:01:28.983 + sudo dmesg -T 00:01:28.983 + sudo dmesg --clear 00:01:28.983 + dmesg_pid=1909558 00:01:28.983 + [[ Fedora Linux == FreeBSD ]] 00:01:28.983 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.983 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.983 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:28.983 + sudo dmesg -Tw 00:01:28.983 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:28.983 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:28.983 + [[ -x /usr/src/fio-static/fio ]] 00:01:28.983 + export FIO_BIN=/usr/src/fio-static/fio 00:01:28.983 + FIO_BIN=/usr/src/fio-static/fio 00:01:28.983 + [[ /var/jenkins/workspace/vfio-user-phy-autotest/qemu_vfio/bin/qemu-system-x86_64 == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\v\f\i\o\-\u\s\e\r\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:28.983 ++ ldd /var/jenkins/workspace/vfio-user-phy-autotest/qemu_vfio/bin/qemu-system-x86_64 00:01:29.242 + deps=' linux-vdso.so.1 (0x00007ffe3e440000) 00:01:29.242 libpixman-1.so.0 => /usr/lib64/libpixman-1.so.0 (0x00007fd4acac8000) 00:01:29.242 libz.so.1 => /usr/lib64/libz.so.1 (0x00007fd4acaae000) 00:01:29.242 libudev.so.1 => /usr/lib64/libudev.so.1 (0x00007fd4aca77000) 00:01:29.242 libpmem.so.1 => /usr/lib64/libpmem.so.1 (0x00007fd4aca1e000) 00:01:29.242 libdaxctl.so.1 => /usr/lib64/libdaxctl.so.1 (0x00007fd4aca11000) 00:01:29.242 libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007fd4aca02000) 00:01:29.242 libgio-2.0.so.0 
=> /usr/lib64/libgio-2.0.so.0 (0x00007fd4ac828000) 00:01:29.242 libgobject-2.0.so.0 => /usr/lib64/libgobject-2.0.so.0 (0x00007fd4ac7c8000) 00:01:29.242 libglib-2.0.so.0 => /usr/lib64/libglib-2.0.so.0 (0x00007fd4ac67e000) 00:01:29.242 librdmacm.so.1 => /usr/lib64/librdmacm.so.1 (0x00007fd4ac662000) 00:01:29.242 libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00007fd4ac640000) 00:01:29.242 libslirp.so.0 => /usr/lib64/libslirp.so.0 (0x00007fd4ac61e000) 00:01:29.242 libbpf.so.0 => not found 00:01:29.242 libncursesw.so.6 => /usr/lib64/libncursesw.so.6 (0x00007fd4ac5dd000) 00:01:29.242 libtinfo.so.6 => /usr/lib64/libtinfo.so.6 (0x00007fd4ac5a8000) 00:01:29.242 libgmodule-2.0.so.0 => /usr/lib64/libgmodule-2.0.so.0 (0x00007fd4ac5a1000) 00:01:29.242 liburing.so.2 => /usr/lib64/liburing.so.2 (0x00007fd4ac599000) 00:01:29.242 libfuse3.so.3 => /usr/lib64/libfuse3.so.3 (0x00007fd4ac557000) 00:01:29.242 libiscsi.so.9 => /usr/lib64/iscsi/libiscsi.so.9 (0x00007fd4ac527000) 00:01:29.242 libaio.so.1 => /usr/lib64/libaio.so.1 (0x00007fd4ac522000) 00:01:29.242 librbd.so.1 => /usr/lib64/librbd.so.1 (0x00007fd4abc67000) 00:01:29.242 librados.so.2 => /usr/lib64/librados.so.2 (0x00007fd4aba9f000) 00:01:29.242 libm.so.6 => /usr/lib64/libm.so.6 (0x00007fd4ab9be000) 00:01:29.242 libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007fd4ab999000) 00:01:29.242 libc.so.6 => /usr/lib64/libc.so.6 (0x00007fd4ab7b5000) 00:01:29.242 /lib64/ld-linux-x86-64.so.2 (0x00007fd4adc2c000) 00:01:29.243 libcap.so.2 => /usr/lib64/libcap.so.2 (0x00007fd4ab7ab000) 00:01:29.243 libndctl.so.6 => /usr/lib64/libndctl.so.6 (0x00007fd4ab77e000) 00:01:29.243 libuuid.so.1 => /usr/lib64/libuuid.so.1 (0x00007fd4ab774000) 00:01:29.243 libkmod.so.2 => /usr/lib64/libkmod.so.2 (0x00007fd4ab758000) 00:01:29.243 libmount.so.1 => /usr/lib64/libmount.so.1 (0x00007fd4ab705000) 00:01:29.243 libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x00007fd4ab6d8000) 00:01:29.243 libffi.so.8 => /usr/lib64/libffi.so.8 (0x00007fd4ab6c8000) 00:01:29.243 libpcre2-8.so.0 => /usr/lib64/libpcre2-8.so.0 (0x00007fd4ab62d000) 00:01:29.243 libnl-3.so.200 => /usr/lib64/libnl-3.so.200 (0x00007fd4ab608000) 00:01:29.243 libnl-route-3.so.200 => /usr/lib64/libnl-route-3.so.200 (0x00007fd4ab570000) 00:01:29.243 libgcrypt.so.20 => /usr/lib64/libgcrypt.so.20 (0x00007fd4ab436000) 00:01:29.243 libssl.so.3 => /usr/lib64/libssl.so.3 (0x00007fd4ab393000) 00:01:29.243 libcryptsetup.so.12 => /usr/lib64/libcryptsetup.so.12 (0x00007fd4ab312000) 00:01:29.243 libceph-common.so.2 => /usr/lib64/ceph/libceph-common.so.2 (0x00007fd4aa6e2000) 00:01:29.243 libcrypto.so.3 => /usr/lib64/libcrypto.so.3 (0x00007fd4aa209000) 00:01:29.243 libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007fd4a9fb3000) 00:01:29.243 libzstd.so.1 => /usr/lib64/libzstd.so.1 (0x00007fd4a9ef4000) 00:01:29.243 liblzma.so.5 => /usr/lib64/liblzma.so.5 (0x00007fd4a9ec1000) 00:01:29.243 libblkid.so.1 => /usr/lib64/libblkid.so.1 (0x00007fd4a9e85000) 00:01:29.243 libgpg-error.so.0 => /usr/lib64/libgpg-error.so.0 (0x00007fd4a9e5f000) 00:01:29.243 libdevmapper.so.1.02 => /usr/lib64/libdevmapper.so.1.02 (0x00007fd4a9e00000) 00:01:29.243 libargon2.so.1 => /usr/lib64/libargon2.so.1 (0x00007fd4a9df8000) 00:01:29.243 libjson-c.so.5 => /usr/lib64/libjson-c.so.5 (0x00007fd4a9de4000) 00:01:29.243 libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x00007fd4a9dd3000) 00:01:29.243 libcurl.so.4 => /usr/lib64/libcurl.so.4 (0x00007fd4a9d1f000) 00:01:29.243 libthrift-0.15.0.so => /usr/lib64/libthrift-0.15.0.so (0x00007fd4a9c85000) 00:01:29.243 libnghttp2.so.14 => 
/usr/lib64/libnghttp2.so.14 (0x00007fd4a9c58000) 00:01:29.243 libidn2.so.0 => /usr/lib64/libidn2.so.0 (0x00007fd4a9c36000) 00:01:29.243 libssh.so.4 => /usr/lib64/libssh.so.4 (0x00007fd4a9bc3000) 00:01:29.243 libpsl.so.5 => /usr/lib64/libpsl.so.5 (0x00007fd4a9baf000) 00:01:29.243 libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00007fd4a9b59000) 00:01:29.243 libldap.so.2 => /usr/lib64/libldap.so.2 (0x00007fd4a9af2000) 00:01:29.243 liblber.so.2 => /usr/lib64/liblber.so.2 (0x00007fd4a9ae0000) 00:01:29.243 libbrotlidec.so.1 => /usr/lib64/libbrotlidec.so.1 (0x00007fd4a9ad2000) 00:01:29.243 libunistring.so.5 => /usr/lib64/libunistring.so.5 (0x00007fd4a9922000) 00:01:29.243 libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00007fd4a9849000) 00:01:29.243 libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00007fd4a982f000) 00:01:29.243 libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x00007fd4a9828000) 00:01:29.243 libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00007fd4a9818000) 00:01:29.243 libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x00007fd4a9811000) 00:01:29.243 libevent-2.1.so.7 => /usr/lib64/libevent-2.1.so.7 (0x00007fd4a97b9000) 00:01:29.243 libsasl2.so.3 => /usr/lib64/libsasl2.so.3 (0x00007fd4a979a000) 00:01:29.243 libbrotlicommon.so.1 => /usr/lib64/libbrotlicommon.so.1 (0x00007fd4a9775000) 00:01:29.243 libcrypt.so.2 => /usr/lib64/libcrypt.so.2 (0x00007fd4a973c000)' 00:01:29.243 + [[ linux-vdso.so.1 (0x00007ffe3e440000) 00:01:29.243 libpixman-1.so.0 => /usr/lib64/libpixman-1.so.0 (0x00007fd4acac8000) 00:01:29.243 libz.so.1 => /usr/lib64/libz.so.1 (0x00007fd4acaae000) 00:01:29.243 libudev.so.1 => /usr/lib64/libudev.so.1 (0x00007fd4aca77000) 00:01:29.243 libpmem.so.1 => /usr/lib64/libpmem.so.1 (0x00007fd4aca1e000) 00:01:29.243 libdaxctl.so.1 => /usr/lib64/libdaxctl.so.1 (0x00007fd4aca11000) 00:01:29.243 libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x00007fd4aca02000) 00:01:29.243 libgio-2.0.so.0 => /usr/lib64/libgio-2.0.so.0 (0x00007fd4ac828000) 00:01:29.243 libgobject-2.0.so.0 => /usr/lib64/libgobject-2.0.so.0 (0x00007fd4ac7c8000) 00:01:29.243 libglib-2.0.so.0 => /usr/lib64/libglib-2.0.so.0 (0x00007fd4ac67e000) 00:01:29.243 librdmacm.so.1 => /usr/lib64/librdmacm.so.1 (0x00007fd4ac662000) 00:01:29.243 libibverbs.so.1 => /usr/lib64/libibverbs.so.1 (0x00007fd4ac640000) 00:01:29.243 libslirp.so.0 => /usr/lib64/libslirp.so.0 (0x00007fd4ac61e000) 00:01:29.243 libbpf.so.0 => not found 00:01:29.243 libncursesw.so.6 => /usr/lib64/libncursesw.so.6 (0x00007fd4ac5dd000) 00:01:29.243 libtinfo.so.6 => /usr/lib64/libtinfo.so.6 (0x00007fd4ac5a8000) 00:01:29.243 libgmodule-2.0.so.0 => /usr/lib64/libgmodule-2.0.so.0 (0x00007fd4ac5a1000) 00:01:29.243 liburing.so.2 => /usr/lib64/liburing.so.2 (0x00007fd4ac599000) 00:01:29.243 libfuse3.so.3 => /usr/lib64/libfuse3.so.3 (0x00007fd4ac557000) 00:01:29.243 libiscsi.so.9 => /usr/lib64/iscsi/libiscsi.so.9 (0x00007fd4ac527000) 00:01:29.243 libaio.so.1 => /usr/lib64/libaio.so.1 (0x00007fd4ac522000) 00:01:29.243 librbd.so.1 => /usr/lib64/librbd.so.1 (0x00007fd4abc67000) 00:01:29.243 librados.so.2 => /usr/lib64/librados.so.2 (0x00007fd4aba9f000) 00:01:29.243 libm.so.6 => /usr/lib64/libm.so.6 (0x00007fd4ab9be000) 00:01:29.243 libgcc_s.so.1 => /usr/lib64/libgcc_s.so.1 (0x00007fd4ab999000) 00:01:29.243 libc.so.6 => /usr/lib64/libc.so.6 (0x00007fd4ab7b5000) 00:01:29.243 /lib64/ld-linux-x86-64.so.2 (0x00007fd4adc2c000) 00:01:29.243 libcap.so.2 => /usr/lib64/libcap.so.2 (0x00007fd4ab7ab000) 00:01:29.243 libndctl.so.6 => /usr/lib64/libndctl.so.6 (0x00007fd4ab77e000) 
00:01:29.243 libuuid.so.1 => /usr/lib64/libuuid.so.1 (0x00007fd4ab774000) 00:01:29.243 libkmod.so.2 => /usr/lib64/libkmod.so.2 (0x00007fd4ab758000) 00:01:29.243 libmount.so.1 => /usr/lib64/libmount.so.1 (0x00007fd4ab705000) 00:01:29.243 libselinux.so.1 => /usr/lib64/libselinux.so.1 (0x00007fd4ab6d8000) 00:01:29.243 libffi.so.8 => /usr/lib64/libffi.so.8 (0x00007fd4ab6c8000) 00:01:29.243 libpcre2-8.so.0 => /usr/lib64/libpcre2-8.so.0 (0x00007fd4ab62d000) 00:01:29.243 libnl-3.so.200 => /usr/lib64/libnl-3.so.200 (0x00007fd4ab608000) 00:01:29.243 libnl-route-3.so.200 => /usr/lib64/libnl-route-3.so.200 (0x00007fd4ab570000) 00:01:29.243 libgcrypt.so.20 => /usr/lib64/libgcrypt.so.20 (0x00007fd4ab436000) 00:01:29.243 libssl.so.3 => /usr/lib64/libssl.so.3 (0x00007fd4ab393000) 00:01:29.243 libcryptsetup.so.12 => /usr/lib64/libcryptsetup.so.12 (0x00007fd4ab312000) 00:01:29.243 libceph-common.so.2 => /usr/lib64/ceph/libceph-common.so.2 (0x00007fd4aa6e2000) 00:01:29.243 libcrypto.so.3 => /usr/lib64/libcrypto.so.3 (0x00007fd4aa209000) 00:01:29.243 libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007fd4a9fb3000) 00:01:29.243 libzstd.so.1 => /usr/lib64/libzstd.so.1 (0x00007fd4a9ef4000) 00:01:29.243 liblzma.so.5 => /usr/lib64/liblzma.so.5 (0x00007fd4a9ec1000) 00:01:29.243 libblkid.so.1 => /usr/lib64/libblkid.so.1 (0x00007fd4a9e85000) 00:01:29.243 libgpg-error.so.0 => /usr/lib64/libgpg-error.so.0 (0x00007fd4a9e5f000) 00:01:29.243 libdevmapper.so.1.02 => /usr/lib64/libdevmapper.so.1.02 (0x00007fd4a9e00000) 00:01:29.243 libargon2.so.1 => /usr/lib64/libargon2.so.1 (0x00007fd4a9df8000) 00:01:29.243 libjson-c.so.5 => /usr/lib64/libjson-c.so.5 (0x00007fd4a9de4000) 00:01:29.243 libresolv.so.2 => /usr/lib64/libresolv.so.2 (0x00007fd4a9dd3000) 00:01:29.243 libcurl.so.4 => /usr/lib64/libcurl.so.4 (0x00007fd4a9d1f000) 00:01:29.243 libthrift-0.15.0.so => /usr/lib64/libthrift-0.15.0.so (0x00007fd4a9c85000) 00:01:29.243 libnghttp2.so.14 => /usr/lib64/libnghttp2.so.14 (0x00007fd4a9c58000) 00:01:29.243 libidn2.so.0 => /usr/lib64/libidn2.so.0 (0x00007fd4a9c36000) 00:01:29.243 libssh.so.4 => /usr/lib64/libssh.so.4 (0x00007fd4a9bc3000) 00:01:29.243 libpsl.so.5 => /usr/lib64/libpsl.so.5 (0x00007fd4a9baf000) 00:01:29.243 libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x00007fd4a9b59000) 00:01:29.243 libldap.so.2 => /usr/lib64/libldap.so.2 (0x00007fd4a9af2000) 00:01:29.243 liblber.so.2 => /usr/lib64/liblber.so.2 (0x00007fd4a9ae0000) 00:01:29.243 libbrotlidec.so.1 => /usr/lib64/libbrotlidec.so.1 (0x00007fd4a9ad2000) 00:01:29.243 libunistring.so.5 => /usr/lib64/libunistring.so.5 (0x00007fd4a9922000) 00:01:29.243 libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x00007fd4a9849000) 00:01:29.243 libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x00007fd4a982f000) 00:01:29.243 libcom_err.so.2 => /usr/lib64/libcom_err.so.2 (0x00007fd4a9828000) 00:01:29.243 libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x00007fd4a9818000) 00:01:29.243 libkeyutils.so.1 => /usr/lib64/libkeyutils.so.1 (0x00007fd4a9811000) 00:01:29.243 libevent-2.1.so.7 => /usr/lib64/libevent-2.1.so.7 (0x00007fd4a97b9000) 00:01:29.243 libsasl2.so.3 => /usr/lib64/libsasl2.so.3 (0x00007fd4a979a000) 00:01:29.243 libbrotlicommon.so.1 => /usr/lib64/libbrotlicommon.so.1 (0x00007fd4a9775000) 00:01:29.243 libcrypt.so.2 => /usr/lib64/libcrypt.so.2 (0x00007fd4a973c000) == *\n\o\t\ \f\o\u\n\d* ]] 00:01:29.243 + unset -v VFIO_QEMU_BIN 00:01:29.243 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:29.243 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:29.243 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.243 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:29.243 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:29.243 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.243 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:29.243 + spdk/autorun.sh /var/jenkins/workspace/vfio-user-phy-autotest/autorun-spdk.conf 00:01:29.243 Test configuration: 00:01:29.243 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.243 SPDK_TEST_VFIOUSER_QEMU=1 00:01:29.243 SPDK_RUN_ASAN=1 00:01:29.243 SPDK_RUN_UBSAN=1 00:01:29.243 SPDK_TEST_SMA=1 00:01:29.243 RUN_NIGHTLY=0 00:08:59 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:29.243 00:08:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh 00:01:29.243 00:08:59 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:29.243 00:08:59 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:29.243 00:08:59 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:29.243 00:08:59 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:29.243 00:08:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.243 00:08:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.243 00:08:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.243 00:08:59 -- paths/export.sh@5 -- $ export PATH 00:01:29.243 00:08:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:29.243 00:08:59 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output 00:01:29.243 00:08:59 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:29.243 00:08:59 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728425339.XXXXXX 00:01:29.243 00:08:59 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728425339.lkr6aB 00:01:29.243 00:08:59 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 
00:01:29.243 00:08:59 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:29.243 00:08:59 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/' 00:01:29.243 00:08:59 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:29.243 00:08:59 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:29.243 00:08:59 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:29.243 00:08:59 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:29.243 00:08:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.243 00:08:59 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-sma --with-crypto' 00:01:29.243 00:08:59 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:29.243 00:08:59 -- pm/common@17 -- $ local monitor 00:01:29.243 00:08:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.243 00:08:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.243 00:08:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.243 00:08:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.243 00:08:59 -- pm/common@25 -- $ sleep 1 00:01:29.243 00:08:59 -- pm/common@21 -- $ date +%s 00:01:29.243 00:08:59 -- pm/common@21 -- $ date +%s 00:01:29.243 00:08:59 -- pm/common@21 -- $ date +%s 00:01:29.243 00:08:59 -- pm/common@21 -- $ date +%s 00:01:29.243 00:08:59 -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728425339 00:01:29.243 00:08:59 -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728425339 00:01:29.243 00:08:59 -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728425339 00:01:29.243 00:08:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728425339 00:01:29.243 Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728425339_collect-cpu-load.pm.log 00:01:29.243 Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728425339_collect-cpu-temp.pm.log 00:01:29.243 Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728425339_collect-vmstat.pm.log 00:01:29.243 Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728425339_collect-bmc-pm.bmc.pm.log 00:01:30.184 00:09:00 -- 
common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:30.184 00:09:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:30.184 00:09:00 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:30.184 00:09:00 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/vfio-user-phy-autotest/spdk 00:01:30.184 00:09:00 -- spdk/autobuild.sh@16 -- $ date -u 00:01:30.184 Tue Oct 8 10:09:00 PM UTC 2024 00:01:30.184 00:09:00 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:30.184 v25.01-pre-42-g6101e4048 00:01:30.184 00:09:00 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:30.184 00:09:00 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:30.184 00:09:00 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:30.184 00:09:00 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:30.184 00:09:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.184 ************************************ 00:01:30.184 START TEST asan 00:01:30.184 ************************************ 00:01:30.184 00:09:00 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:30.184 using asan 00:01:30.184 00:01:30.184 real 0m0.000s 00:01:30.184 user 0m0.000s 00:01:30.184 sys 0m0.000s 00:01:30.184 00:09:00 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:30.184 00:09:00 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:30.184 ************************************ 00:01:30.184 END TEST asan 00:01:30.184 ************************************ 00:01:30.184 00:09:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:30.184 00:09:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:30.184 00:09:00 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:30.184 00:09:00 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:30.184 00:09:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.445 ************************************ 00:01:30.445 START TEST ubsan 00:01:30.445 ************************************ 00:01:30.445 00:09:00 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:30.445 using ubsan 00:01:30.445 00:01:30.445 real 0m0.000s 00:01:30.445 user 0m0.000s 00:01:30.445 sys 0m0.000s 00:01:30.445 00:09:00 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:30.445 00:09:00 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:30.445 ************************************ 00:01:30.445 END TEST ubsan 00:01:30.445 ************************************ 00:01:30.445 00:09:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:30.445 00:09:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:30.445 00:09:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:30.445 00:09:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:30.445 00:09:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:30.445 00:09:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:30.445 00:09:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:30.445 00:09:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:30.445 00:09:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-sma --with-crypto --with-shared 00:01:30.445 Using default SPDK env in /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk 00:01:30.445 Using default DPDK in 
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build 00:01:30.703 Using 'verbs' RDMA provider 00:01:43.840 Configuring ISA-L (logfile: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:56.062 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:56.062 Creating mk/config.mk...done. 00:01:56.062 Creating mk/cc.flags.mk...done. 00:01:56.062 Type 'make' to build. 00:01:56.062 00:09:25 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:56.062 00:09:25 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:56.062 00:09:25 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:56.062 00:09:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.062 ************************************ 00:01:56.062 START TEST make 00:01:56.062 ************************************ 00:01:56.062 00:09:25 make -- common/autotest_common.sh@1125 -- $ make -j96 00:01:56.062 make[1]: Nothing to be done for 'all'. 00:01:57.454 The Meson build system 00:01:57.454 Version: 1.5.0 00:01:57.454 Source dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user 00:01:57.454 Build dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:57.454 Build type: native build 00:01:57.454 Project name: libvfio-user 00:01:57.454 Project version: 0.0.1 00:01:57.454 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:57.454 C linker for the host machine: cc ld.bfd 2.40-14 00:01:57.454 Host machine cpu family: x86_64 00:01:57.454 Host machine cpu: x86_64 00:01:57.454 Run-time dependency threads found: YES 00:01:57.454 Library dl found: YES 00:01:57.454 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:57.454 Run-time dependency json-c found: YES 0.17 00:01:57.454 Run-time dependency cmocka found: YES 1.1.7 00:01:57.454 Program pytest-3 found: NO 00:01:57.454 Program flake8 found: NO 00:01:57.454 Program misspell-fixer found: NO 00:01:57.454 Program restructuredtext-lint found: NO 00:01:57.454 Program valgrind found: YES (/usr/bin/valgrind) 00:01:57.454 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:57.454 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:57.454 Compiler for C supports arguments -Wwrite-strings: YES 00:01:57.454 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:57.454 Program test-lspci.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:57.454 Program test-linkage.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:57.454 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:57.454 Build targets in project: 8 00:01:57.454 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:57.454 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:57.454 00:01:57.454 libvfio-user 0.0.1 00:01:57.454 00:01:57.454 User defined options 00:01:57.454 buildtype : debug 00:01:57.454 default_library: shared 00:01:57.454 libdir : /usr/local/lib 00:01:57.454 00:01:57.454 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:58.399 ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:58.399 [1/37] Compiling C object samples/null.p/null.c.o 00:01:58.399 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:58.399 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:58.399 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:58.399 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:58.399 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:58.399 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:58.399 [8/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:58.399 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:58.399 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:58.399 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:58.399 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:58.400 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:58.400 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:58.400 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:58.400 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:58.400 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:58.400 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:58.400 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:58.400 [20/37] Compiling C object samples/client.p/client.c.o 00:01:58.400 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:58.400 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:58.400 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:58.400 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:58.400 [25/37] Compiling C object samples/server.p/server.c.o 00:01:58.400 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:58.676 [27/37] Linking target samples/client 00:01:58.676 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:58.676 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:58.676 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:58.676 [31/37] Linking target test/unit_tests 00:01:58.949 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:58.949 [33/37] Linking target samples/server 00:01:58.949 [34/37] Linking target samples/lspci 00:01:58.949 [35/37] Linking target samples/gpio-pci-idio-16 00:01:58.949 [36/37] Linking target samples/null 00:01:58.949 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:58.949 INFO: autodetecting backend as ninja 00:01:58.949 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:59.207 DESTDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:59.773 ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:59.773 ninja: no work to do. 00:02:26.302 The Meson build system 00:02:26.302 Version: 1.5.0 00:02:26.302 Source dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk 00:02:26.302 Build dir: /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp 00:02:26.302 Build type: native build 00:02:26.302 Program cat found: YES (/usr/bin/cat) 00:02:26.302 Project name: DPDK 00:02:26.302 Project version: 24.03.0 00:02:26.302 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:26.302 C linker for the host machine: cc ld.bfd 2.40-14 00:02:26.302 Host machine cpu family: x86_64 00:02:26.302 Host machine cpu: x86_64 00:02:26.302 Message: ## Building in Developer Mode ## 00:02:26.302 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:26.302 Program check-symbols.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:26.302 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:26.302 Program python3 found: YES (/usr/bin/python3) 00:02:26.302 Program cat found: YES (/usr/bin/cat) 00:02:26.302 Compiler for C supports arguments -march=native: YES 00:02:26.302 Checking for size of "void *" : 8 00:02:26.302 Checking for size of "void *" : 8 (cached) 00:02:26.302 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:26.302 Library m found: YES 00:02:26.302 Library numa found: YES 00:02:26.302 Has header "numaif.h" : YES 00:02:26.302 Library fdt found: NO 00:02:26.302 Library execinfo found: NO 00:02:26.302 Has header "execinfo.h" : YES 00:02:26.302 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:26.302 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:26.302 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:26.302 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:26.302 Run-time dependency openssl found: YES 3.1.1 00:02:26.302 Run-time dependency libpcap found: YES 1.10.4 00:02:26.302 Has header "pcap.h" with dependency libpcap: YES 00:02:26.302 Compiler for C supports arguments -Wcast-qual: YES 00:02:26.302 Compiler for C supports arguments -Wdeprecated: YES 00:02:26.302 Compiler for C supports arguments -Wformat: YES 00:02:26.302 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:26.302 Compiler for C supports arguments -Wformat-security: NO 00:02:26.302 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:26.302 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:26.302 Compiler for C supports arguments -Wnested-externs: YES 00:02:26.302 Compiler for C supports arguments -Wold-style-definition: YES 00:02:26.302 Compiler for C supports arguments -Wpointer-arith: YES 00:02:26.302 Compiler for C supports arguments -Wsign-compare: YES 00:02:26.302 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:26.302 Compiler for C supports arguments -Wundef: YES 00:02:26.302 Compiler for C supports arguments -Wwrite-strings: YES 00:02:26.302 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:26.302 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:26.302 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:26.302 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:26.302 Program objdump found: YES (/usr/bin/objdump) 00:02:26.302 Compiler for C supports arguments -mavx512f: YES 00:02:26.302 Checking if "AVX512 checking" compiles: YES 00:02:26.302 Fetching value of define "__SSE4_2__" : 1 00:02:26.302 Fetching value of define "__AES__" : 1 00:02:26.302 Fetching value of define "__AVX__" : 1 00:02:26.302 Fetching value of define "__AVX2__" : 1 00:02:26.302 Fetching value of define "__AVX512BW__" : 1 00:02:26.302 Fetching value of define "__AVX512CD__" : 1 00:02:26.302 Fetching value of define "__AVX512DQ__" : 1 00:02:26.302 Fetching value of define "__AVX512F__" : 1 00:02:26.302 Fetching value of define "__AVX512VL__" : 1 00:02:26.302 Fetching value of define "__PCLMUL__" : 1 00:02:26.302 Fetching value of define "__RDRND__" : 1 00:02:26.302 Fetching value of define "__RDSEED__" : 1 00:02:26.302 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:26.302 Fetching value of define "__znver1__" : (undefined) 00:02:26.302 Fetching value of define "__znver2__" : (undefined) 00:02:26.302 Fetching value of define "__znver3__" : (undefined) 00:02:26.302 Fetching value of define "__znver4__" : (undefined) 00:02:26.302 Library asan found: YES 00:02:26.302 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:26.302 Message: lib/log: Defining dependency "log" 00:02:26.302 Message: lib/kvargs: Defining dependency "kvargs" 00:02:26.302 Message: lib/telemetry: Defining dependency "telemetry" 00:02:26.302 Library rt found: YES 00:02:26.302 Checking for function "getentropy" : NO 00:02:26.302 Message: lib/eal: Defining dependency "eal" 00:02:26.302 Message: lib/ring: Defining dependency "ring" 00:02:26.302 Message: lib/rcu: Defining dependency "rcu" 00:02:26.302 Message: lib/mempool: Defining dependency "mempool" 00:02:26.302 Message: lib/mbuf: Defining dependency "mbuf" 00:02:26.302 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:26.302 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:26.302 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:26.302 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:26.302 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:26.302 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:26.302 Compiler for C supports arguments -mpclmul: YES 00:02:26.302 Compiler for C supports arguments -maes: YES 00:02:26.302 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:26.302 Compiler for C supports arguments -mavx512bw: YES 00:02:26.302 Compiler for C supports arguments -mavx512dq: YES 00:02:26.302 Compiler for C supports arguments -mavx512vl: YES 00:02:26.302 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:26.302 Compiler for C supports arguments -mavx2: YES 00:02:26.302 Compiler for C supports arguments -mavx: YES 00:02:26.302 Message: lib/net: Defining dependency "net" 00:02:26.302 Message: lib/meter: Defining dependency "meter" 00:02:26.302 Message: lib/ethdev: Defining dependency "ethdev" 00:02:26.302 Message: lib/pci: Defining dependency "pci" 00:02:26.302 Message: lib/cmdline: Defining dependency "cmdline" 00:02:26.302 Message: lib/hash: Defining dependency "hash" 00:02:26.302 Message: lib/timer: Defining dependency "timer" 00:02:26.302 Message: lib/compressdev: Defining dependency "compressdev" 00:02:26.302 Message: lib/cryptodev: Defining 
dependency "cryptodev" 00:02:26.302 Message: lib/dmadev: Defining dependency "dmadev" 00:02:26.302 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:26.302 Message: lib/power: Defining dependency "power" 00:02:26.302 Message: lib/reorder: Defining dependency "reorder" 00:02:26.302 Message: lib/security: Defining dependency "security" 00:02:26.302 Has header "linux/userfaultfd.h" : YES 00:02:26.302 Has header "linux/vduse.h" : YES 00:02:26.302 Message: lib/vhost: Defining dependency "vhost" 00:02:26.302 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:26.302 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary" 00:02:26.302 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:26.302 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:26.302 Compiler for C supports arguments -std=c11: YES 00:02:26.302 Compiler for C supports arguments -Wno-strict-prototypes: YES 00:02:26.302 Compiler for C supports arguments -D_BSD_SOURCE: YES 00:02:26.302 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 00:02:26.302 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 00:02:26.302 Run-time dependency libmlx5 found: YES 1.24.51.0 00:02:26.302 Run-time dependency libibverbs found: YES 1.14.51.0 00:02:26.302 Library mtcr_ul found: NO 00:02:26.302 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES 00:02:26.302 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 00:02:26.302 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 00:02:26.303 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 00:02:26.303 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 00:02:26.303 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 00:02:26.303 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 00:02:26.303 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 00:02:26.303 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 00:02:26.303 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 00:02:26.303 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 00:02:26.303 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 00:02:26.303 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 00:02:26.303 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 00:02:26.303 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol 
"MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 00:02:28.835 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with 
dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:02:28.835 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:02:28.836 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:02:28.836 Configuring mlx5_autoconf.h using configuration 00:02:28.836 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:02:28.836 Run-time dependency libcrypto found: YES 3.1.1 00:02:28.836 Library IPSec_MB found: YES 00:02:28.836 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:02:28.836 Message: drivers/common/qat: Defining dependency "common_qat" 00:02:28.836 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:28.836 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:28.836 Library IPSec_MB found: YES 00:02:28.836 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:02:28.836 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:02:28.836 Compiler for C supports arguments -std=c11: YES (cached) 00:02:28.836 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:02:28.836 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:02:28.836 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:02:28.836 
Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:02:28.836 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:02:28.836 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:28.836 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:28.836 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:28.836 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:28.836 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:28.836 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:28.836 Configuring doxy-api-html.conf using configuration 00:02:28.836 Configuring doxy-api-man.conf using configuration 00:02:28.836 Program mandb found: YES (/usr/bin/mandb) 00:02:28.836 Program sphinx-build found: NO 00:02:28.836 Configuring rte_build_config.h using configuration 00:02:28.836 Message: 00:02:28.836 ================= 00:02:28.836 Applications Enabled 00:02:28.836 ================= 00:02:28.836 00:02:28.836 apps: 00:02:28.836 00:02:28.836 00:02:28.836 Message: 00:02:28.836 ================= 00:02:28.836 Libraries Enabled 00:02:28.836 ================= 00:02:28.836 00:02:28.836 libs: 00:02:28.836 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:28.836 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:28.836 cryptodev, dmadev, power, reorder, security, vhost, 00:02:28.836 00:02:28.836 Message: 00:02:28.836 =============== 00:02:28.836 Drivers Enabled 00:02:28.836 =============== 00:02:28.836 00:02:28.836 common: 00:02:28.836 mlx5, qat, 00:02:28.836 bus: 00:02:28.836 auxiliary, pci, vdev, 00:02:28.836 mempool: 00:02:28.836 ring, 00:02:28.836 dma: 00:02:28.836 00:02:28.836 net: 00:02:28.836 00:02:28.836 crypto: 00:02:28.836 ipsec_mb, mlx5, 00:02:28.836 compress: 00:02:28.836 00:02:28.836 vdpa: 00:02:28.836 00:02:28.836 00:02:28.836 Message: 00:02:28.836 ================= 00:02:28.836 Content Skipped 00:02:28.836 ================= 00:02:28.836 00:02:28.836 apps: 00:02:28.836 dumpcap: explicitly disabled via build config 00:02:28.836 graph: explicitly disabled via build config 00:02:28.836 pdump: explicitly disabled via build config 00:02:28.836 proc-info: explicitly disabled via build config 00:02:28.836 test-acl: explicitly disabled via build config 00:02:28.836 test-bbdev: explicitly disabled via build config 00:02:28.836 test-cmdline: explicitly disabled via build config 00:02:28.836 test-compress-perf: explicitly disabled via build config 00:02:28.836 test-crypto-perf: explicitly disabled via build config 00:02:28.836 test-dma-perf: explicitly disabled via build config 00:02:28.836 test-eventdev: explicitly disabled via build config 00:02:28.836 test-fib: explicitly disabled via build config 00:02:28.836 test-flow-perf: explicitly disabled via build config 00:02:28.836 test-gpudev: explicitly disabled via build config 00:02:28.836 test-mldev: explicitly disabled via build config 00:02:28.836 test-pipeline: explicitly disabled via build config 00:02:28.836 test-pmd: explicitly disabled via build config 00:02:28.836 test-regex: explicitly disabled via build config 00:02:28.836 test-sad: explicitly disabled via build config 00:02:28.836 test-security-perf: explicitly disabled via build config 00:02:28.836 00:02:28.836 libs: 00:02:28.836 argparse: explicitly disabled via build config 00:02:28.836 metrics: explicitly disabled via build config 00:02:28.836 acl: explicitly disabled via build config 00:02:28.836 bbdev: 
explicitly disabled via build config 00:02:28.836 bitratestats: explicitly disabled via build config 00:02:28.836 bpf: explicitly disabled via build config 00:02:28.836 cfgfile: explicitly disabled via build config 00:02:28.836 distributor: explicitly disabled via build config 00:02:28.836 efd: explicitly disabled via build config 00:02:28.836 eventdev: explicitly disabled via build config 00:02:28.836 dispatcher: explicitly disabled via build config 00:02:28.836 gpudev: explicitly disabled via build config 00:02:28.836 gro: explicitly disabled via build config 00:02:28.836 gso: explicitly disabled via build config 00:02:28.836 ip_frag: explicitly disabled via build config 00:02:28.836 jobstats: explicitly disabled via build config 00:02:28.836 latencystats: explicitly disabled via build config 00:02:28.836 lpm: explicitly disabled via build config 00:02:28.836 member: explicitly disabled via build config 00:02:28.836 pcapng: explicitly disabled via build config 00:02:28.836 rawdev: explicitly disabled via build config 00:02:28.836 regexdev: explicitly disabled via build config 00:02:28.836 mldev: explicitly disabled via build config 00:02:28.836 rib: explicitly disabled via build config 00:02:28.836 sched: explicitly disabled via build config 00:02:28.836 stack: explicitly disabled via build config 00:02:28.836 ipsec: explicitly disabled via build config 00:02:28.836 pdcp: explicitly disabled via build config 00:02:28.836 fib: explicitly disabled via build config 00:02:28.836 port: explicitly disabled via build config 00:02:28.836 pdump: explicitly disabled via build config 00:02:28.836 table: explicitly disabled via build config 00:02:28.836 pipeline: explicitly disabled via build config 00:02:28.836 graph: explicitly disabled via build config 00:02:28.836 node: explicitly disabled via build config 00:02:28.836 00:02:28.836 drivers: 00:02:28.836 common/cpt: not in enabled drivers build config 00:02:28.836 common/dpaax: not in enabled drivers build config 00:02:28.836 common/iavf: not in enabled drivers build config 00:02:28.836 common/idpf: not in enabled drivers build config 00:02:28.836 common/ionic: not in enabled drivers build config 00:02:28.836 common/mvep: not in enabled drivers build config 00:02:28.836 common/octeontx: not in enabled drivers build config 00:02:28.836 bus/cdx: not in enabled drivers build config 00:02:28.836 bus/dpaa: not in enabled drivers build config 00:02:28.836 bus/fslmc: not in enabled drivers build config 00:02:28.836 bus/ifpga: not in enabled drivers build config 00:02:28.836 bus/platform: not in enabled drivers build config 00:02:28.836 bus/uacce: not in enabled drivers build config 00:02:28.836 bus/vmbus: not in enabled drivers build config 00:02:28.836 common/cnxk: not in enabled drivers build config 00:02:28.836 common/nfp: not in enabled drivers build config 00:02:28.836 common/nitrox: not in enabled drivers build config 00:02:28.836 common/sfc_efx: not in enabled drivers build config 00:02:28.836 mempool/bucket: not in enabled drivers build config 00:02:28.836 mempool/cnxk: not in enabled drivers build config 00:02:28.836 mempool/dpaa: not in enabled drivers build config 00:02:28.836 mempool/dpaa2: not in enabled drivers build config 00:02:28.836 mempool/octeontx: not in enabled drivers build config 00:02:28.836 mempool/stack: not in enabled drivers build config 00:02:28.836 dma/cnxk: not in enabled drivers build config 00:02:28.836 dma/dpaa: not in enabled drivers build config 00:02:28.836 dma/dpaa2: not in enabled drivers build config 00:02:28.836 
dma/hisilicon: not in enabled drivers build config 00:02:28.836 dma/idxd: not in enabled drivers build config 00:02:28.836 dma/ioat: not in enabled drivers build config 00:02:28.836 dma/skeleton: not in enabled drivers build config 00:02:28.836 net/af_packet: not in enabled drivers build config 00:02:28.836 net/af_xdp: not in enabled drivers build config 00:02:28.836 net/ark: not in enabled drivers build config 00:02:28.836 net/atlantic: not in enabled drivers build config 00:02:28.836 net/avp: not in enabled drivers build config 00:02:28.836 net/axgbe: not in enabled drivers build config 00:02:28.836 net/bnx2x: not in enabled drivers build config 00:02:28.836 net/bnxt: not in enabled drivers build config 00:02:28.837 net/bonding: not in enabled drivers build config 00:02:28.837 net/cnxk: not in enabled drivers build config 00:02:28.837 net/cpfl: not in enabled drivers build config 00:02:28.837 net/cxgbe: not in enabled drivers build config 00:02:28.837 net/dpaa: not in enabled drivers build config 00:02:28.837 net/dpaa2: not in enabled drivers build config 00:02:28.837 net/e1000: not in enabled drivers build config 00:02:28.837 net/ena: not in enabled drivers build config 00:02:28.837 net/enetc: not in enabled drivers build config 00:02:28.837 net/enetfec: not in enabled drivers build config 00:02:28.837 net/enic: not in enabled drivers build config 00:02:28.837 net/failsafe: not in enabled drivers build config 00:02:28.837 net/fm10k: not in enabled drivers build config 00:02:28.837 net/gve: not in enabled drivers build config 00:02:28.837 net/hinic: not in enabled drivers build config 00:02:28.837 net/hns3: not in enabled drivers build config 00:02:28.837 net/i40e: not in enabled drivers build config 00:02:28.837 net/iavf: not in enabled drivers build config 00:02:28.837 net/ice: not in enabled drivers build config 00:02:28.837 net/idpf: not in enabled drivers build config 00:02:28.837 net/igc: not in enabled drivers build config 00:02:28.837 net/ionic: not in enabled drivers build config 00:02:28.837 net/ipn3ke: not in enabled drivers build config 00:02:28.837 net/ixgbe: not in enabled drivers build config 00:02:28.837 net/mana: not in enabled drivers build config 00:02:28.837 net/memif: not in enabled drivers build config 00:02:28.837 net/mlx4: not in enabled drivers build config 00:02:28.837 net/mlx5: not in enabled drivers build config 00:02:28.837 net/mvneta: not in enabled drivers build config 00:02:28.837 net/mvpp2: not in enabled drivers build config 00:02:28.837 net/netvsc: not in enabled drivers build config 00:02:28.837 net/nfb: not in enabled drivers build config 00:02:28.837 net/nfp: not in enabled drivers build config 00:02:28.837 net/ngbe: not in enabled drivers build config 00:02:28.837 net/null: not in enabled drivers build config 00:02:28.837 net/octeontx: not in enabled drivers build config 00:02:28.837 net/octeon_ep: not in enabled drivers build config 00:02:28.837 net/pcap: not in enabled drivers build config 00:02:28.837 net/pfe: not in enabled drivers build config 00:02:28.837 net/qede: not in enabled drivers build config 00:02:28.837 net/ring: not in enabled drivers build config 00:02:28.837 net/sfc: not in enabled drivers build config 00:02:28.837 net/softnic: not in enabled drivers build config 00:02:28.837 net/tap: not in enabled drivers build config 00:02:28.837 net/thunderx: not in enabled drivers build config 00:02:28.837 net/txgbe: not in enabled drivers build config 00:02:28.837 net/vdev_netvsc: not in enabled drivers build config 00:02:28.837 net/vhost: 
not in enabled drivers build config 00:02:28.837 net/virtio: not in enabled drivers build config 00:02:28.837 net/vmxnet3: not in enabled drivers build config 00:02:28.837 raw/*: missing internal dependency, "rawdev" 00:02:28.837 crypto/armv8: not in enabled drivers build config 00:02:28.837 crypto/bcmfs: not in enabled drivers build config 00:02:28.837 crypto/caam_jr: not in enabled drivers build config 00:02:28.837 crypto/ccp: not in enabled drivers build config 00:02:28.837 crypto/cnxk: not in enabled drivers build config 00:02:28.837 crypto/dpaa_sec: not in enabled drivers build config 00:02:28.837 crypto/dpaa2_sec: not in enabled drivers build config 00:02:28.837 crypto/mvsam: not in enabled drivers build config 00:02:28.837 crypto/nitrox: not in enabled drivers build config 00:02:28.837 crypto/null: not in enabled drivers build config 00:02:28.837 crypto/octeontx: not in enabled drivers build config 00:02:28.837 crypto/openssl: not in enabled drivers build config 00:02:28.837 crypto/scheduler: not in enabled drivers build config 00:02:28.837 crypto/uadk: not in enabled drivers build config 00:02:28.837 crypto/virtio: not in enabled drivers build config 00:02:28.837 compress/isal: not in enabled drivers build config 00:02:28.837 compress/mlx5: not in enabled drivers build config 00:02:28.837 compress/nitrox: not in enabled drivers build config 00:02:28.837 compress/octeontx: not in enabled drivers build config 00:02:28.837 compress/zlib: not in enabled drivers build config 00:02:28.837 regex/*: missing internal dependency, "regexdev" 00:02:28.837 ml/*: missing internal dependency, "mldev" 00:02:28.837 vdpa/ifc: not in enabled drivers build config 00:02:28.837 vdpa/mlx5: not in enabled drivers build config 00:02:28.837 vdpa/nfp: not in enabled drivers build config 00:02:28.837 vdpa/sfc: not in enabled drivers build config 00:02:28.837 event/*: missing internal dependency, "eventdev" 00:02:28.837 baseband/*: missing internal dependency, "bbdev" 00:02:28.837 gpu/*: missing internal dependency, "gpudev" 00:02:28.837 00:02:28.837 00:02:28.837 Build targets in project: 107 00:02:28.837 00:02:28.837 DPDK 24.03.0 00:02:28.837 00:02:28.837 User defined options 00:02:28.837 buildtype : debug 00:02:28.837 default_library : shared 00:02:28.837 libdir : lib 00:02:28.837 prefix : /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build 00:02:28.837 b_sanitize : address 00:02:28.837 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/vfio-user-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -fPIC -Werror 00:02:28.837 c_link_args : -L/var/jenkins/workspace/vfio-user-phy-autotest/spdk/intel-ipsec-mb/lib 00:02:28.837 cpu_instruction_set: native 00:02:28.837 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:28.837 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:28.837 enable_docs : false 00:02:28.837 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb 00:02:28.837 enable_kmods : false 00:02:28.837 
max_lcores : 128 00:02:28.837 tests : false 00:02:28.837 00:02:28.837 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:29.414 ninja: Entering directory `/var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp' 00:02:29.414 [1/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:29.414 [2/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:29.414 [3/363] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:29.414 [4/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:29.414 [5/363] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:29.414 [6/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:29.414 [7/363] Linking static target lib/librte_kvargs.a 00:02:29.414 [8/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:29.414 [9/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:29.414 [10/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:29.414 [11/363] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:29.414 [12/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:29.414 [13/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:29.414 [14/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:29.414 [15/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:29.674 [16/363] Linking static target lib/librte_log.a 00:02:29.674 [17/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:29.674 [18/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:29.674 [19/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:29.674 [20/363] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:29.674 [21/363] Linking static target lib/librte_pci.a 00:02:29.674 [22/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:29.674 [23/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:29.674 [24/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:29.940 [25/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:29.940 [26/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:29.940 [27/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:29.940 [28/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:29.940 [29/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:29.940 [30/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:29.940 [31/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:29.940 [32/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:29.940 [33/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:29.940 [34/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:29.940 [35/363] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:29.940 [36/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:29.940 [37/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:29.940 [38/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:29.940 [39/363] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:29.940 [40/363] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.940 [41/363] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:29.940 [42/363] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:29.940 [43/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:29.940 [44/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:29.940 [45/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:29.940 [46/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:29.940 [47/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:29.940 [48/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:29.940 [49/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:29.940 [50/363] Linking static target lib/librte_meter.a 00:02:29.940 [51/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:29.940 [52/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:29.940 [53/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:29.940 [54/363] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:29.940 [55/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:29.940 [56/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:30.204 [57/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.204 [58/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:30.204 [59/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:30.204 [60/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:30.204 [61/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:30.204 [62/363] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:30.204 [63/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:30.204 [64/363] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:30.204 [65/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:30.204 [66/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:30.204 [67/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:30.204 [68/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:30.204 [69/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:30.204 [70/363] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:30.204 [71/363] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:30.204 [72/363] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:30.204 [73/363] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:30.204 [74/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:30.204 [75/363] Linking static target lib/librte_telemetry.a 00:02:30.204 [76/363] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:30.204 [77/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:30.204 [78/363] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:30.204 [79/363] Compiling C 
object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:30.204 [80/363] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.204 [81/363] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:30.204 [82/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:30.204 [83/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:30.204 [84/363] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:30.204 [85/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:30.204 [86/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:30.204 [87/363] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:30.204 [88/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:30.204 [89/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:30.204 [90/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:30.204 [91/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:30.204 [92/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:30.204 [93/363] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:30.204 [94/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:30.204 [95/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:30.204 [96/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:30.204 [97/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:30.204 [98/363] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:30.204 [99/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:30.204 [100/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:30.204 [101/363] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:30.204 [102/363] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:30.204 [103/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:30.204 [104/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:30.204 [105/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:30.204 [106/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:02:30.204 [107/363] Linking static target lib/librte_ring.a 00:02:30.204 [108/363] Linking static target lib/librte_cmdline.a 00:02:30.204 [109/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:30.204 [110/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:30.204 [111/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:30.204 [112/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:30.204 [113/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:30.204 [114/363] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:30.204 [115/363] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:30.465 [116/363] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:30.465 [117/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:02:30.465 [118/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:30.465 
[119/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:30.465 [120/363] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:30.465 [121/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:30.465 [122/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:30.465 [123/363] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.465 [124/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:30.465 [125/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:30.465 [126/363] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:30.465 [127/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:30.465 [128/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:30.465 [129/363] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:30.465 [130/363] Linking target lib/librte_log.so.24.1 00:02:30.465 [131/363] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:30.465 [132/363] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.465 [133/363] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:30.465 [134/363] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:30.465 [135/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:30.465 [136/363] Linking static target lib/librte_mempool.a 00:02:30.465 [137/363] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:30.465 [138/363] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:30.465 [139/363] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:30.465 [140/363] Linking static target lib/librte_eal.a 00:02:30.725 [141/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:30.725 [142/363] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:30.725 [143/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:30.725 [144/363] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:30.726 [145/363] Linking static target lib/librte_net.a 00:02:30.726 [146/363] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:30.726 [147/363] Linking static target lib/librte_timer.a 00:02:30.726 [148/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:30.726 [149/363] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:30.726 [150/363] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:30.726 [151/363] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:30.726 [152/363] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:30.726 [153/363] Linking static target lib/librte_rcu.a 00:02:30.726 [154/363] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:30.726 [155/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:02:30.726 [156/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:30.726 [157/363] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:30.726 [158/363] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.726 [159/363] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:30.726 [160/363] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:30.726 [161/363] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:30.726 [162/363] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:30.726 [163/363] Linking static target lib/librte_dmadev.a 00:02:30.726 [164/363] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:30.726 [165/363] Linking target lib/librte_kvargs.so.24.1 00:02:30.726 [166/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:30.988 [167/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:30.988 [168/363] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:30.988 [169/363] Linking static target lib/librte_power.a 00:02:30.988 [170/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:02:30.988 [171/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:02:30.988 [172/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:02:30.988 [173/363] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:30.988 [174/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:02:30.988 [175/363] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:02:30.988 [176/363] Linking static target lib/librte_compressdev.a 00:02:30.988 [177/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:02:30.988 [178/363] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:02:30.988 [179/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o 00:02:30.988 [180/363] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.988 [181/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:02:30.988 [182/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:02:30.988 [183/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:02:30.988 [184/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:02:30.988 [185/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:30.988 [186/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:02:30.988 [187/363] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:30.988 [188/363] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:30.988 [189/363] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:30.988 [190/363] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:30.988 [191/363] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.988 [192/363] Linking target lib/librte_telemetry.so.24.1 00:02:30.988 [193/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:02:30.988 [194/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:30.988 [195/363] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:30.988 [196/363] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:02:30.988 [197/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:30.988 [198/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:02:30.988 [199/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:02:30.988 [200/363] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.989 [201/363] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:30.989 [202/363] Linking static target lib/librte_reorder.a 00:02:31.249 [203/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o 00:02:31.249 [204/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:02:31.249 [205/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:02:31.249 [206/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:02:31.249 [207/363] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:31.249 [208/363] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:31.249 [209/363] Linking static target lib/librte_security.a 00:02:31.249 [210/363] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.249 [211/363] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:31.249 [212/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:02:31.249 [213/363] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:31.249 [214/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o 00:02:31.249 [215/363] Linking static target lib/librte_mbuf.a 00:02:31.249 [216/363] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:31.249 [217/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:02:31.249 [218/363] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:31.249 [219/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o 00:02:31.249 [220/363] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:31.249 [221/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:02:31.249 [222/363] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:31.249 [223/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:02:31.249 [224/363] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:02:31.249 [225/363] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:31.249 [226/363] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:31.249 [227/363] Linking static target drivers/librte_bus_vdev.a 00:02:31.249 [228/363] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:02:31.249 [229/363] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:02:31.249 [230/363] Linking static target drivers/librte_bus_auxiliary.a 00:02:31.249 [231/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:02:31.249 [232/363] Compiling 
C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:02:31.249 [233/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:02:31.249 [234/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o 00:02:31.249 [235/363] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.249 [236/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:02:31.249 [237/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:02:31.249 [238/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:02:31.249 [239/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:02:31.249 [240/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:02:31.249 [241/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:02:31.249 [242/363] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.507 [243/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:02:31.507 [244/363] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.507 [245/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:02:31.507 [246/363] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:31.507 [247/363] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.507 [248/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:31.507 [249/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:02:31.507 [250/363] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.507 [251/363] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:31.507 [252/363] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:31.507 [253/363] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:02:31.507 [254/363] Linking static target drivers/librte_bus_pci.a 00:02:31.507 [255/363] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:02:31.507 [256/363] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:31.507 [257/363] Linking static target lib/librte_hash.a 00:02:31.507 [258/363] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.507 [259/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:31.507 [260/363] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.507 [261/363] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.767 [262/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:02:31.767 [263/363] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:31.767 [264/363] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:31.767 [265/363] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.767 [266/363] Compiling C object 
drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:02:31.767 [267/363] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:02:31.767 [268/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:02:31.767 [269/363] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:02:31.767 [270/363] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:02:31.767 [271/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:02:31.767 [272/363] Linking static target drivers/librte_crypto_mlx5.a 00:02:31.767 [273/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:02:31.767 [274/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:32.028 [275/363] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:32.028 [276/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:02:32.028 [277/363] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.028 [278/363] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.028 [279/363] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.028 [280/363] Linking static target drivers/librte_mempool_ring.a 00:02:32.028 [281/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:02:32.028 [282/363] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:32.028 [283/363] Linking static target lib/librte_cryptodev.a 00:02:32.286 [284/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:02:32.286 [285/363] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.286 [286/363] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:02:32.286 [287/363] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:02:32.286 [288/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:02:32.286 [289/363] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:02:32.286 [290/363] Linking static target drivers/libtmp_rte_common_mlx5.a 00:02:32.286 [291/363] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.553 [292/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:02:32.553 [293/363] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:02:32.553 [294/363] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:32.553 [295/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:02:32.553 [296/363] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:02:32.554 [297/363] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:02:32.554 [298/363] Linking static target lib/librte_ethdev.a 00:02:32.554 [299/363] Linking static target drivers/librte_crypto_ipsec_mb.a 00:02:32.554 [300/363] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:02:32.554 [301/363] Compiling C object 
drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:02:32.554 [302/363] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:02:32.554 [303/363] Linking static target drivers/librte_common_mlx5.a 00:02:33.932 [304/363] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.932 [305/363] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:35.309 [306/363] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:02:35.309 [307/363] Linking static target drivers/libtmp_rte_common_qat.a 00:02:35.309 [308/363] Generating drivers/rte_common_qat.pmd.c with a custom command 00:02:35.568 [309/363] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o 00:02:35.568 [310/363] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:02:35.568 [311/363] Linking static target drivers/librte_common_qat.a 00:02:36.943 [312/363] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:36.943 [313/363] Linking static target lib/librte_vhost.a 00:02:37.511 [314/363] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.885 [315/363] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.262 [316/363] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.262 [317/363] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.521 [318/363] Linking target lib/librte_eal.so.24.1 00:02:40.521 [319/363] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:40.521 [320/363] Linking target lib/librte_pci.so.24.1 00:02:40.521 [321/363] Linking target lib/librte_dmadev.so.24.1 00:02:40.521 [322/363] Linking target drivers/librte_bus_vdev.so.24.1 00:02:40.521 [323/363] Linking target lib/librte_ring.so.24.1 00:02:40.521 [324/363] Linking target lib/librte_meter.so.24.1 00:02:40.521 [325/363] Linking target lib/librte_timer.so.24.1 00:02:40.521 [326/363] Linking target drivers/librte_bus_auxiliary.so.24.1 00:02:40.779 [327/363] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:40.779 [328/363] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:40.779 [329/363] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols 00:02:40.779 [330/363] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:40.779 [331/363] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:40.779 [332/363] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols 00:02:40.779 [333/363] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:40.779 [334/363] Linking target drivers/librte_bus_pci.so.24.1 00:02:40.779 [335/363] Linking target lib/librte_mempool.so.24.1 00:02:40.779 [336/363] Linking target lib/librte_rcu.so.24.1 00:02:40.779 [337/363] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols 00:02:40.779 [338/363] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:40.779 [339/363] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:41.038 [340/363] Linking target 
lib/librte_mbuf.so.24.1 00:02:41.038 [341/363] Linking target drivers/librte_mempool_ring.so.24.1 00:02:41.038 [342/363] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:41.038 [343/363] Linking target lib/librte_compressdev.so.24.1 00:02:41.038 [344/363] Linking target lib/librte_net.so.24.1 00:02:41.038 [345/363] Linking target lib/librte_reorder.so.24.1 00:02:41.038 [346/363] Linking target lib/librte_cryptodev.so.24.1 00:02:41.297 [347/363] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols 00:02:41.297 [348/363] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:41.297 [349/363] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:41.297 [350/363] Linking target lib/librte_security.so.24.1 00:02:41.297 [351/363] Linking target lib/librte_cmdline.so.24.1 00:02:41.297 [352/363] Linking target lib/librte_hash.so.24.1 00:02:41.297 [353/363] Linking target lib/librte_ethdev.so.24.1 00:02:41.556 [354/363] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols 00:02:41.556 [355/363] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:41.556 [356/363] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:41.556 [357/363] Linking target lib/librte_power.so.24.1 00:02:41.556 [358/363] Linking target drivers/librte_common_mlx5.so.24.1 00:02:41.556 [359/363] Linking target lib/librte_vhost.so.24.1 00:02:41.556 [360/363] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols 00:02:41.556 [361/363] Linking target drivers/librte_crypto_ipsec_mb.so.24.1 00:02:41.556 [362/363] Linking target drivers/librte_common_qat.so.24.1 00:02:41.813 [363/363] Linking target drivers/librte_crypto_mlx5.so.24.1 00:02:41.813 INFO: autodetecting backend as ninja 00:02:41.813 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:42.748 CC lib/log/log.o 00:02:42.748 CC lib/log/log_flags.o 00:02:42.748 CC lib/log/log_deprecated.o 00:02:42.748 CC lib/ut_mock/mock.o 00:02:42.748 CC lib/ut/ut.o 00:02:43.007 LIB libspdk_log.a 00:02:43.007 LIB libspdk_ut.a 00:02:43.007 LIB libspdk_ut_mock.a 00:02:43.007 SO libspdk_log.so.7.0 00:02:43.007 SO libspdk_ut.so.2.0 00:02:43.007 SO libspdk_ut_mock.so.6.0 00:02:43.007 SYMLINK libspdk_log.so 00:02:43.007 SYMLINK libspdk_ut.so 00:02:43.007 SYMLINK libspdk_ut_mock.so 00:02:43.265 CXX lib/trace_parser/trace.o 00:02:43.265 CC lib/dma/dma.o 00:02:43.265 CC lib/ioat/ioat.o 00:02:43.265 CC lib/util/base64.o 00:02:43.265 CC lib/util/bit_array.o 00:02:43.265 CC lib/util/cpuset.o 00:02:43.265 CC lib/util/crc16.o 00:02:43.265 CC lib/util/crc32.o 00:02:43.265 CC lib/util/crc32c.o 00:02:43.265 CC lib/util/crc32_ieee.o 00:02:43.265 CC lib/util/crc64.o 00:02:43.265 CC lib/util/dif.o 00:02:43.265 CC lib/util/fd.o 00:02:43.265 CC lib/util/fd_group.o 00:02:43.265 CC lib/util/file.o 00:02:43.265 CC lib/util/hexlify.o 00:02:43.265 CC lib/util/iov.o 00:02:43.265 CC lib/util/net.o 00:02:43.265 CC lib/util/math.o 00:02:43.265 CC lib/util/pipe.o 00:02:43.265 CC lib/util/string.o 00:02:43.265 CC lib/util/strerror_tls.o 00:02:43.265 CC lib/util/uuid.o 00:02:43.265 CC lib/util/xor.o 00:02:43.265 CC lib/util/zipf.o 00:02:43.265 CC lib/util/md5.o 00:02:43.523 CC lib/vfio_user/host/vfio_user_pci.o 00:02:43.523 CC lib/vfio_user/host/vfio_user.o 
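The lib/util objects compiled above (crc16, crc32, crc32c, crc64, dif, and friends) are SPDK's checksum helpers; the storage layers built later in this log lean on them for data-integrity checks. As a reference model only, here is a minimal self-contained sketch of CRC-32C (the Castagnoli polynomial, reflected form 0x82F63B78) in C. SPDK's actual lib/util code is table-driven and hardware-accelerated, so this illustrates the math, not the project's implementation.

    /* Bitwise CRC-32C (Castagnoli). Illustrative only; SPDK's lib/util
     * implements the same function with table/hardware acceleration. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t crc32c(const uint8_t *buf, size_t len, uint32_t crc)
    {
        crc = ~crc;                       /* init value 0xFFFFFFFF */
        while (len--) {
            crc ^= *buf++;
            for (int k = 0; k < 8; k++)   /* reflected polynomial */
                crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
        }
        return ~crc;                      /* final XOR */
    }

    int main(void)
    {
        const char *msg = "123456789";
        /* The standard CRC-32C check value for "123456789" is 0xE3069283. */
        printf("crc32c = 0x%08X\n",
               crc32c((const uint8_t *)msg, strlen(msg), 0));
        return 0;
    }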
00:02:43.523 LIB libspdk_dma.a 00:02:43.523 SO libspdk_dma.so.5.0 00:02:43.523 SYMLINK libspdk_dma.so 00:02:43.523 LIB libspdk_ioat.a 00:02:43.782 SO libspdk_ioat.so.7.0 00:02:43.782 LIB libspdk_vfio_user.a 00:02:43.782 SYMLINK libspdk_ioat.so 00:02:43.782 SO libspdk_vfio_user.so.5.0 00:02:43.782 SYMLINK libspdk_vfio_user.so 00:02:44.040 LIB libspdk_util.a 00:02:44.040 SO libspdk_util.so.10.0 00:02:44.040 LIB libspdk_trace_parser.a 00:02:44.040 SO libspdk_trace_parser.so.6.0 00:02:44.040 SYMLINK libspdk_util.so 00:02:44.298 SYMLINK libspdk_trace_parser.so 00:02:44.298 CC lib/env_dpdk/env.o 00:02:44.299 CC lib/env_dpdk/memory.o 00:02:44.299 CC lib/env_dpdk/pci.o 00:02:44.299 CC lib/env_dpdk/init.o 00:02:44.299 CC lib/env_dpdk/pci_ioat.o 00:02:44.299 CC lib/env_dpdk/threads.o 00:02:44.299 CC lib/env_dpdk/pci_virtio.o 00:02:44.299 CC lib/env_dpdk/pci_vmd.o 00:02:44.299 CC lib/env_dpdk/pci_idxd.o 00:02:44.299 CC lib/env_dpdk/pci_event.o 00:02:44.299 CC lib/env_dpdk/pci_dpdk.o 00:02:44.299 CC lib/env_dpdk/sigbus_handler.o 00:02:44.299 CC lib/rdma_utils/rdma_utils.o 00:02:44.299 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:44.299 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:44.299 CC lib/conf/conf.o 00:02:44.299 CC lib/vmd/vmd.o 00:02:44.299 CC lib/vmd/led.o 00:02:44.299 CC lib/idxd/idxd_user.o 00:02:44.299 CC lib/idxd/idxd.o 00:02:44.299 CC lib/idxd/idxd_kernel.o 00:02:44.299 CC lib/rdma_provider/common.o 00:02:44.299 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:44.299 CC lib/json/json_util.o 00:02:44.299 CC lib/json/json_parse.o 00:02:44.299 CC lib/json/json_write.o 00:02:44.557 LIB libspdk_rdma_provider.a 00:02:44.557 LIB libspdk_conf.a 00:02:44.557 SO libspdk_rdma_provider.so.6.0 00:02:44.557 LIB libspdk_rdma_utils.a 00:02:44.823 SO libspdk_conf.so.6.0 00:02:44.823 SO libspdk_rdma_utils.so.1.0 00:02:44.823 SYMLINK libspdk_rdma_provider.so 00:02:44.823 LIB libspdk_json.a 00:02:44.823 SYMLINK libspdk_conf.so 00:02:44.823 SYMLINK libspdk_rdma_utils.so 00:02:44.824 SO libspdk_json.so.6.0 00:02:44.824 SYMLINK libspdk_json.so 00:02:45.089 LIB libspdk_idxd.a 00:02:45.089 LIB libspdk_vmd.a 00:02:45.089 SO libspdk_idxd.so.12.1 00:02:45.089 SO libspdk_vmd.so.6.0 00:02:45.089 CC lib/jsonrpc/jsonrpc_server.o 00:02:45.089 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:45.089 CC lib/jsonrpc/jsonrpc_client.o 00:02:45.089 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:45.089 SYMLINK libspdk_idxd.so 00:02:45.089 SYMLINK libspdk_vmd.so 00:02:45.348 LIB libspdk_jsonrpc.a 00:02:45.348 SO libspdk_jsonrpc.so.6.0 00:02:45.607 SYMLINK libspdk_jsonrpc.so 00:02:45.866 LIB libspdk_env_dpdk.a 00:02:45.866 CC lib/rpc/rpc.o 00:02:45.866 SO libspdk_env_dpdk.so.15.0 00:02:45.866 SYMLINK libspdk_env_dpdk.so 00:02:45.866 LIB libspdk_rpc.a 00:02:46.125 SO libspdk_rpc.so.6.0 00:02:46.125 SYMLINK libspdk_rpc.so 00:02:46.384 CC lib/keyring/keyring.o 00:02:46.384 CC lib/keyring/keyring_rpc.o 00:02:46.384 CC lib/notify/notify.o 00:02:46.384 CC lib/notify/notify_rpc.o 00:02:46.384 CC lib/trace/trace.o 00:02:46.384 CC lib/trace/trace_flags.o 00:02:46.384 CC lib/trace/trace_rpc.o 00:02:46.643 LIB libspdk_notify.a 00:02:46.643 SO libspdk_notify.so.6.0 00:02:46.643 LIB libspdk_keyring.a 00:02:46.643 LIB libspdk_trace.a 00:02:46.643 SO libspdk_keyring.so.2.0 00:02:46.643 SYMLINK libspdk_notify.so 00:02:46.643 SO libspdk_trace.so.11.0 00:02:46.643 SYMLINK libspdk_keyring.so 00:02:46.643 SYMLINK libspdk_trace.so 00:02:46.912 CC lib/thread/thread.o 00:02:46.912 CC lib/thread/iobuf.o 00:02:46.912 CC lib/sock/sock.o 00:02:46.912 CC lib/sock/sock_rpc.o 
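The jsonrpc, rpc, and sock objects in this stretch form SPDK's management plane: a JSON-RPC 2.0 server that, by default, listens on the Unix-domain socket /var/tmp/spdk.sock. A hedged sketch of a raw client follows; it assumes a running SPDK application on the default socket and that the build exposes the rpc_get_methods method — both assumptions about the target environment, not something this log verifies.

    /* Raw JSON-RPC 2.0 client for SPDK's Unix-domain RPC socket.
     * Assumes /var/tmp/spdk.sock (SPDK's default) and the
     * rpc_get_methods method; adjust both for your environment. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/var/tmp/spdk.sock",
                sizeof(addr.sun_path) - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect"); return 1;
        }

        const char *req =
            "{\"jsonrpc\":\"2.0\",\"method\":\"rpc_get_methods\",\"id\":1}";
        if (write(fd, req, strlen(req)) < 0) { perror("write"); return 1; }

        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf) - 1); /* one read; real clients loop */
        if (n > 0) { buf[n] = '\0'; printf("%s\n", buf); }
        close(fd);
        return 0;
    }

The bundled scripts/rpc.py performs the same framing, which is why the rpc and jsonrpc libraries appear this early in the build.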
00:02:47.486 LIB libspdk_sock.a 00:02:47.486 SO libspdk_sock.so.10.0 00:02:47.486 SYMLINK libspdk_sock.so 00:02:47.744 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:47.744 CC lib/nvme/nvme_ctrlr.o 00:02:47.744 CC lib/nvme/nvme_fabric.o 00:02:47.744 CC lib/nvme/nvme_ns.o 00:02:47.744 CC lib/nvme/nvme_ns_cmd.o 00:02:47.744 CC lib/nvme/nvme_qpair.o 00:02:47.744 CC lib/nvme/nvme_pcie_common.o 00:02:47.744 CC lib/nvme/nvme_pcie.o 00:02:47.744 CC lib/nvme/nvme.o 00:02:47.744 CC lib/nvme/nvme_quirks.o 00:02:47.744 CC lib/nvme/nvme_transport.o 00:02:47.744 CC lib/nvme/nvme_discovery.o 00:02:47.744 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:47.744 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:47.744 CC lib/nvme/nvme_opal.o 00:02:47.744 CC lib/nvme/nvme_tcp.o 00:02:47.744 CC lib/nvme/nvme_io_msg.o 00:02:47.744 CC lib/nvme/nvme_poll_group.o 00:02:47.744 CC lib/nvme/nvme_zns.o 00:02:47.744 CC lib/nvme/nvme_stubs.o 00:02:47.744 CC lib/nvme/nvme_auth.o 00:02:47.744 CC lib/nvme/nvme_cuse.o 00:02:47.744 CC lib/nvme/nvme_vfio_user.o 00:02:47.744 CC lib/nvme/nvme_rdma.o 00:02:48.374 LIB libspdk_thread.a 00:02:48.374 SO libspdk_thread.so.10.2 00:02:48.633 SYMLINK libspdk_thread.so 00:02:48.892 CC lib/blob/blobstore.o 00:02:48.892 CC lib/blob/request.o 00:02:48.892 CC lib/blob/zeroes.o 00:02:48.892 CC lib/blob/blob_bs_dev.o 00:02:48.892 CC lib/vfu_tgt/tgt_endpoint.o 00:02:48.892 CC lib/fsdev/fsdev.o 00:02:48.892 CC lib/vfu_tgt/tgt_rpc.o 00:02:48.892 CC lib/fsdev/fsdev_io.o 00:02:48.892 CC lib/fsdev/fsdev_rpc.o 00:02:48.892 CC lib/accel/accel.o 00:02:48.892 CC lib/accel/accel_sw.o 00:02:48.892 CC lib/init/json_config.o 00:02:48.892 CC lib/accel/accel_rpc.o 00:02:48.892 CC lib/init/subsystem.o 00:02:48.892 CC lib/init/rpc.o 00:02:48.892 CC lib/init/subsystem_rpc.o 00:02:48.892 CC lib/virtio/virtio.o 00:02:48.892 CC lib/virtio/virtio_vhost_user.o 00:02:48.892 CC lib/virtio/virtio_vfio_user.o 00:02:48.892 CC lib/virtio/virtio_pci.o 00:02:49.150 LIB libspdk_init.a 00:02:49.150 SO libspdk_init.so.6.0 00:02:49.150 LIB libspdk_vfu_tgt.a 00:02:49.150 LIB libspdk_virtio.a 00:02:49.150 SO libspdk_vfu_tgt.so.3.0 00:02:49.150 SYMLINK libspdk_init.so 00:02:49.150 SO libspdk_virtio.so.7.0 00:02:49.408 SYMLINK libspdk_vfu_tgt.so 00:02:49.408 SYMLINK libspdk_virtio.so 00:02:49.408 LIB libspdk_fsdev.a 00:02:49.408 SO libspdk_fsdev.so.1.0 00:02:49.408 CC lib/event/app.o 00:02:49.408 CC lib/event/reactor.o 00:02:49.667 CC lib/event/log_rpc.o 00:02:49.667 CC lib/event/app_rpc.o 00:02:49.667 CC lib/event/scheduler_static.o 00:02:49.667 SYMLINK libspdk_fsdev.so 00:02:49.925 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:49.925 LIB libspdk_nvme.a 00:02:49.925 LIB libspdk_accel.a 00:02:49.925 SO libspdk_accel.so.16.0 00:02:49.925 LIB libspdk_event.a 00:02:49.925 SO libspdk_nvme.so.14.0 00:02:49.925 SYMLINK libspdk_accel.so 00:02:49.925 SO libspdk_event.so.15.0 00:02:50.184 SYMLINK libspdk_event.so 00:02:50.184 SYMLINK libspdk_nvme.so 00:02:50.443 CC lib/bdev/bdev_rpc.o 00:02:50.443 CC lib/bdev/bdev.o 00:02:50.443 CC lib/bdev/bdev_zone.o 00:02:50.443 CC lib/bdev/part.o 00:02:50.443 CC lib/bdev/scsi_nvme.o 00:02:50.443 LIB libspdk_fuse_dispatcher.a 00:02:50.443 SO libspdk_fuse_dispatcher.so.1.0 00:02:50.443 SYMLINK libspdk_fuse_dispatcher.so 00:02:51.823 LIB libspdk_blob.a 00:02:51.823 SO libspdk_blob.so.11.0 00:02:51.823 SYMLINK libspdk_blob.so 00:02:52.391 CC lib/lvol/lvol.o 00:02:52.391 CC lib/blobfs/blobfs.o 00:02:52.391 CC lib/blobfs/tree.o 00:02:52.649 LIB libspdk_bdev.a 00:02:52.649 SO libspdk_bdev.so.17.0 00:02:52.907 SYMLINK libspdk_bdev.so 
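Each LIB/SO/SYMLINK triple in this log (libspdk_bdev.a, libspdk_bdev.so.17.0, and libspdk_bdev.so just above) is the conventional versioned shared-object layout: the .so.N.M file carries the SONAME, and the unversioned symlink is what -lspdk_bdev resolves at link time. A small sketch of why the symlink matters at run time, using only dlopen; the library name and version are taken from the log, while the install path is left to the loader's search rules.

    /* Build with: cc demo.c -ldl
     * Either name resolves if the SYMLINK step from the log is installed;
     * binaries linked with -lspdk_log record the versioned SONAME. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *h = dlopen("libspdk_log.so.7.0", RTLD_NOW);  /* SONAME'd file */
        if (!h)
            h = dlopen("libspdk_log.so", RTLD_NOW);        /* dev symlink */
        if (!h) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        printf("loaded SPDK log library\n");
        dlclose(h);
        return 0;
    }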
00:02:52.907 LIB libspdk_blobfs.a
00:02:52.907 SO libspdk_blobfs.so.10.0
00:02:53.166 LIB libspdk_lvol.a
00:02:53.166 SYMLINK libspdk_blobfs.so
00:02:53.166 CC lib/scsi/dev.o
00:02:53.166 CC lib/scsi/lun.o
00:02:53.166 SO libspdk_lvol.so.10.0
00:02:53.166 CC lib/scsi/port.o
00:02:53.166 CC lib/scsi/scsi_bdev.o
00:02:53.166 CC lib/scsi/scsi.o
00:02:53.166 CC lib/ftl/ftl_core.o
00:02:53.166 CC lib/ftl/ftl_init.o
00:02:53.166 CC lib/scsi/scsi_pr.o
00:02:53.166 CC lib/ftl/ftl_layout.o
00:02:53.166 CC lib/scsi/scsi_rpc.o
00:02:53.166 CC lib/nvmf/ctrlr.o
00:02:53.166 CC lib/ftl/ftl_debug.o
00:02:53.166 CC lib/scsi/task.o
00:02:53.166 CC lib/nvmf/ctrlr_bdev.o
00:02:53.166 CC lib/ftl/ftl_io.o
00:02:53.166 CC lib/nvmf/ctrlr_discovery.o
00:02:53.166 CC lib/ftl/ftl_sb.o
00:02:53.166 CC lib/nvmf/subsystem.o
00:02:53.166 CC lib/ftl/ftl_l2p.o
00:02:53.166 CC lib/nvmf/nvmf.o
00:02:53.166 CC lib/nvmf/nvmf_rpc.o
00:02:53.166 CC lib/ftl/ftl_l2p_flat.o
00:02:53.166 CC lib/ftl/ftl_nv_cache.o
00:02:53.166 CC lib/ublk/ublk.o
00:02:53.166 CC lib/nvmf/transport.o
00:02:53.166 CC lib/ublk/ublk_rpc.o
00:02:53.166 CC lib/ftl/ftl_band_ops.o
00:02:53.166 CC lib/nvmf/tcp.o
00:02:53.166 CC lib/ftl/ftl_band.o
00:02:53.166 CC lib/nvmf/stubs.o
00:02:53.166 CC lib/nvmf/mdns_server.o
00:02:53.166 CC lib/ftl/ftl_writer.o
00:02:53.166 CC lib/ftl/ftl_rq.o
00:02:53.166 CC lib/ftl/ftl_l2p_cache.o
00:02:53.166 CC lib/nvmf/rdma.o
00:02:53.166 CC lib/ftl/ftl_reloc.o
00:02:53.166 CC lib/nvmf/vfio_user.o
00:02:53.166 CC lib/nvmf/auth.o
00:02:53.166 CC lib/ftl/ftl_p2l.o
00:02:53.166 CC lib/ftl/ftl_p2l_log.o
00:02:53.166 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:53.166 CC lib/ftl/mngt/ftl_mngt.o
00:02:53.166 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:53.166 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:53.166 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:53.166 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:53.166 CC lib/nbd/nbd.o
00:02:53.166 CC lib/nbd/nbd_rpc.o
00:02:53.167 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:53.167 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:53.167 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:53.167 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:53.167 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:53.167 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:53.167 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:53.167 CC lib/ftl/utils/ftl_conf.o
00:02:53.167 CC lib/ftl/utils/ftl_md.o
00:02:53.167 CC lib/ftl/utils/ftl_bitmap.o
00:02:53.167 CC lib/ftl/utils/ftl_mempool.o
00:02:53.167 CC lib/ftl/utils/ftl_property.o
00:02:53.167 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:53.167 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:53.167 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:53.167 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:53.167 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:53.167 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:53.167 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:53.167 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:53.167 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:53.167 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:53.167 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:53.167 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:02:53.167 CC lib/ftl/base/ftl_base_bdev.o
00:02:53.167 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:02:53.167 CC lib/ftl/base/ftl_base_dev.o
00:02:53.167 CC lib/ftl/ftl_trace.o
00:02:53.167 SYMLINK libspdk_lvol.so
00:02:53.736 LIB libspdk_nbd.a
00:02:53.736 SO libspdk_nbd.so.7.0
00:02:53.736 SYMLINK libspdk_nbd.so
00:02:53.994 LIB libspdk_scsi.a
00:02:53.995 SO libspdk_scsi.so.9.0
00:02:54.257 LIB libspdk_ublk.a
00:02:54.257 SYMLINK libspdk_scsi.so
00:02:54.257 SO libspdk_ublk.so.3.0
00:02:54.257 SYMLINK libspdk_ublk.so
00:02:54.257 LIB libspdk_ftl.a
00:02:54.523 SO libspdk_ftl.so.9.0
00:02:54.523 CC lib/iscsi/init_grp.o
00:02:54.523 CC lib/iscsi/conn.o
00:02:54.523 CC lib/iscsi/iscsi.o
00:02:54.523 CC lib/iscsi/param.o
00:02:54.523 CC lib/iscsi/portal_grp.o
00:02:54.523 CC lib/iscsi/tgt_node.o
00:02:54.523 CC lib/iscsi/iscsi_subsystem.o
00:02:54.523 CC lib/iscsi/iscsi_rpc.o
00:02:54.523 CC lib/iscsi/task.o
00:02:54.523 CC lib/vhost/vhost.o
00:02:54.523 CC lib/vhost/vhost_rpc.o
00:02:54.523 CC lib/vhost/vhost_scsi.o
00:02:54.523 CC lib/vhost/vhost_blk.o
00:02:54.523 CC lib/vhost/rte_vhost_user.o
00:02:54.782 SYMLINK libspdk_ftl.so
00:02:55.359 LIB libspdk_vhost.a
00:02:55.359 SO libspdk_vhost.so.8.0
00:02:55.626 SYMLINK libspdk_vhost.so
00:02:55.626 LIB libspdk_nvmf.a
00:02:55.626 SO libspdk_nvmf.so.19.0
00:02:55.626 LIB libspdk_iscsi.a
00:02:55.885 SO libspdk_iscsi.so.8.0
00:02:55.885 SYMLINK libspdk_nvmf.so
00:02:55.885 SYMLINK libspdk_iscsi.so
00:02:56.451 CC module/env_dpdk/env_dpdk_rpc.o
00:02:56.451 CC module/vfu_device/vfu_virtio.o
00:02:56.451 CC module/vfu_device/vfu_virtio_blk.o
00:02:56.451 CC module/vfu_device/vfu_virtio_rpc.o
00:02:56.451 CC module/vfu_device/vfu_virtio_scsi.o
00:02:56.451 CC module/vfu_device/vfu_virtio_fs.o
00:02:56.451 CC module/accel/ioat/accel_ioat.o
00:02:56.451 CC module/accel/ioat/accel_ioat_rpc.o
00:02:56.451 CC module/accel/iaa/accel_iaa.o
00:02:56.451 CC module/keyring/linux/keyring.o
00:02:56.451 CC module/keyring/linux/keyring_rpc.o
00:02:56.451 CC module/accel/iaa/accel_iaa_rpc.o
00:02:56.451 CC module/blob/bdev/blob_bdev.o
00:02:56.451 CC module/accel/error/accel_error_rpc.o
00:02:56.451 CC module/accel/error/accel_error.o
00:02:56.451 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o
00:02:56.451 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:56.451 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o
00:02:56.451 LIB libspdk_env_dpdk_rpc.a
00:02:56.451 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:56.451 CC module/sock/posix/posix.o
00:02:56.451 CC module/keyring/file/keyring_rpc.o
00:02:56.451 CC module/keyring/file/keyring.o
00:02:56.451 CC module/fsdev/aio/fsdev_aio_rpc.o
00:02:56.451 CC module/fsdev/aio/fsdev_aio.o
00:02:56.451 CC module/fsdev/aio/linux_aio_mgr.o
00:02:56.451 CC module/scheduler/gscheduler/gscheduler.o
00:02:56.451 CC module/accel/dsa/accel_dsa.o
00:02:56.451 CC module/accel/dsa/accel_dsa_rpc.o
00:02:56.716 SO libspdk_env_dpdk_rpc.so.6.0
00:02:56.716 SYMLINK libspdk_env_dpdk_rpc.so
00:02:56.716 LIB libspdk_scheduler_dpdk_governor.a
00:02:56.716 LIB libspdk_keyring_linux.a
00:02:56.716 LIB libspdk_keyring_file.a
00:02:56.716 LIB libspdk_scheduler_gscheduler.a
00:02:56.716 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:56.716 LIB libspdk_accel_ioat.a
00:02:56.716 SO libspdk_keyring_linux.so.1.0
00:02:56.716 LIB libspdk_accel_error.a
00:02:56.716 SO libspdk_scheduler_gscheduler.so.4.0
00:02:56.716 SO libspdk_accel_ioat.so.6.0
00:02:56.716 LIB libspdk_scheduler_dynamic.a
00:02:56.716 SO libspdk_keyring_file.so.2.0
00:02:56.716 LIB libspdk_accel_iaa.a
00:02:56.716 SO libspdk_scheduler_dynamic.so.4.0
00:02:56.716 SO libspdk_accel_error.so.2.0
00:02:56.716 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:56.716 SYMLINK libspdk_keyring_linux.so
00:02:56.716 SYMLINK libspdk_scheduler_gscheduler.so
00:02:56.716 SO libspdk_accel_iaa.so.3.0
00:02:56.982 SYMLINK libspdk_accel_ioat.so
00:02:56.982 SYMLINK libspdk_keyring_file.so
00:02:56.982 SYMLINK libspdk_scheduler_dynamic.so
00:02:56.982 SYMLINK libspdk_accel_error.so
00:02:56.982 LIB libspdk_blob_bdev.a
00:02:56.982 SYMLINK libspdk_accel_iaa.so
00:02:56.982 SO libspdk_blob_bdev.so.11.0
00:02:56.982 LIB libspdk_accel_dsa.a
00:02:56.982 SO libspdk_accel_dsa.so.5.0
00:02:56.982 SYMLINK libspdk_blob_bdev.so
00:02:56.982 SYMLINK libspdk_accel_dsa.so
00:02:57.241 LIB libspdk_vfu_device.a
00:02:57.241 SO libspdk_vfu_device.so.3.0
00:02:57.241 SYMLINK libspdk_vfu_device.so
00:02:57.241 LIB libspdk_fsdev_aio.a
00:02:57.241 SO libspdk_fsdev_aio.so.1.0
00:02:57.241 CC module/bdev/gpt/gpt.o
00:02:57.241 CC module/bdev/gpt/vbdev_gpt.o
00:02:57.500 LIB libspdk_sock_posix.a
00:02:57.500 CC module/bdev/malloc/bdev_malloc.o
00:02:57.500 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:57.500 CC module/bdev/error/vbdev_error_rpc.o
00:02:57.500 CC module/bdev/error/vbdev_error.o
00:02:57.500 CC module/bdev/nvme/bdev_nvme.o
00:02:57.500 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:57.500 CC module/bdev/aio/bdev_aio_rpc.o
00:02:57.500 CC module/bdev/nvme/bdev_mdns_client.o
00:02:57.500 CC module/bdev/aio/bdev_aio.o
00:02:57.500 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:57.500 CC module/bdev/nvme/nvme_rpc.o
00:02:57.500 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:57.500 CC module/bdev/nvme/vbdev_opal.o
00:02:57.500 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:57.500 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:57.500 CC module/bdev/null/bdev_null.o
00:02:57.500 CC module/bdev/split/vbdev_split.o
00:02:57.500 CC module/bdev/null/bdev_null_rpc.o
00:02:57.500 CC module/bdev/split/vbdev_split_rpc.o
00:02:57.500 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:57.500 CC module/bdev/crypto/vbdev_crypto.o
00:02:57.500 CC module/bdev/raid/bdev_raid_rpc.o
00:02:57.500 CC module/bdev/lvol/vbdev_lvol.o
00:02:57.500 CC module/bdev/delay/vbdev_delay.o
00:02:57.500 CC module/bdev/crypto/vbdev_crypto_rpc.o
00:02:57.500 CC module/bdev/raid/bdev_raid.o
00:02:57.500 CC module/bdev/raid/bdev_raid_sb.o
00:02:57.500 CC module/bdev/raid/raid0.o
00:02:57.500 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:57.500 CC module/bdev/raid/concat.o
00:02:57.500 CC module/blobfs/bdev/blobfs_bdev.o
00:02:57.500 CC module/bdev/raid/raid1.o
00:02:57.500 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:57.500 SO libspdk_sock_posix.so.6.0
00:02:57.500 CC module/bdev/iscsi/bdev_iscsi.o
00:02:57.500 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:57.500 CC module/bdev/passthru/vbdev_passthru.o
00:02:57.500 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:57.500 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:57.500 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:57.500 CC module/bdev/ftl/bdev_ftl.o
00:02:57.500 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:57.500 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:57.500 SYMLINK libspdk_fsdev_aio.so
00:02:57.500 SYMLINK libspdk_sock_posix.so
00:02:57.759 LIB libspdk_blobfs_bdev.a
00:02:57.759 LIB libspdk_bdev_split.a
00:02:57.759 SO libspdk_blobfs_bdev.so.6.0
00:02:57.759 SO libspdk_bdev_split.so.6.0
00:02:57.759 LIB libspdk_bdev_gpt.a
00:02:57.759 LIB libspdk_bdev_null.a
00:02:57.759 SYMLINK libspdk_blobfs_bdev.so
00:02:57.759 SO libspdk_bdev_gpt.so.6.0
00:02:57.759 LIB libspdk_bdev_error.a
00:02:57.759 SO libspdk_bdev_null.so.6.0
00:02:57.759 SYMLINK libspdk_bdev_split.so
00:02:57.759 LIB libspdk_bdev_aio.a
00:02:57.759 LIB libspdk_bdev_zone_block.a
00:02:57.759 LIB libspdk_bdev_passthru.a
00:02:57.759 LIB libspdk_bdev_ftl.a
00:02:57.759 SO libspdk_bdev_error.so.6.0
00:02:57.759 LIB libspdk_bdev_malloc.a
00:02:57.759 LIB libspdk_bdev_crypto.a
00:02:57.759 SYMLINK libspdk_bdev_gpt.so
00:02:57.759 SO libspdk_bdev_aio.so.6.0
00:02:57.759 SO libspdk_bdev_zone_block.so.6.0
00:02:57.759 LIB libspdk_bdev_delay.a
00:02:57.759 SO libspdk_bdev_passthru.so.6.0
00:02:57.759 SO libspdk_bdev_ftl.so.6.0
00:02:57.759 SYMLINK libspdk_bdev_null.so
00:02:57.759 SO libspdk_bdev_malloc.so.6.0
00:02:57.759 SO libspdk_bdev_crypto.so.6.0
00:02:58.017 SYMLINK libspdk_bdev_error.so
00:02:58.017 SO libspdk_bdev_delay.so.6.0
00:02:58.017 LIB libspdk_bdev_iscsi.a
00:02:58.017 SYMLINK libspdk_bdev_zone_block.so
00:02:58.017 SYMLINK libspdk_bdev_aio.so
00:02:58.017 SYMLINK libspdk_bdev_passthru.so
00:02:58.017 SO libspdk_bdev_iscsi.so.6.0
00:02:58.017 SYMLINK libspdk_bdev_ftl.so
00:02:58.017 SYMLINK libspdk_bdev_malloc.so
00:02:58.017 SYMLINK libspdk_bdev_crypto.so
00:02:58.017 SYMLINK libspdk_bdev_delay.so
00:02:58.017 LIB libspdk_bdev_lvol.a
00:02:58.017 LIB libspdk_accel_dpdk_cryptodev.a
00:02:58.017 SYMLINK libspdk_bdev_iscsi.so
00:02:58.017 LIB libspdk_bdev_virtio.a
00:02:58.017 SO libspdk_accel_dpdk_cryptodev.so.3.0
00:02:58.017 SO libspdk_bdev_lvol.so.6.0
00:02:58.017 SO libspdk_bdev_virtio.so.6.0
00:02:58.017 SYMLINK libspdk_bdev_lvol.so
00:02:58.017 SYMLINK libspdk_accel_dpdk_cryptodev.so
00:02:58.017 SYMLINK libspdk_bdev_virtio.so
00:02:58.588 LIB libspdk_bdev_raid.a
00:02:58.588 SO libspdk_bdev_raid.so.6.0
00:02:58.588 SYMLINK libspdk_bdev_raid.so
00:02:59.525 LIB libspdk_bdev_nvme.a
00:02:59.784 SO libspdk_bdev_nvme.so.7.0
00:02:59.784 SYMLINK libspdk_bdev_nvme.so
00:03:00.355 CC module/event/subsystems/keyring/keyring.o
00:03:00.355 CC module/event/subsystems/scheduler/scheduler.o
00:03:00.355 CC module/event/subsystems/sock/sock.o
00:03:00.355 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:00.355 CC module/event/subsystems/iobuf/iobuf.o
00:03:00.355 CC module/event/subsystems/vmd/vmd_rpc.o
00:03:00.355 CC module/event/subsystems/vmd/vmd.o
00:03:00.355 CC module/event/subsystems/fsdev/fsdev.o
00:03:00.355 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:00.355 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:03:00.618 LIB libspdk_event_keyring.a
00:03:00.618 LIB libspdk_event_fsdev.a
00:03:00.618 LIB libspdk_event_vhost_blk.a
00:03:00.618 LIB libspdk_event_scheduler.a
00:03:00.618 LIB libspdk_event_sock.a
00:03:00.618 LIB libspdk_event_vmd.a
00:03:00.618 LIB libspdk_event_iobuf.a
00:03:00.618 LIB libspdk_event_vfu_tgt.a
00:03:00.618 SO libspdk_event_fsdev.so.1.0
00:03:00.618 SO libspdk_event_keyring.so.1.0
00:03:00.618 SO libspdk_event_vhost_blk.so.3.0
00:03:00.618 SO libspdk_event_vmd.so.6.0
00:03:00.618 SO libspdk_event_scheduler.so.4.0
00:03:00.618 SO libspdk_event_sock.so.5.0
00:03:00.618 SO libspdk_event_iobuf.so.3.0
00:03:00.618 SO libspdk_event_vfu_tgt.so.3.0
00:03:00.618 SYMLINK libspdk_event_fsdev.so
00:03:00.618 SYMLINK libspdk_event_keyring.so
00:03:00.618 SYMLINK libspdk_event_vhost_blk.so
00:03:00.618 SYMLINK libspdk_event_scheduler.so
00:03:00.618 SYMLINK libspdk_event_sock.so
00:03:00.618 SYMLINK libspdk_event_vmd.so
00:03:00.618 SYMLINK libspdk_event_iobuf.so
00:03:00.618 SYMLINK libspdk_event_vfu_tgt.so
00:03:00.876 CC module/event/subsystems/accel/accel.o
00:03:01.135 LIB libspdk_event_accel.a
00:03:01.135 SO libspdk_event_accel.so.6.0
00:03:01.135 SYMLINK libspdk_event_accel.so
00:03:01.393 CC module/event/subsystems/bdev/bdev.o
00:03:01.652 LIB libspdk_event_bdev.a
00:03:01.652 SO libspdk_event_bdev.so.6.0
00:03:01.652 SYMLINK libspdk_event_bdev.so
00:03:01.911 CC module/event/subsystems/ublk/ublk.o
00:03:02.170 CC module/event/subsystems/scsi/scsi.o
00:03:02.170 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:02.170 CC module/event/subsystems/nbd/nbd.o
00:03:02.170 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:02.170 LIB libspdk_event_ublk.a
00:03:02.170 SO libspdk_event_ublk.so.3.0
00:03:02.170 LIB libspdk_event_nbd.a
00:03:02.170 LIB libspdk_event_scsi.a
00:03:02.170 SYMLINK libspdk_event_ublk.so
00:03:02.170 SO libspdk_event_nbd.so.6.0
00:03:02.170 SO libspdk_event_scsi.so.6.0
00:03:02.170 LIB libspdk_event_nvmf.a
00:03:02.170 SYMLINK libspdk_event_nbd.so
00:03:02.170 SYMLINK libspdk_event_scsi.so
00:03:02.428 SO libspdk_event_nvmf.so.6.0
00:03:02.428 SYMLINK libspdk_event_nvmf.so
00:03:02.686 CC module/event/subsystems/iscsi/iscsi.o
00:03:02.686 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:02.686 LIB libspdk_event_vhost_scsi.a
00:03:02.686 LIB libspdk_event_iscsi.a
00:03:02.686 SO libspdk_event_vhost_scsi.so.3.0
00:03:02.686 SO libspdk_event_iscsi.so.6.0
00:03:02.945 SYMLINK libspdk_event_vhost_scsi.so
00:03:02.945 SYMLINK libspdk_event_iscsi.so
00:03:02.945 SO libspdk.so.6.0
00:03:02.945 SYMLINK libspdk.so
00:03:03.204 CC test/rpc_client/rpc_client_test.o
00:03:03.204 TEST_HEADER include/spdk/assert.h
00:03:03.204 TEST_HEADER include/spdk/accel_module.h
00:03:03.204 TEST_HEADER include/spdk/accel.h
00:03:03.204 TEST_HEADER include/spdk/base64.h
00:03:03.204 TEST_HEADER include/spdk/bdev.h
00:03:03.204 TEST_HEADER include/spdk/barrier.h
00:03:03.204 TEST_HEADER include/spdk/bdev_module.h
00:03:03.204 TEST_HEADER include/spdk/bit_array.h
00:03:03.204 TEST_HEADER include/spdk/bdev_zone.h
00:03:03.204 TEST_HEADER include/spdk/blob_bdev.h
00:03:03.204 TEST_HEADER include/spdk/bit_pool.h
00:03:03.204 TEST_HEADER include/spdk/blobfs.h
00:03:03.204 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:03.204 TEST_HEADER include/spdk/blob.h
00:03:03.204 TEST_HEADER include/spdk/config.h
00:03:03.204 TEST_HEADER include/spdk/conf.h
00:03:03.204 TEST_HEADER include/spdk/cpuset.h
00:03:03.204 TEST_HEADER include/spdk/crc64.h
00:03:03.204 TEST_HEADER include/spdk/crc32.h
00:03:03.204 TEST_HEADER include/spdk/crc16.h
00:03:03.204 TEST_HEADER include/spdk/dif.h
00:03:03.204 TEST_HEADER include/spdk/endian.h
00:03:03.204 TEST_HEADER include/spdk/dma.h
00:03:03.204 TEST_HEADER include/spdk/env_dpdk.h
00:03:03.204 TEST_HEADER include/spdk/env.h
00:03:03.204 TEST_HEADER include/spdk/fd_group.h
00:03:03.204 CXX app/trace/trace.o
00:03:03.204 CC app/spdk_nvme_perf/perf.o
00:03:03.204 TEST_HEADER include/spdk/event.h
00:03:03.204 TEST_HEADER include/spdk/file.h
00:03:03.204 TEST_HEADER include/spdk/fsdev.h
00:03:03.204 TEST_HEADER include/spdk/fd.h
00:03:03.204 TEST_HEADER include/spdk/fsdev_module.h
00:03:03.204 TEST_HEADER include/spdk/ftl.h
00:03:03.204 TEST_HEADER include/spdk/fuse_dispatcher.h
00:03:03.471 CC app/trace_record/trace_record.o
00:03:03.471 TEST_HEADER include/spdk/gpt_spec.h
00:03:03.471 TEST_HEADER include/spdk/hexlify.h
00:03:03.471 TEST_HEADER include/spdk/histogram_data.h
00:03:03.471 TEST_HEADER include/spdk/idxd.h
00:03:03.471 TEST_HEADER include/spdk/init.h
00:03:03.471 TEST_HEADER include/spdk/idxd_spec.h
00:03:03.471 TEST_HEADER include/spdk/ioat.h
00:03:03.471 TEST_HEADER include/spdk/ioat_spec.h
00:03:03.471 TEST_HEADER include/spdk/iscsi_spec.h
00:03:03.471 CC app/spdk_top/spdk_top.o
00:03:03.471 TEST_HEADER include/spdk/json.h
00:03:03.471 TEST_HEADER include/spdk/jsonrpc.h
00:03:03.471 TEST_HEADER include/spdk/keyring.h
00:03:03.471 TEST_HEADER include/spdk/likely.h
00:03:03.471 TEST_HEADER include/spdk/keyring_module.h
00:03:03.471 CC app/spdk_nvme_discover/discovery_aer.o
00:03:03.471 CC app/spdk_nvme_identify/identify.o
00:03:03.471 TEST_HEADER include/spdk/log.h
00:03:03.471 TEST_HEADER include/spdk/lvol.h
00:03:03.471 TEST_HEADER include/spdk/memory.h
00:03:03.471 TEST_HEADER include/spdk/md5.h
00:03:03.471 TEST_HEADER include/spdk/nbd.h
00:03:03.471 TEST_HEADER include/spdk/mmio.h
00:03:03.471 TEST_HEADER include/spdk/net.h
00:03:03.471 TEST_HEADER include/spdk/nvme.h
00:03:03.471 CC app/spdk_lspci/spdk_lspci.o
00:03:03.471 TEST_HEADER include/spdk/notify.h
00:03:03.471 TEST_HEADER include/spdk/nvme_ocssd.h
00:03:03.471 TEST_HEADER include/spdk/nvme_intel.h
00:03:03.471 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:03.471 TEST_HEADER include/spdk/nvme_spec.h
00:03:03.471 TEST_HEADER include/spdk/nvme_zns.h
00:03:03.471 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:03.471 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:03.471 TEST_HEADER include/spdk/nvmf.h
00:03:03.471 TEST_HEADER include/spdk/nvmf_spec.h
00:03:03.471 TEST_HEADER include/spdk/nvmf_transport.h
00:03:03.471 TEST_HEADER include/spdk/opal.h
00:03:03.471 TEST_HEADER include/spdk/opal_spec.h
00:03:03.471 TEST_HEADER include/spdk/pci_ids.h
00:03:03.471 TEST_HEADER include/spdk/queue.h
00:03:03.471 TEST_HEADER include/spdk/pipe.h
00:03:03.471 TEST_HEADER include/spdk/reduce.h
00:03:03.471 TEST_HEADER include/spdk/rpc.h
00:03:03.471 TEST_HEADER include/spdk/scsi_spec.h
00:03:03.471 TEST_HEADER include/spdk/scsi.h
00:03:03.471 TEST_HEADER include/spdk/sock.h
00:03:03.471 TEST_HEADER include/spdk/scheduler.h
00:03:03.471 TEST_HEADER include/spdk/stdinc.h
00:03:03.471 TEST_HEADER include/spdk/string.h
00:03:03.471 TEST_HEADER include/spdk/thread.h
00:03:03.471 TEST_HEADER include/spdk/trace.h
00:03:03.471 TEST_HEADER include/spdk/trace_parser.h
00:03:03.471 TEST_HEADER include/spdk/tree.h
00:03:03.471 TEST_HEADER include/spdk/ublk.h
00:03:03.471 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:03.471 TEST_HEADER include/spdk/uuid.h
00:03:03.471 TEST_HEADER include/spdk/util.h
00:03:03.471 TEST_HEADER include/spdk/version.h
00:03:03.471 TEST_HEADER include/spdk/vfio_user_pci.h
00:03:03.471 TEST_HEADER include/spdk/vfio_user_spec.h
00:03:03.471 TEST_HEADER include/spdk/vmd.h
00:03:03.471 TEST_HEADER include/spdk/vhost.h
00:03:03.471 TEST_HEADER include/spdk/xor.h
00:03:03.471 CC app/spdk_dd/spdk_dd.o
00:03:03.471 TEST_HEADER include/spdk/zipf.h
00:03:03.471 CXX test/cpp_headers/accel.o
00:03:03.471 CXX test/cpp_headers/accel_module.o
00:03:03.471 CXX test/cpp_headers/base64.o
00:03:03.471 CXX test/cpp_headers/assert.o
00:03:03.471 CXX test/cpp_headers/barrier.o
00:03:03.471 CXX test/cpp_headers/bdev_zone.o
00:03:03.471 CXX test/cpp_headers/bdev.o
00:03:03.471 CXX test/cpp_headers/bit_pool.o
00:03:03.471 CXX test/cpp_headers/bdev_module.o
00:03:03.471 CXX test/cpp_headers/blob_bdev.o
00:03:03.471 CXX test/cpp_headers/blobfs_bdev.o
00:03:03.471 CXX test/cpp_headers/bit_array.o
00:03:03.471 CXX test/cpp_headers/blob.o
00:03:03.471 CC app/iscsi_tgt/iscsi_tgt.o
00:03:03.471 CXX test/cpp_headers/blobfs.o
00:03:03.471 CXX test/cpp_headers/conf.o
00:03:03.471 CXX test/cpp_headers/crc16.o
00:03:03.471 CXX test/cpp_headers/cpuset.o
00:03:03.471 CXX test/cpp_headers/config.o
00:03:03.471 CXX test/cpp_headers/crc32.o
00:03:03.471 CXX test/cpp_headers/dma.o
00:03:03.471 CXX test/cpp_headers/dif.o
00:03:03.471 CXX test/cpp_headers/env_dpdk.o
00:03:03.471 CXX test/cpp_headers/endian.o
00:03:03.471 CXX test/cpp_headers/crc64.o
00:03:03.471 CXX test/cpp_headers/env.o
00:03:03.471 CXX test/cpp_headers/fd.o
00:03:03.471 CXX test/cpp_headers/file.o
00:03:03.471 CXX test/cpp_headers/event.o
00:03:03.471 CXX test/cpp_headers/fd_group.o
00:03:03.471 CXX test/cpp_headers/fsdev.o
00:03:03.471 CXX test/cpp_headers/fsdev_module.o
00:03:03.471 CXX test/cpp_headers/gpt_spec.o
00:03:03.471 CXX test/cpp_headers/fuse_dispatcher.o
00:03:03.471 CXX test/cpp_headers/ftl.o
00:03:03.471 CXX test/cpp_headers/histogram_data.o
00:03:03.471 CXX test/cpp_headers/idxd.o
00:03:03.471 CXX test/cpp_headers/idxd_spec.o
00:03:03.471 CXX test/cpp_headers/hexlify.o
00:03:03.471 CXX test/cpp_headers/init.o
00:03:03.471 CXX test/cpp_headers/ioat.o
00:03:03.471 CXX test/cpp_headers/iscsi_spec.o
00:03:03.471 CXX test/cpp_headers/json.o
00:03:03.471 CXX test/cpp_headers/ioat_spec.o
00:03:03.471 CXX test/cpp_headers/jsonrpc.o
00:03:03.471 CXX test/cpp_headers/keyring_module.o
00:03:03.471 CXX test/cpp_headers/likely.o
00:03:03.471 CXX test/cpp_headers/keyring.o
00:03:03.471 CXX test/cpp_headers/log.o
00:03:03.471 CXX test/cpp_headers/md5.o
00:03:03.471 CXX test/cpp_headers/mmio.o
00:03:03.471 CXX test/cpp_headers/lvol.o
00:03:03.471 CXX test/cpp_headers/memory.o
00:03:03.471 CXX test/cpp_headers/nbd.o
00:03:03.471 CXX test/cpp_headers/net.o
00:03:03.471 CXX test/cpp_headers/notify.o
00:03:03.471 CXX test/cpp_headers/nvme_intel.o
00:03:03.471 CXX test/cpp_headers/nvme.o
00:03:03.471 CXX test/cpp_headers/nvme_ocssd.o
00:03:03.471 CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:03.471 CXX test/cpp_headers/nvme_zns.o
00:03:03.471 CXX test/cpp_headers/nvme_spec.o
00:03:03.471 CXX test/cpp_headers/nvmf_fc_spec.o
00:03:03.471 CXX test/cpp_headers/nvmf_cmd.o
00:03:03.471 CXX test/cpp_headers/nvmf.o
00:03:03.471 CXX test/cpp_headers/nvmf_spec.o
00:03:03.471 CXX test/cpp_headers/nvmf_transport.o
00:03:03.471 CC test/env/memory/memory_ut.o
00:03:03.471 CC app/spdk_tgt/spdk_tgt.o
00:03:03.471 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:03.471 CC test/thread/poller_perf/poller_perf.o
00:03:03.471 CC test/env/vtophys/vtophys.o
00:03:03.471 CC test/app/histogram_perf/histogram_perf.o
00:03:03.471 CC test/env/pci/pci_ut.o
00:03:03.471 CC test/app/stub/stub.o
00:03:03.471 CXX test/cpp_headers/opal.o
00:03:03.471 CC test/dma/test_dma/test_dma.o
00:03:03.471 CXX test/cpp_headers/opal_spec.o
00:03:03.471 CC test/app/jsoncat/jsoncat.o
00:03:03.471 CC examples/ioat/perf/perf.o
00:03:03.471 CC examples/ioat/verify/verify.o
00:03:03.471 CC test/app/bdev_svc/bdev_svc.o
00:03:03.471 CC app/fio/nvme/fio_plugin.o
00:03:03.743 CC examples/util/zipf/zipf.o
00:03:03.743 LINK spdk_lspci
00:03:03.743 CC app/fio/bdev/fio_plugin.o
00:03:03.743 LINK rpc_client_test
00:03:04.003 CC test/env/mem_callbacks/mem_callbacks.o
00:03:04.003 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:04.003 LINK interrupt_tgt
00:03:04.003 LINK histogram_perf
00:03:04.003 LINK jsoncat
00:03:04.003 CXX test/cpp_headers/pci_ids.o
00:03:04.003 CXX test/cpp_headers/pipe.o
00:03:04.003 CXX test/cpp_headers/queue.o
00:03:04.003 CXX test/cpp_headers/rpc.o
00:03:04.003 CXX test/cpp_headers/reduce.o
00:03:04.003 CXX test/cpp_headers/scheduler.o
00:03:04.003 LINK stub
00:03:04.003 CXX test/cpp_headers/scsi.o
00:03:04.003 CXX test/cpp_headers/scsi_spec.o
00:03:04.003 CXX test/cpp_headers/sock.o
00:03:04.003 CXX test/cpp_headers/stdinc.o
00:03:04.003 CXX test/cpp_headers/string.o
00:03:04.003 CXX test/cpp_headers/thread.o
00:03:04.003 CXX test/cpp_headers/trace.o
00:03:04.003 CXX test/cpp_headers/trace_parser.o
00:03:04.003 CXX test/cpp_headers/tree.o
00:03:04.003 CXX test/cpp_headers/ublk.o
00:03:04.003 CXX test/cpp_headers/util.o
00:03:04.003 CXX test/cpp_headers/uuid.o
00:03:04.003 CXX test/cpp_headers/version.o
00:03:04.003 LINK spdk_nvme_discover
00:03:04.003 CXX test/cpp_headers/vfio_user_pci.o
00:03:04.003 CXX test/cpp_headers/vfio_user_spec.o
00:03:04.003 CXX test/cpp_headers/vhost.o
00:03:04.003 LINK zipf
00:03:04.003 CXX test/cpp_headers/vmd.o
00:03:04.003 CXX test/cpp_headers/xor.o
00:03:04.003 LINK vtophys
00:03:04.003 CXX test/cpp_headers/zipf.o
00:03:04.003 LINK poller_perf
00:03:04.261 LINK spdk_tgt
00:03:04.261 LINK nvmf_tgt
00:03:04.261 LINK env_dpdk_post_init
00:03:04.261 LINK iscsi_tgt
00:03:04.261 LINK verify
00:03:04.261 LINK bdev_svc
00:03:04.261 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:04.261 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:04.261 LINK spdk_trace_record
00:03:04.261 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:04.261 LINK ioat_perf
00:03:04.518 LINK spdk_dd
00:03:04.518 LINK pci_ut
00:03:04.518 LINK spdk_trace
00:03:04.518 LINK test_dma
00:03:04.518 CC examples/sock/hello_world/hello_sock.o
00:03:04.518 CC examples/idxd/perf/perf.o
00:03:04.518 CC examples/vmd/led/led.o
00:03:04.518 CC examples/vmd/lsvmd/lsvmd.o
00:03:04.518 LINK nvme_fuzz
00:03:04.518 LINK mem_callbacks
00:03:04.776 LINK spdk_bdev
00:03:04.776 CC examples/thread/thread/thread_ex.o
00:03:04.776 CC test/event/reactor_perf/reactor_perf.o
00:03:04.776 CC test/event/reactor/reactor.o
00:03:04.776 CC test/event/event_perf/event_perf.o
00:03:04.776 CC test/event/app_repeat/app_repeat.o
00:03:04.776 CC test/event/scheduler/scheduler.o
00:03:04.776 LINK lsvmd
00:03:04.776 LINK spdk_nvme_perf
00:03:04.776 LINK led
00:03:04.776 LINK vhost_fuzz
00:03:04.776 LINK reactor_perf
00:03:04.776 LINK spdk_nvme
00:03:04.776 LINK reactor
00:03:04.776 CC app/vhost/vhost.o
00:03:04.776 LINK event_perf
00:03:04.776 LINK hello_sock
00:03:05.034 LINK app_repeat
00:03:05.034 LINK thread
00:03:05.034 LINK spdk_top
00:03:05.034 LINK idxd_perf
00:03:05.034 LINK scheduler
00:03:05.034 LINK spdk_nvme_identify
00:03:05.034 CC test/nvme/e2edp/nvme_dp.o
00:03:05.034 CC test/nvme/sgl/sgl.o
00:03:05.034 CC test/nvme/overhead/overhead.o
00:03:05.034 CC test/nvme/err_injection/err_injection.o
00:03:05.034 CC test/nvme/connect_stress/connect_stress.o
00:03:05.034 CC test/nvme/startup/startup.o
00:03:05.034 CC test/nvme/boot_partition/boot_partition.o
00:03:05.034 CC test/nvme/reserve/reserve.o
00:03:05.034 CC test/nvme/compliance/nvme_compliance.o
00:03:05.034 CC test/nvme/reset/reset.o
00:03:05.034 CC test/nvme/simple_copy/simple_copy.o
00:03:05.034 CC test/nvme/aer/aer.o
00:03:05.034 CC test/nvme/fdp/fdp.o
00:03:05.034 CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:05.034 CC test/nvme/cuse/cuse.o
00:03:05.034 CC test/nvme/fused_ordering/fused_ordering.o
00:03:05.034 CC test/accel/dif/dif.o
00:03:05.034 LINK vhost
00:03:05.034 CC test/blobfs/mkfs/mkfs.o
00:03:05.034 CC test/lvol/esnap/esnap.o
00:03:05.292 LINK boot_partition
00:03:05.292 LINK startup
00:03:05.292 LINK memory_ut
00:03:05.292 LINK connect_stress
00:03:05.292 LINK err_injection
00:03:05.292 LINK doorbell_aers
00:03:05.292 LINK reserve
00:03:05.292 CC examples/nvme/reconnect/reconnect.o
00:03:05.292 LINK fused_ordering
00:03:05.292 LINK mkfs
00:03:05.292 CC examples/nvme/abort/abort.o
00:03:05.292 CC examples/nvme/hotplug/hotplug.o
00:03:05.292 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:05.292 CC examples/nvme/cmb_copy/cmb_copy.o
00:03:05.292 CC examples/nvme/hello_world/hello_world.o
00:03:05.292 CC examples/nvme/arbitration/arbitration.o
00:03:05.292 LINK nvme_dp
00:03:05.292 CC examples/nvme/nvme_manage/nvme_manage.o
00:03:05.292 LINK sgl
00:03:05.292 LINK simple_copy
00:03:05.292 LINK reset
00:03:05.292 LINK overhead
00:03:05.292 LINK aer
00:03:05.550 CC examples/accel/perf/accel_perf.o
00:03:05.550 LINK nvme_compliance
00:03:05.550 CC examples/fsdev/hello_world/hello_fsdev.o
00:03:05.550 LINK fdp
00:03:05.550 CC examples/blob/cli/blobcli.o
00:03:05.550 CC examples/blob/hello_world/hello_blob.o
00:03:05.550 LINK pmr_persistence
00:03:05.550 LINK cmb_copy
00:03:05.550 LINK hotplug
00:03:05.550 LINK hello_world
00:03:05.809 LINK reconnect
00:03:05.809 LINK arbitration
00:03:05.809 LINK abort
00:03:05.809 LINK hello_blob
00:03:05.809 LINK hello_fsdev
00:03:05.809 LINK nvme_manage
00:03:05.809 LINK dif
00:03:06.067 LINK accel_perf
00:03:06.067 LINK blobcli
00:03:06.067 LINK iscsi_fuzz
00:03:06.325 LINK cuse
00:03:06.325 CC test/bdev/bdevio/bdevio.o
00:03:06.583 CC examples/bdev/hello_world/hello_bdev.o
00:03:06.583 CC examples/bdev/bdevperf/bdevperf.o
00:03:06.583 LINK hello_bdev
00:03:06.841 LINK bdevio
00:03:07.100 LINK bdevperf
00:03:07.668 CC examples/nvmf/nvmf/nvmf.o
00:03:07.926 LINK nvmf
00:03:09.847 LINK esnap
00:03:10.431
00:03:10.431 real 1m14.828s
00:03:10.431 user 16m50.659s
00:03:10.431 sys 4m28.917s
00:03:10.431 00:10:40 make -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:10.431 00:10:40 make -- common/autotest_common.sh@10 -- $ set +x
00:03:10.431 ************************************
00:03:10.431 END TEST make
00:03:10.431 ************************************
00:03:10.431 00:10:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:03:10.431 00:10:40 -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:10.431 00:10:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:10.431 00:10:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:10.431 00:10:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:03:10.431 00:10:40 -- pm/common@44 -- $ pid=1909595
00:03:10.431 00:10:40 -- pm/common@50 -- $ kill -TERM 1909595
00:03:10.431 00:10:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:10.431 00:10:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:03:10.431 00:10:40 -- pm/common@44 -- $ pid=1909596
00:03:10.431 00:10:40 -- pm/common@50 -- $ kill -TERM 1909596
00:03:10.431 00:10:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:10.431 00:10:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:03:10.431 00:10:40 -- pm/common@44 -- $ pid=1909598
00:03:10.431 00:10:40 -- pm/common@50 -- $ kill -TERM 1909598
00:03:10.431 00:10:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:10.431 00:10:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:03:10.431 00:10:40 -- pm/common@44 -- $ pid=1909623
00:03:10.431 00:10:40 -- pm/common@50 -- $ sudo -E kill -TERM 1909623
00:03:10.432 00:10:40 -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:03:10.432 00:10:40 -- common/autotest_common.sh@1681 -- # lcov --version
00:03:10.432 00:10:40 -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:03:10.432 00:10:40 -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:03:10.432 00:10:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:10.432 00:10:40 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:10.432 00:10:40 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:10.432 00:10:40 -- scripts/common.sh@336 -- # IFS=.-:
00:03:10.432 00:10:40 -- scripts/common.sh@336 -- # read -ra ver1
00:03:10.432 00:10:40 -- scripts/common.sh@337 -- # IFS=.-:
00:03:10.432 00:10:40 -- scripts/common.sh@337 -- # read -ra ver2
00:03:10.432 00:10:40 -- scripts/common.sh@338 -- # local 'op=<'
00:03:10.432 00:10:40 -- scripts/common.sh@340 -- # ver1_l=2
00:03:10.432 00:10:40 -- scripts/common.sh@341 -- # ver2_l=1
00:03:10.432 00:10:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:10.432 00:10:40 -- scripts/common.sh@344 -- # case "$op" in
00:03:10.432 00:10:40 -- scripts/common.sh@345 -- # : 1
00:03:10.432 00:10:40 -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:10.432 00:10:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:10.432 00:10:40 -- scripts/common.sh@365 -- # decimal 1
00:03:10.432 00:10:40 -- scripts/common.sh@353 -- # local d=1
00:03:10.432 00:10:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:10.432 00:10:40 -- scripts/common.sh@355 -- # echo 1
00:03:10.432 00:10:40 -- scripts/common.sh@365 -- # ver1[v]=1
00:03:10.432 00:10:40 -- scripts/common.sh@366 -- # decimal 2
00:03:10.432 00:10:40 -- scripts/common.sh@353 -- # local d=2
00:03:10.432 00:10:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:10.432 00:10:40 -- scripts/common.sh@355 -- # echo 2
00:03:10.432 00:10:40 -- scripts/common.sh@366 -- # ver2[v]=2
00:03:10.432 00:10:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:10.432 00:10:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:10.432 00:10:40 -- scripts/common.sh@368 -- # return 0
00:03:10.432 00:10:40 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:10.432 00:10:40 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:03:10.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:10.432 --rc genhtml_branch_coverage=1
00:03:10.432 --rc genhtml_function_coverage=1
00:03:10.432 --rc genhtml_legend=1
00:03:10.432 --rc geninfo_all_blocks=1
00:03:10.432 --rc geninfo_unexecuted_blocks=1
00:03:10.432 00:03:10.432 '
00:03:10.432 00:10:40 -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:03:10.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:10.432 --rc genhtml_branch_coverage=1
00:03:10.432 --rc genhtml_function_coverage=1
00:03:10.432 --rc genhtml_legend=1
00:03:10.432 --rc geninfo_all_blocks=1
00:03:10.432 --rc geninfo_unexecuted_blocks=1
00:03:10.432 00:03:10.432 '
00:03:10.432 00:10:40 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:03:10.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:10.432 --rc genhtml_branch_coverage=1
00:03:10.432 --rc genhtml_function_coverage=1
00:03:10.432 --rc genhtml_legend=1
00:03:10.432 --rc geninfo_all_blocks=1
00:03:10.432 --rc geninfo_unexecuted_blocks=1
00:03:10.432 00:03:10.432 '
00:03:10.432 00:10:40 -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:03:10.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:10.432 --rc genhtml_branch_coverage=1
00:03:10.432 --rc genhtml_function_coverage=1
00:03:10.433 --rc genhtml_legend=1
00:03:10.433 --rc geninfo_all_blocks=1
00:03:10.433 --rc geninfo_unexecuted_blocks=1
00:03:10.433 00:03:10.433 '
00:03:10.434 00:10:40 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh
00:03:10.434 00:10:40 -- nvmf/common.sh@7 -- # uname -s
00:03:10.434 00:10:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:10.434 00:10:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:10.434 00:10:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:10.434 00:10:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:10.434 00:10:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:10.434 00:10:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:10.434 00:10:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:10.434 00:10:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:10.434 00:10:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:10.434 00:10:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:10.434 00:10:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:03:10.434 00:10:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:03:10.434 00:10:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:10.434 00:10:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:10.434 00:10:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:03:10.434 00:10:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:10.435 00:10:41 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh
00:03:10.435 00:10:41 -- scripts/common.sh@15 -- # shopt -s extglob
00:03:10.435 00:10:41 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:10.435 00:10:41 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:10.435 00:10:41 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:10.436 00:10:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:10.436 00:10:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:10.436 00:10:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:10.436 00:10:41 -- paths/export.sh@5 -- # export PATH
00:03:10.436 00:10:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:10.436 00:10:41 -- nvmf/common.sh@51 -- # : 0
00:03:10.436 00:10:41 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:03:10.436 00:10:41 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:03:10.436 00:10:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:10.436 00:10:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:10.436 00:10:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:10.436 00:10:41 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:03:10.436 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:03:10.436 00:10:41 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:03:10.436 00:10:41 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:03:10.436 00:10:41 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:03:10.436 00:10:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:03:10.436 00:10:41 -- spdk/autotest.sh@32 -- # uname -s
00:03:10.436 00:10:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:03:10.436 00:10:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:03:10.436 00:10:41 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/coredumps
00:03:10.437 00:10:41 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:03:10.437 00:10:41 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/coredumps
00:03:10.437 00:10:41 -- spdk/autotest.sh@44 -- # modprobe nbd
00:03:10.437 00:10:41 -- spdk/autotest.sh@46 -- # type -P udevadm
00:03:10.437 00:10:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:03:10.437 00:10:41 -- spdk/autotest.sh@48 -- # udevadm_pid=1978853
00:03:10.437 00:10:41 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:03:10.437 00:10:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:03:10.437 00:10:41 -- pm/common@17 -- # local monitor
00:03:10.437 00:10:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:10.437 00:10:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:10.437 00:10:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:10.437 00:10:41 -- pm/common@21 -- # date +%s
00:03:10.437 00:10:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:10.437 00:10:41 -- pm/common@21 -- # date +%s
00:03:10.437 00:10:41 -- pm/common@21 -- # date +%s
00:03:10.437 00:10:41 -- pm/common@25 -- # sleep 1
00:03:10.437 00:10:41 -- pm/common@21 -- # date +%s
00:03:10.437 00:10:41 -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728425441
00:03:10.437 00:10:41 -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728425441
00:03:10.437 00:10:41 -- pm/common@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728425441
00:03:10.437 00:10:41 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728425441
00:03:10.701 Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728425441_collect-cpu-temp.pm.log
00:03:10.701 Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728425441_collect-cpu-load.pm.log
00:03:10.702 Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728425441_collect-vmstat.pm.log
00:03:10.702 Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728425441_collect-bmc-pm.bmc.pm.log
00:03:11.639 00:10:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:03:11.639 00:10:42 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:03:11.639 00:10:42 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:11.639 00:10:42 -- common/autotest_common.sh@10 -- # set +x
00:03:11.639 00:10:42 -- spdk/autotest.sh@59 -- # create_test_list
00:03:11.639 00:10:42 -- common/autotest_common.sh@748 -- # xtrace_disable
00:03:11.639 00:10:42 -- common/autotest_common.sh@10 -- # set +x
00:03:11.639 00:10:42 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/autotest.sh
00:03:11.639 00:10:42 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:03:11.639 00:10:42 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:03:11.639 00:10:42 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output
00:03:11.639 00:10:42 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/vfio-user-phy-autotest/spdk
00:03:11.639 00:10:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:03:11.639 00:10:42 -- common/autotest_common.sh@1455 -- # uname
00:03:11.639 00:10:42 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:03:11.639 00:10:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:03:11.639 00:10:42 -- common/autotest_common.sh@1475 -- # uname
00:03:11.639 00:10:42 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:03:11.639 00:10:42 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:03:11.639 00:10:42 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:03:11.639 lcov: LCOV version 1.15
00:03:11.639 00:10:42 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_base.info
00:03:29.831 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:29.831 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:37.947 00:11:07 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:37.947 00:11:07 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:37.947 00:11:07 -- common/autotest_common.sh@10 -- # set +x
00:03:37.947 00:11:07 -- spdk/autotest.sh@78 -- # rm -f
00:03:37.947 00:11:07 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:03:38.881 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:03:38.881 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:38.881 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:38.881 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:38.881 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:38.881 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:38.881 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:38.881 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:38.881 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:38.881 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:38.881 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:39.138 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:39.138 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:39.138 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:39.138 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:39.138 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:39.139 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:39.139 00:11:09 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:39.139 00:11:09 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:03:39.139 00:11:09 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:03:39.139 00:11:09 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:03:39.139 00:11:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:03:39.139 00:11:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:03:39.139 00:11:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:03:39.139 00:11:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:39.139 00:11:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:03:39.139 00:11:09 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:39.139 00:11:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:39.139 00:11:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:39.139 00:11:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:39.139 00:11:09 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:39.139 00:11:09 -- scripts/common.sh@390 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:39.139 No valid GPT data, bailing
00:03:39.397 00:11:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:39.397 00:11:09 -- scripts/common.sh@394 -- # pt=
00:03:39.397 00:11:09 -- scripts/common.sh@395 -- # return 1
00:03:39.397 00:11:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:39.397 1+0 records in
00:03:39.397 1+0 records out
00:03:39.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00543728 s, 193 MB/s
00:03:39.397 00:11:09 -- spdk/autotest.sh@105 -- # sync
00:03:39.397 00:11:09 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:39.397 00:11:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:39.397 00:11:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:44.672 00:11:15 -- spdk/autotest.sh@111 -- # uname -s
00:03:44.672 00:11:15 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:44.672 00:11:15 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:44.672 00:11:15 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh status
00:03:47.203 Hugepages
00:03:47.203 node hugesize free / total
00:03:47.203 node0 1048576kB 0 / 0
00:03:47.203 node0 2048kB 0 / 0
00:03:47.203 node1 1048576kB 0 / 0
00:03:47.203 node1 2048kB 0 / 0
00:03:47.203
00:03:47.203 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:47.203 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:47.203 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:47.203 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:47.203 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:47.203 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:47.203 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:47.203 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:47.203 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:47.203 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:47.203 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:47.203 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:47.203 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:47.203 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:47.203 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:47.203 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:47.203 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:47.203 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:47.203 00:11:17 -- spdk/autotest.sh@117 -- # uname -s
00:03:47.203 00:11:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:47.203 00:11:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:47.203 00:11:17 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:03:49.736 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:49.736 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:49.736 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:49.736 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:49.736 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:49.736 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:49.736 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:49.736 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:49.736 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:49.736 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:49.994 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:49.994 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:49.994 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:49.994 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:49.994 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:49.994 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:50.931 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:50.931 00:11:21 -- common/autotest_common.sh@1515 -- # sleep 1
00:03:51.866 00:11:22 -- common/autotest_common.sh@1516 -- # bdfs=()
00:03:51.866 00:11:22 -- common/autotest_common.sh@1516 -- # local bdfs
00:03:51.866 00:11:22 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs))
00:03:51.866 00:11:22 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs
00:03:51.866 00:11:22 -- common/autotest_common.sh@1496 -- # bdfs=()
00:03:51.866 00:11:22 -- common/autotest_common.sh@1496 -- # local bdfs
00:03:51.866 00:11:22 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:51.866 00:11:22 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:51.866 00:11:22 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:03:51.866 00:11:22 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:03:51.866 00:11:22 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0
00:03:51.866 00:11:22 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset
00:03:54.413 Waiting for block devices as requested
00:03:54.413 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:03:54.413 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:54.413 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:54.413 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:54.413 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:54.413 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:54.672 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:54.672 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:54.672 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:54.672 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:54.929 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:54.929 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:54.929 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:55.187 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:55.187 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:55.187 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:55.445 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:55.445 00:11:25 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:03:55.445 00:11:25 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:03:55.445 00:11:25 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0
00:03:55.445 00:11:25 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme
00:03:55.445 00:11:25 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:03:55.445 00:11:25 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:03:55.445 00:11:25 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:03:55.445 00:11:25 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:03:55.445 00:11:25 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:03:55.445 00:11:25 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:03:55.445 00:11:25 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:03:55.445 00:11:25 -- common/autotest_common.sh@1529 -- # grep oacs
00:03:55.445 00:11:25 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:03:55.445 00:11:25 -- common/autotest_common.sh@1529 -- # oacs=' 0xf'
00:03:55.445 00:11:25 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:03:55.445 00:11:25 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:03:55.445 00:11:25 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:03:55.445 00:11:25 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:03:55.445 00:11:25 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:03:55.445 00:11:25 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:03:55.445 00:11:25 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:03:55.445 00:11:25 -- common/autotest_common.sh@1541 -- # continue
00:03:55.445 00:11:25 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:03:55.445 00:11:25 -- common/autotest_common.sh@730 -- # xtrace_disable
00:03:55.445 00:11:25 -- common/autotest_common.sh@10 -- # set +x
00:03:55.446 00:11:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:03:55.446 00:11:26 -- common/autotest_common.sh@724 -- # xtrace_disable
00:03:55.446 00:11:26 -- common/autotest_common.sh@10 -- # set +x
00:03:55.446 00:11:26 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh
00:03:57.975 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:58.233 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:59.168 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:59.168 00:11:29 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:03:59.168 00:11:29 -- common/autotest_common.sh@730 -- # xtrace_disable
00:03:59.168 00:11:29 -- common/autotest_common.sh@10 -- # set +x
00:03:59.168 00:11:29 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:59.168 00:11:29 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs
00:03:59.168 00:11:29 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54
00:03:59.168 00:11:29 -- common/autotest_common.sh@1561 -- # bdfs=()
00:03:59.168 00:11:29 -- common/autotest_common.sh@1561 -- # _bdfs=()
00:03:59.168 00:11:29 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs
00:03:59.168 00:11:29 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs))
00:03:59.168 00:11:29 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs
00:03:59.168 00:11:29 -- common/autotest_common.sh@1496 -- # bdfs=()
00:03:59.168 00:11:29 -- common/autotest_common.sh@1496 -- # local bdfs
00:03:59.168 00:11:29 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:59.168 00:11:29 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:03:59.168 00:11:29 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:59.168 00:11:29 -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:03:59.168 00:11:29 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0
00:03:59.168 00:11:29 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:03:59.426 00:11:29 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device
00:03:59.426 00:11:29 -- common/autotest_common.sh@1564 -- # device=0x0a54
00:03:59.426 00:11:29 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:03:59.426 00:11:29 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf)
00:03:59.426 00:11:29 -- common/autotest_common.sh@1570 -- # (( 1 > 0 ))
00:03:59.426 00:11:29 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0
00:03:59.426 00:11:29 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]]
00:03:59.426 00:11:29 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1992940
00:03:59.426 00:11:29 -- common/autotest_common.sh@1583 -- # waitforlisten 1992940
00:03:59.426 00:11:29 -- common/autotest_common.sh@831 -- # '[' -z 1992940 ']'
00:03:59.426 00:11:29 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:59.426 00:11:29 -- common/autotest_common.sh@836 -- # local max_retries=100
00:03:59.426 00:11:29 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:59.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:59.426 00:11:29 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt
00:03:59.426 00:11:29 -- common/autotest_common.sh@840 -- # xtrace_disable
00:03:59.426 00:11:29 -- common/autotest_common.sh@10 -- # set +x
00:03:59.426 [2024-10-09 00:11:29.900858] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
00:03:59.426 [2024-10-09 00:11:29.900963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1992940 ]
00:03:59.426 [2024-10-09 00:11:30.006189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:59.684 [2024-10-09 00:11:30.211240] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:04:00.619 00:11:30 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:00.619 00:11:30 -- common/autotest_common.sh@864 -- # return 0
00:04:00.619 00:11:30 -- common/autotest_common.sh@1585 -- # bdf_id=0
00:04:00.619 00:11:30 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}"
00:04:00.619 00:11:30 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:04:03.898 nvme0n1
00:04:03.898 00:11:34 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:04:03.898 [2024-10-09 00:11:34.218801] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:04:03.898 [2024-10-09 00:11:34.218844] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:04:03.898 request:
00:04:03.898 {
00:04:03.898 "nvme_ctrlr_name": "nvme0",
00:04:03.898 "password": "test",
00:04:03.898 "method": "bdev_nvme_opal_revert",
00:04:03.898 "req_id": 1
00:04:03.898 }
00:04:03.898 Got JSON-RPC error response
00:04:03.898 response:
00:04:03.898 {
00:04:03.898 "code": -32603,
00:04:03.898 "message": "Internal error"
00:04:03.898 }
00:04:03.898 00:11:34 -- common/autotest_common.sh@1589 -- # true
00:04:03.898 00:11:34 -- common/autotest_common.sh@1590 -- # (( ++bdf_id ))
00:04:03.898 00:11:34 -- common/autotest_common.sh@1593 -- # killprocess 1992940
00:04:03.898 00:11:34 -- common/autotest_common.sh@950 -- # '[' -z 1992940 ']'
00:04:03.898 00:11:34 -- common/autotest_common.sh@954 -- # kill -0 1992940
00:04:03.898 00:11:34 -- common/autotest_common.sh@955 -- # uname
00:04:03.898 00:11:34 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:03.898 00:11:34 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1992940
00:04:03.899 00:11:34 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:03.899 00:11:34 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:03.899 00:11:34 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1992940'
00:04:03.899 killing process with pid 1992940
00:04:03.899 00:11:34 -- common/autotest_common.sh@969 -- # kill 1992940
00:04:03.899 00:11:34 -- common/autotest_common.sh@974 -- # wait 1992940
00:04:08.096 00:11:37 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:04:08.096 00:11:37 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:04:08.096 00:11:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:08.096 00:11:37 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:08.096 00:11:37 -- spdk/autotest.sh@149 -- # timing_enter lib
00:04:08.096 00:11:37 -- common/autotest_common.sh@724 -- # xtrace_disable
00:04:08.096 00:11:37 -- common/autotest_common.sh@10 -- # set +x
00:04:08.096 00:11:37 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:04:08.096 00:11:37 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env.sh
00:04:08.096 00:11:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:08.096 00:11:37 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:08.096 00:11:37 -- common/autotest_common.sh@10 -- # set +x
00:04:08.096 ************************************
00:04:08.096 START TEST env
00:04:08.096 ************************************
00:04:08.096 00:11:37 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env.sh
00:04:08.096 * Looking for test storage...
00:04:08.096 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env
00:04:08.096 00:11:38 env -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:04:08.096 00:11:38 env -- common/autotest_common.sh@1681 -- # lcov --version
00:04:08.096 00:11:38 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:04:08.096 00:11:38 env -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:04:08.096 00:11:38 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:08.096 00:11:38 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:08.096 00:11:38 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:08.096 00:11:38 env -- scripts/common.sh@336 -- # IFS=.-:
00:04:08.096 00:11:38 env -- scripts/common.sh@336 -- # read -ra ver1
00:04:08.096 00:11:38 env -- scripts/common.sh@337 -- # IFS=.-:
00:04:08.096 00:11:38 env -- scripts/common.sh@337 -- # read -ra ver2
00:04:08.096 00:11:38 env -- scripts/common.sh@338 -- # local 'op=<'
00:04:08.096 00:11:38 env -- scripts/common.sh@340 -- # ver1_l=2
00:04:08.096 00:11:38 env -- scripts/common.sh@341 -- # ver2_l=1
00:04:08.096 00:11:38 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:08.096 00:11:38 env -- scripts/common.sh@344 -- # case "$op" in
00:04:08.096 00:11:38 env -- scripts/common.sh@345 -- # : 1
00:04:08.096 00:11:38 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:08.096 00:11:38 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:08.096 00:11:38 env -- scripts/common.sh@365 -- # decimal 1
00:04:08.096 00:11:38 env -- scripts/common.sh@353 -- # local d=1
00:04:08.096 00:11:38 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:08.096 00:11:38 env -- scripts/common.sh@355 -- # echo 1
00:04:08.096 00:11:38 env -- scripts/common.sh@365 -- # ver1[v]=1
00:04:08.096 00:11:38 env -- scripts/common.sh@366 -- # decimal 2
00:04:08.096 00:11:38 env -- scripts/common.sh@353 -- # local d=2
00:04:08.096 00:11:38 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:08.096 00:11:38 env -- scripts/common.sh@355 -- # echo 2
00:04:08.096 00:11:38 env -- scripts/common.sh@366 -- # ver2[v]=2
00:04:08.096 00:11:38 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:08.096 00:11:38 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:08.096 00:11:38 env -- scripts/common.sh@368 -- # return 0
00:04:08.096 00:11:38 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:08.096 00:11:38 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:04:08.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:08.096 --rc genhtml_branch_coverage=1
00:04:08.096 --rc genhtml_function_coverage=1
00:04:08.096 --rc genhtml_legend=1
00:04:08.096 --rc geninfo_all_blocks=1
00:04:08.096 --rc geninfo_unexecuted_blocks=1
00:04:08.096 00:04:08.096 '
00:04:08.096 00:11:38 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:04:08.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:08.096 --rc genhtml_branch_coverage=1
00:04:08.096 --rc genhtml_function_coverage=1
00:04:08.096 --rc genhtml_legend=1
00:04:08.096 --rc geninfo_all_blocks=1
00:04:08.096 --rc geninfo_unexecuted_blocks=1
00:04:08.096 00:04:08.096 '
00:04:08.096 00:11:38 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:04:08.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:08.096 --rc genhtml_branch_coverage=1
00:04:08.096 --rc genhtml_function_coverage=1
00:04:08.096 --rc genhtml_legend=1
00:04:08.096 --rc geninfo_all_blocks=1
00:04:08.096 --rc geninfo_unexecuted_blocks=1
00:04:08.096 00:04:08.096 '
00:04:08.096 00:11:38 env -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:04:08.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:08.097 --rc genhtml_branch_coverage=1
00:04:08.097 --rc genhtml_function_coverage=1
00:04:08.097 --rc genhtml_legend=1
00:04:08.097 --rc geninfo_all_blocks=1
00:04:08.097 --rc geninfo_unexecuted_blocks=1
00:04:08.097 00:04:08.097 '
00:04:08.097 00:11:38 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/memory/memory_ut
00:04:08.097 00:11:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:08.097 00:11:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:08.097 00:11:38 env -- common/autotest_common.sh@10 -- # set +x
00:04:08.097 ************************************
00:04:08.097 START TEST env_memory
00:04:08.097 ************************************
00:04:08.097 00:11:38 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/memory/memory_ut
00:04:08.097
00:04:08.097
00:04:08.097 CUnit - A unit testing framework for C - Version 2.1-3
00:04:08.097 http://cunit.sourceforge.net/
00:04:08.097
00:04:08.097
00:04:08.097 Suite: memory
00:04:08.097 Test: alloc and free memory map ...[2024-10-09 00:11:38.222186]
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:08.097 passed 00:04:08.097 Test: mem map translation ...[2024-10-09 00:11:38.261858] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:08.097 [2024-10-09 00:11:38.261881] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:08.097 [2024-10-09 00:11:38.261928] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:08.097 [2024-10-09 00:11:38.261940] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:08.097 passed 00:04:08.097 Test: mem map registration ...[2024-10-09 00:11:38.323681] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:08.097 [2024-10-09 00:11:38.323704] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:08.097 passed 00:04:08.097 Test: mem map adjacent registrations ...passed 00:04:08.097 00:04:08.097 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.097 suites 1 1 n/a 0 0 00:04:08.097 tests 4 4 4 0 0 00:04:08.097 asserts 152 152 152 0 n/a 00:04:08.097 00:04:08.097 Elapsed time = 0.228 seconds 00:04:08.097 00:04:08.097 real 0m0.263s 00:04:08.097 user 0m0.244s 00:04:08.097 sys 0m0.017s 00:04:08.097 00:11:38 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.097 00:11:38 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:08.097 ************************************ 00:04:08.097 END TEST env_memory 00:04:08.097 ************************************ 00:04:08.097 00:11:38 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:08.097 00:11:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.097 00:11:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.097 00:11:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.097 ************************************ 00:04:08.097 START TEST env_vtophys 00:04:08.097 ************************************ 00:04:08.097 00:11:38 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:08.097 EAL: lib.eal log level changed from notice to debug 00:04:08.097 EAL: Detected lcore 0 as core 0 on socket 0 00:04:08.097 EAL: Detected lcore 1 as core 1 on socket 0 00:04:08.097 EAL: Detected lcore 2 as core 2 on socket 0 00:04:08.097 EAL: Detected lcore 3 as core 3 on socket 0 00:04:08.097 EAL: Detected lcore 4 as core 4 on socket 0 00:04:08.097 EAL: Detected lcore 5 as core 5 on socket 0 00:04:08.097 EAL: Detected lcore 6 as core 6 on socket 0 00:04:08.097 EAL: Detected lcore 7 as core 8 on socket 0 00:04:08.097 EAL: Detected lcore 8 as core 9 on socket 0 00:04:08.097 EAL: Detected lcore 9 as core 10 on socket 0 00:04:08.097 EAL: Detected 
lcore 10 as core 11 on socket 0 00:04:08.097 EAL: Detected lcore 11 as core 12 on socket 0 00:04:08.097 EAL: Detected lcore 12 as core 13 on socket 0 00:04:08.097 EAL: Detected lcore 13 as core 16 on socket 0 00:04:08.097 EAL: Detected lcore 14 as core 17 on socket 0 00:04:08.097 EAL: Detected lcore 15 as core 18 on socket 0 00:04:08.097 EAL: Detected lcore 16 as core 19 on socket 0 00:04:08.097 EAL: Detected lcore 17 as core 20 on socket 0 00:04:08.097 EAL: Detected lcore 18 as core 21 on socket 0 00:04:08.097 EAL: Detected lcore 19 as core 25 on socket 0 00:04:08.097 EAL: Detected lcore 20 as core 26 on socket 0 00:04:08.097 EAL: Detected lcore 21 as core 27 on socket 0 00:04:08.097 EAL: Detected lcore 22 as core 28 on socket 0 00:04:08.097 EAL: Detected lcore 23 as core 29 on socket 0 00:04:08.097 EAL: Detected lcore 24 as core 0 on socket 1 00:04:08.097 EAL: Detected lcore 25 as core 1 on socket 1 00:04:08.097 EAL: Detected lcore 26 as core 2 on socket 1 00:04:08.097 EAL: Detected lcore 27 as core 3 on socket 1 00:04:08.097 EAL: Detected lcore 28 as core 4 on socket 1 00:04:08.097 EAL: Detected lcore 29 as core 5 on socket 1 00:04:08.097 EAL: Detected lcore 30 as core 6 on socket 1 00:04:08.097 EAL: Detected lcore 31 as core 8 on socket 1 00:04:08.097 EAL: Detected lcore 32 as core 9 on socket 1 00:04:08.097 EAL: Detected lcore 33 as core 10 on socket 1 00:04:08.097 EAL: Detected lcore 34 as core 11 on socket 1 00:04:08.097 EAL: Detected lcore 35 as core 12 on socket 1 00:04:08.097 EAL: Detected lcore 36 as core 13 on socket 1 00:04:08.097 EAL: Detected lcore 37 as core 16 on socket 1 00:04:08.097 EAL: Detected lcore 38 as core 17 on socket 1 00:04:08.097 EAL: Detected lcore 39 as core 18 on socket 1 00:04:08.097 EAL: Detected lcore 40 as core 19 on socket 1 00:04:08.097 EAL: Detected lcore 41 as core 20 on socket 1 00:04:08.097 EAL: Detected lcore 42 as core 21 on socket 1 00:04:08.097 EAL: Detected lcore 43 as core 25 on socket 1 00:04:08.097 EAL: Detected lcore 44 as core 26 on socket 1 00:04:08.097 EAL: Detected lcore 45 as core 27 on socket 1 00:04:08.097 EAL: Detected lcore 46 as core 28 on socket 1 00:04:08.097 EAL: Detected lcore 47 as core 29 on socket 1 00:04:08.097 EAL: Detected lcore 48 as core 0 on socket 0 00:04:08.097 EAL: Detected lcore 49 as core 1 on socket 0 00:04:08.097 EAL: Detected lcore 50 as core 2 on socket 0 00:04:08.097 EAL: Detected lcore 51 as core 3 on socket 0 00:04:08.097 EAL: Detected lcore 52 as core 4 on socket 0 00:04:08.097 EAL: Detected lcore 53 as core 5 on socket 0 00:04:08.097 EAL: Detected lcore 54 as core 6 on socket 0 00:04:08.097 EAL: Detected lcore 55 as core 8 on socket 0 00:04:08.097 EAL: Detected lcore 56 as core 9 on socket 0 00:04:08.097 EAL: Detected lcore 57 as core 10 on socket 0 00:04:08.097 EAL: Detected lcore 58 as core 11 on socket 0 00:04:08.097 EAL: Detected lcore 59 as core 12 on socket 0 00:04:08.097 EAL: Detected lcore 60 as core 13 on socket 0 00:04:08.097 EAL: Detected lcore 61 as core 16 on socket 0 00:04:08.097 EAL: Detected lcore 62 as core 17 on socket 0 00:04:08.097 EAL: Detected lcore 63 as core 18 on socket 0 00:04:08.097 EAL: Detected lcore 64 as core 19 on socket 0 00:04:08.097 EAL: Detected lcore 65 as core 20 on socket 0 00:04:08.097 EAL: Detected lcore 66 as core 21 on socket 0 00:04:08.097 EAL: Detected lcore 67 as core 25 on socket 0 00:04:08.097 EAL: Detected lcore 68 as core 26 on socket 0 00:04:08.097 EAL: Detected lcore 69 as core 27 on socket 0 00:04:08.097 EAL: Detected lcore 70 as core 28 on socket 0 
00:04:08.097 EAL: Detected lcore 71 as core 29 on socket 0 00:04:08.097 EAL: Detected lcore 72 as core 0 on socket 1 00:04:08.097 EAL: Detected lcore 73 as core 1 on socket 1 00:04:08.097 EAL: Detected lcore 74 as core 2 on socket 1 00:04:08.097 EAL: Detected lcore 75 as core 3 on socket 1 00:04:08.097 EAL: Detected lcore 76 as core 4 on socket 1 00:04:08.097 EAL: Detected lcore 77 as core 5 on socket 1 00:04:08.097 EAL: Detected lcore 78 as core 6 on socket 1 00:04:08.097 EAL: Detected lcore 79 as core 8 on socket 1 00:04:08.097 EAL: Detected lcore 80 as core 9 on socket 1 00:04:08.097 EAL: Detected lcore 81 as core 10 on socket 1 00:04:08.097 EAL: Detected lcore 82 as core 11 on socket 1 00:04:08.097 EAL: Detected lcore 83 as core 12 on socket 1 00:04:08.097 EAL: Detected lcore 84 as core 13 on socket 1 00:04:08.097 EAL: Detected lcore 85 as core 16 on socket 1 00:04:08.097 EAL: Detected lcore 86 as core 17 on socket 1 00:04:08.097 EAL: Detected lcore 87 as core 18 on socket 1 00:04:08.097 EAL: Detected lcore 88 as core 19 on socket 1 00:04:08.097 EAL: Detected lcore 89 as core 20 on socket 1 00:04:08.097 EAL: Detected lcore 90 as core 21 on socket 1 00:04:08.097 EAL: Detected lcore 91 as core 25 on socket 1 00:04:08.097 EAL: Detected lcore 92 as core 26 on socket 1 00:04:08.097 EAL: Detected lcore 93 as core 27 on socket 1 00:04:08.097 EAL: Detected lcore 94 as core 28 on socket 1 00:04:08.097 EAL: Detected lcore 95 as core 29 on socket 1 00:04:08.097 EAL: Maximum logical cores by configuration: 128 00:04:08.097 EAL: Detected CPU lcores: 96 00:04:08.097 EAL: Detected NUMA nodes: 2 00:04:08.097 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:08.097 EAL: Detected shared linkage of DPDK 00:04:08.097 EAL: No shared files mode enabled, IPC will be disabled 00:04:08.097 EAL: No shared files mode enabled, IPC is disabled 00:04:08.097 EAL: Bus pci wants IOVA as 'DC' 00:04:08.097 EAL: Bus auxiliary wants IOVA as 'DC' 00:04:08.097 EAL: Bus vdev wants IOVA as 'DC' 00:04:08.098 EAL: Buses did not request a specific IOVA mode. 00:04:08.098 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:08.098 EAL: Selected IOVA mode 'VA' 00:04:08.098 EAL: Probing VFIO support... 00:04:08.098 EAL: IOMMU type 1 (Type 1) is supported 00:04:08.098 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:08.098 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:08.098 EAL: VFIO support initialized 00:04:08.098 EAL: Ask a virtual area of 0x2e000 bytes 00:04:08.098 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:08.098 EAL: Setting up physically contiguous memory... 
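The VFIO probe above is the gate for the rest of this run: an IOMMU is present, so EAL selects IOVA mode 'VA' and attaches through IOMMU type 1. As a rough host-side sanity check of the same state (standard Linux sysfs/devfs paths, not commands taken from this job), one might run:

  # A non-empty listing means the kernel is grouping devices behind an IOMMU
  ls /sys/kernel/iommu_groups
  # The type-1 VFIO container device must exist for 'VFIO support initialized'
  ls -l /dev/vfio/vfio
  # Load the vfio-pci driver if it is not already loaded
  sudo modprobe vfio-pci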
00:04:08.098 EAL: Setting maximum number of open files to 524288 00:04:08.098 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:08.098 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:08.098 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:08.098 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.098 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:08.098 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.098 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.098 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:08.098 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:08.098 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.098 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:08.098 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.098 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.098 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:08.098 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:08.098 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.098 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:08.098 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.098 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.098 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:08.098 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:08.098 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.098 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:08.098 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.098 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.098 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:08.098 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:08.098 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:08.098 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.098 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:08.098 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:08.098 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.098 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:08.098 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:08.098 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.098 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:08.098 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:08.098 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.098 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:08.098 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:08.098 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.098 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:08.098 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:08.098 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.098 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:08.098 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:08.098 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.098 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:08.098 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:08.098 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.098 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:08.098 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:08.098 EAL: Hugepages will be freed exactly as allocated. 00:04:08.098 EAL: No shared files mode enabled, IPC is disabled 00:04:08.098 EAL: No shared files mode enabled, IPC is disabled 00:04:08.098 EAL: TSC frequency is ~2100000 KHz 00:04:08.098 EAL: Main lcore 0 is ready (tid=7f2318f29b40;cpuset=[0]) 00:04:08.098 EAL: Trying to obtain current memory policy. 00:04:08.098 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.098 EAL: Restoring previous memory policy: 0 00:04:08.098 EAL: request: mp_malloc_sync 00:04:08.098 EAL: No shared files mode enabled, IPC is disabled 00:04:08.098 EAL: Heap on socket 0 was expanded by 2MB 00:04:08.098 EAL: No shared files mode enabled, IPC is disabled 00:04:08.098 EAL: No shared files mode enabled, IPC is disabled 00:04:08.098 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:08.098 EAL: Mem event callback 'spdk:(nil)' registered 00:04:08.098 00:04:08.098 00:04:08.098 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.098 http://cunit.sourceforge.net/ 00:04:08.098 00:04:08.098 00:04:08.098 Suite: components_suite 00:04:08.356 Test: vtophys_malloc_test ...passed 00:04:08.356 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:08.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.356 EAL: Restoring previous memory policy: 4 00:04:08.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.356 EAL: request: mp_malloc_sync 00:04:08.356 EAL: No shared files mode enabled, IPC is disabled 00:04:08.356 EAL: Heap on socket 0 was expanded by 4MB 00:04:08.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.356 EAL: request: mp_malloc_sync 00:04:08.356 EAL: No shared files mode enabled, IPC is disabled 00:04:08.356 EAL: Heap on socket 0 was shrunk by 4MB 00:04:08.356 EAL: Trying to obtain current memory policy. 00:04:08.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.356 EAL: Restoring previous memory policy: 4 00:04:08.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.356 EAL: request: mp_malloc_sync 00:04:08.356 EAL: No shared files mode enabled, IPC is disabled 00:04:08.356 EAL: Heap on socket 0 was expanded by 6MB 00:04:08.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.356 EAL: request: mp_malloc_sync 00:04:08.356 EAL: No shared files mode enabled, IPC is disabled 00:04:08.356 EAL: Heap on socket 0 was shrunk by 6MB 00:04:08.356 EAL: Trying to obtain current memory policy. 00:04:08.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.356 EAL: Restoring previous memory policy: 4 00:04:08.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.356 EAL: request: mp_malloc_sync 00:04:08.356 EAL: No shared files mode enabled, IPC is disabled 00:04:08.356 EAL: Heap on socket 0 was expanded by 10MB 00:04:08.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.356 EAL: request: mp_malloc_sync 00:04:08.356 EAL: No shared files mode enabled, IPC is disabled 00:04:08.356 EAL: Heap on socket 0 was shrunk by 10MB 00:04:08.628 EAL: Trying to obtain current memory policy. 
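Each 'expanded by'/'shrunk by' pair above is one round of vtophys_spdk_malloc_test: the test allocates a buffer roughly double the previous size, the 'spdk:(nil)' mem event callback maps additional hugepages into the socket-0 heap, and the matching free returns them. A minimal sketch of rerunning just this unit test from an SPDK checkout, assuming hugepages still need to be reserved (the 2048 MB figure below is illustrative, not taken from this job), would be:

  # Reserve hugepages and set up devices; setup.sh ships in the SPDK tree
  sudo HUGEMEM=2048 ./scripts/setup.sh
  # Run the vtophys unit test binary directly
  sudo ./test/env/vtophys/vtophys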
00:04:08.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.628 EAL: Restoring previous memory policy: 4 00:04:08.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.628 EAL: request: mp_malloc_sync 00:04:08.628 EAL: No shared files mode enabled, IPC is disabled 00:04:08.628 EAL: Heap on socket 0 was expanded by 18MB 00:04:08.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.628 EAL: request: mp_malloc_sync 00:04:08.628 EAL: No shared files mode enabled, IPC is disabled 00:04:08.628 EAL: Heap on socket 0 was shrunk by 18MB 00:04:08.628 EAL: Trying to obtain current memory policy. 00:04:08.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.628 EAL: Restoring previous memory policy: 4 00:04:08.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.628 EAL: request: mp_malloc_sync 00:04:08.628 EAL: No shared files mode enabled, IPC is disabled 00:04:08.628 EAL: Heap on socket 0 was expanded by 34MB 00:04:08.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.628 EAL: request: mp_malloc_sync 00:04:08.628 EAL: No shared files mode enabled, IPC is disabled 00:04:08.628 EAL: Heap on socket 0 was shrunk by 34MB 00:04:08.628 EAL: Trying to obtain current memory policy. 00:04:08.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.628 EAL: Restoring previous memory policy: 4 00:04:08.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.628 EAL: request: mp_malloc_sync 00:04:08.628 EAL: No shared files mode enabled, IPC is disabled 00:04:08.628 EAL: Heap on socket 0 was expanded by 66MB 00:04:08.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.890 EAL: request: mp_malloc_sync 00:04:08.890 EAL: No shared files mode enabled, IPC is disabled 00:04:08.890 EAL: Heap on socket 0 was shrunk by 66MB 00:04:08.890 EAL: Trying to obtain current memory policy. 00:04:08.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.890 EAL: Restoring previous memory policy: 4 00:04:08.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.890 EAL: request: mp_malloc_sync 00:04:08.890 EAL: No shared files mode enabled, IPC is disabled 00:04:08.890 EAL: Heap on socket 0 was expanded by 130MB 00:04:09.148 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.148 EAL: request: mp_malloc_sync 00:04:09.149 EAL: No shared files mode enabled, IPC is disabled 00:04:09.149 EAL: Heap on socket 0 was shrunk by 130MB 00:04:09.406 EAL: Trying to obtain current memory policy. 00:04:09.406 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.406 EAL: Restoring previous memory policy: 4 00:04:09.406 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.406 EAL: request: mp_malloc_sync 00:04:09.406 EAL: No shared files mode enabled, IPC is disabled 00:04:09.406 EAL: Heap on socket 0 was expanded by 258MB 00:04:09.971 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.971 EAL: request: mp_malloc_sync 00:04:09.971 EAL: No shared files mode enabled, IPC is disabled 00:04:09.971 EAL: Heap on socket 0 was shrunk by 258MB 00:04:10.229 EAL: Trying to obtain current memory policy. 
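Every allocation round first sets MPOL_PREFERRED for socket 0, which lines up with the two NUMA nodes detected at startup: memory is preferred from node 0 because the single reactor is pinned to core 0 there. On a dual-socket host like this one, the node layout and the inherited policy can be inspected with standard numactl tooling (not part of this test run):

  # Show NUMA nodes, their CPUs, and per-node free memory
  numactl --hardware
  # Show the memory policy the current process would inherit
  numactl --show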
00:04:10.229 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.487 EAL: Restoring previous memory policy: 4 00:04:10.487 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.487 EAL: request: mp_malloc_sync 00:04:10.487 EAL: No shared files mode enabled, IPC is disabled 00:04:10.487 EAL: Heap on socket 0 was expanded by 514MB 00:04:11.421 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.421 EAL: request: mp_malloc_sync 00:04:11.421 EAL: No shared files mode enabled, IPC is disabled 00:04:11.421 EAL: Heap on socket 0 was shrunk by 514MB 00:04:12.371 EAL: Trying to obtain current memory policy. 00:04:12.371 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.371 EAL: Restoring previous memory policy: 4 00:04:12.371 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.371 EAL: request: mp_malloc_sync 00:04:12.371 EAL: No shared files mode enabled, IPC is disabled 00:04:12.371 EAL: Heap on socket 0 was expanded by 1026MB 00:04:14.270 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.270 EAL: request: mp_malloc_sync 00:04:14.270 EAL: No shared files mode enabled, IPC is disabled 00:04:14.270 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:16.248 passed 00:04:16.248 00:04:16.248 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.248 suites 1 1 n/a 0 0 00:04:16.248 tests 2 2 2 0 0 00:04:16.248 asserts 497 497 497 0 n/a 00:04:16.248 00:04:16.248 Elapsed time = 7.741 seconds 00:04:16.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.248 EAL: request: mp_malloc_sync 00:04:16.248 EAL: No shared files mode enabled, IPC is disabled 00:04:16.248 EAL: Heap on socket 0 was shrunk by 2MB 00:04:16.248 EAL: No shared files mode enabled, IPC is disabled 00:04:16.248 EAL: No shared files mode enabled, IPC is disabled 00:04:16.248 EAL: No shared files mode enabled, IPC is disabled 00:04:16.248 00:04:16.248 real 0m7.967s 00:04:16.248 user 0m7.173s 00:04:16.248 sys 0m0.749s 00:04:16.248 00:11:46 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.248 00:11:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:16.248 ************************************ 00:04:16.248 END TEST env_vtophys 00:04:16.248 ************************************ 00:04:16.249 00:11:46 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/pci/pci_ut 00:04:16.249 00:11:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:16.249 00:11:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.249 00:11:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.249 ************************************ 00:04:16.249 START TEST env_pci 00:04:16.249 ************************************ 00:04:16.249 00:11:46 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/pci/pci_ut 00:04:16.249 00:04:16.249 00:04:16.249 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.249 http://cunit.sourceforge.net/ 00:04:16.249 00:04:16.249 00:04:16.249 Suite: pci 00:04:16.249 Test: pci_hook ...[2024-10-09 00:11:46.577790] /var/jenkins/workspace/vfio-user-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1995799 has claimed it 00:04:16.249 EAL: Cannot find device (10000:00:01.0) 00:04:16.249 EAL: Failed to attach device on primary process 00:04:16.249 passed 00:04:16.249 00:04:16.249 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:16.249 suites 1 1 n/a 0 0 00:04:16.249 tests 1 1 1 0 0 00:04:16.249 asserts 25 25 25 0 n/a 00:04:16.249 00:04:16.249 Elapsed time = 0.044 seconds 00:04:16.249 00:04:16.249 real 0m0.128s 00:04:16.249 user 0m0.056s 00:04:16.249 sys 0m0.072s 00:04:16.249 00:11:46 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:16.249 00:11:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:16.249 ************************************ 00:04:16.249 END TEST env_pci 00:04:16.249 ************************************ 00:04:16.249 00:11:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:16.249 00:11:46 env -- env/env.sh@15 -- # uname 00:04:16.249 00:11:46 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:16.249 00:11:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:16.249 00:11:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:16.249 00:11:46 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:16.249 00:11:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:16.249 00:11:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:16.249 ************************************ 00:04:16.249 START TEST env_dpdk_post_init 00:04:16.249 ************************************ 00:04:16.249 00:11:46 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:16.249 EAL: Detected CPU lcores: 96 00:04:16.249 EAL: Detected NUMA nodes: 2 00:04:16.249 EAL: Detected shared linkage of DPDK 00:04:16.249 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:16.249 EAL: Selected IOVA mode 'VA' 00:04:16.249 EAL: VFIO support initialized 00:04:16.249 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:16.506 EAL: Using IOMMU type 1 (Type 1) 00:04:16.506 EAL: Ignore mapping IO port bar(1) 00:04:16.506 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:16.506 EAL: Ignore mapping IO port bar(1) 00:04:16.506 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:16.506 EAL: Ignore mapping IO port bar(1) 00:04:16.506 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:16.506 EAL: Ignore mapping IO port bar(1) 00:04:16.506 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:16.506 EAL: Ignore mapping IO port bar(1) 00:04:16.506 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:16.506 EAL: Ignore mapping IO port bar(1) 00:04:16.506 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:16.506 EAL: Ignore mapping IO port bar(1) 00:04:16.506 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:16.506 EAL: Ignore mapping IO port bar(1) 00:04:16.506 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:17.440 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:17.440 EAL: Ignore mapping IO port bar(1) 00:04:17.441 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:17.441 EAL: Ignore mapping IO port bar(1) 00:04:17.441 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:17.441 EAL: Ignore mapping IO port bar(1) 00:04:17.441 
EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:17.441 EAL: Ignore mapping IO port bar(1) 00:04:17.441 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:17.441 EAL: Ignore mapping IO port bar(1) 00:04:17.441 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:17.441 EAL: Ignore mapping IO port bar(1) 00:04:17.441 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:17.441 EAL: Ignore mapping IO port bar(1) 00:04:17.441 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:17.441 EAL: Ignore mapping IO port bar(1) 00:04:17.441 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:20.721 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:20.721 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:20.721 Starting DPDK initialization... 00:04:20.721 Starting SPDK post initialization... 00:04:20.721 SPDK NVMe probe 00:04:20.721 Attaching to 0000:5e:00.0 00:04:20.721 Attached to 0000:5e:00.0 00:04:20.721 Cleaning up... 00:04:20.721 00:04:20.721 real 0m4.484s 00:04:20.721 user 0m3.077s 00:04:20.721 sys 0m0.479s 00:04:20.721 00:11:51 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.722 00:11:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.722 ************************************ 00:04:20.722 END TEST env_dpdk_post_init 00:04:20.722 ************************************ 00:04:20.722 00:11:51 env -- env/env.sh@26 -- # uname 00:04:20.722 00:11:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:20.722 00:11:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:20.722 00:11:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.722 00:11:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.722 00:11:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.722 ************************************ 00:04:20.722 START TEST env_mem_callbacks 00:04:20.722 ************************************ 00:04:20.722 00:11:51 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:20.722 EAL: Detected CPU lcores: 96 00:04:20.722 EAL: Detected NUMA nodes: 2 00:04:20.722 EAL: Detected shared linkage of DPDK 00:04:20.979 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:20.979 EAL: Selected IOVA mode 'VA' 00:04:20.979 EAL: VFIO support initialized 00:04:20.979 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:20.979 00:04:20.979 00:04:20.979 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.979 http://cunit.sourceforge.net/ 00:04:20.979 00:04:20.979 00:04:20.979 Suite: memory 00:04:20.979 Test: test ... 
00:04:20.979 register 0x200000200000 2097152 00:04:20.979 malloc 3145728 00:04:20.979 register 0x200000400000 4194304 00:04:20.979 buf 0x2000004fffc0 len 3145728 PASSED 00:04:20.979 malloc 64 00:04:20.979 buf 0x2000004ffec0 len 64 PASSED 00:04:20.979 malloc 4194304 00:04:20.979 register 0x200000800000 6291456 00:04:20.979 buf 0x2000009fffc0 len 4194304 PASSED 00:04:20.979 free 0x2000004fffc0 3145728 00:04:20.979 free 0x2000004ffec0 64 00:04:20.979 unregister 0x200000400000 4194304 PASSED 00:04:20.979 free 0x2000009fffc0 4194304 00:04:20.979 unregister 0x200000800000 6291456 PASSED 00:04:20.979 malloc 8388608 00:04:20.979 register 0x200000400000 10485760 00:04:20.979 buf 0x2000005fffc0 len 8388608 PASSED 00:04:20.979 free 0x2000005fffc0 8388608 00:04:20.979 unregister 0x200000400000 10485760 PASSED 00:04:20.979 passed 00:04:20.979 00:04:20.979 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.979 suites 1 1 n/a 0 0 00:04:20.979 tests 1 1 1 0 0 00:04:20.979 asserts 15 15 15 0 n/a 00:04:20.979 00:04:20.979 Elapsed time = 0.064 seconds 00:04:20.979 00:04:20.979 real 0m0.176s 00:04:20.979 user 0m0.100s 00:04:20.979 sys 0m0.075s 00:04:20.979 00:11:51 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.979 00:11:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:20.979 ************************************ 00:04:20.979 END TEST env_mem_callbacks 00:04:20.979 ************************************ 00:04:20.979 00:04:20.979 real 0m13.556s 00:04:20.979 user 0m10.888s 00:04:20.979 sys 0m1.727s 00:04:20.979 00:11:51 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.979 00:11:51 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.979 ************************************ 00:04:20.979 END TEST env 00:04:20.979 ************************************ 00:04:20.979 00:11:51 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/rpc.sh 00:04:20.979 00:11:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.979 00:11:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.979 00:11:51 -- common/autotest_common.sh@10 -- # set +x 00:04:20.979 ************************************ 00:04:20.979 START TEST rpc 00:04:20.979 ************************************ 00:04:20.979 00:11:51 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/rpc.sh 00:04:21.237 * Looking for test storage... 
00:04:21.237 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc 00:04:21.237 00:11:51 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:21.237 00:11:51 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:21.237 00:11:51 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:21.237 00:11:51 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:21.237 00:11:51 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.237 00:11:51 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.237 00:11:51 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.237 00:11:51 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.237 00:11:51 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.237 00:11:51 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.237 00:11:51 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.237 00:11:51 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.237 00:11:51 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.237 00:11:51 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.237 00:11:51 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.237 00:11:51 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.237 00:11:51 rpc -- scripts/common.sh@345 -- # : 1 00:04:21.237 00:11:51 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.237 00:11:51 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.237 00:11:51 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.237 00:11:51 rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.237 00:11:51 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.237 00:11:51 rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.237 00:11:51 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.237 00:11:51 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:21.238 00:11:51 rpc -- scripts/common.sh@353 -- # local d=2 00:04:21.238 00:11:51 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.238 00:11:51 rpc -- scripts/common.sh@355 -- # echo 2 00:04:21.238 00:11:51 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.238 00:11:51 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.238 00:11:51 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.238 00:11:51 rpc -- scripts/common.sh@368 -- # return 0 00:04:21.238 00:11:51 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.238 00:11:51 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:21.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.238 --rc genhtml_branch_coverage=1 00:04:21.238 --rc genhtml_function_coverage=1 00:04:21.238 --rc genhtml_legend=1 00:04:21.238 --rc geninfo_all_blocks=1 00:04:21.238 --rc geninfo_unexecuted_blocks=1 00:04:21.238 00:04:21.238 ' 00:04:21.238 00:11:51 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:21.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.238 --rc genhtml_branch_coverage=1 00:04:21.238 --rc genhtml_function_coverage=1 00:04:21.238 --rc genhtml_legend=1 00:04:21.238 --rc geninfo_all_blocks=1 00:04:21.238 --rc geninfo_unexecuted_blocks=1 00:04:21.238 00:04:21.238 ' 00:04:21.238 00:11:51 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:21.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.238 --rc genhtml_branch_coverage=1 00:04:21.238 --rc genhtml_function_coverage=1 
00:04:21.238 --rc genhtml_legend=1 00:04:21.238 --rc geninfo_all_blocks=1 00:04:21.238 --rc geninfo_unexecuted_blocks=1 00:04:21.238 00:04:21.238 ' 00:04:21.238 00:11:51 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:21.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.238 --rc genhtml_branch_coverage=1 00:04:21.238 --rc genhtml_function_coverage=1 00:04:21.238 --rc genhtml_legend=1 00:04:21.238 --rc geninfo_all_blocks=1 00:04:21.238 --rc geninfo_unexecuted_blocks=1 00:04:21.238 00:04:21.238 ' 00:04:21.238 00:11:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1996829 00:04:21.238 00:11:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.238 00:11:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1996829 00:04:21.238 00:11:51 rpc -- common/autotest_common.sh@831 -- # '[' -z 1996829 ']' 00:04:21.238 00:11:51 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.238 00:11:51 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:21.238 00:11:51 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:21.238 00:11:51 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.238 00:11:51 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:21.238 00:11:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.238 [2024-10-09 00:11:51.842010] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:21.238 [2024-10-09 00:11:51.842115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996829 ] 00:04:21.496 [2024-10-09 00:11:51.943565] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.754 [2024-10-09 00:11:52.136930] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:21.754 [2024-10-09 00:11:52.136976] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1996829' to capture a snapshot of events at runtime. 00:04:21.754 [2024-10-09 00:11:52.136988] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:21.754 [2024-10-09 00:11:52.136997] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:21.754 [2024-10-09 00:11:52.137006] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1996829 for offline analysis/debug. 
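Because this spdk_tgt was launched with '-e bdev', the bdev tracepoint group is enabled and the notice above names both the live capture command and the shared-memory trace file. A sketch of acting on that notice while the pid 1996829 target is still running (the /tmp destination is an arbitrary choice):

  # Snapshot the runtime tracepoints, exactly as the notice suggests
  ./build/bin/spdk_trace -s spdk_tgt -p 1996829
  # Or keep the raw trace buffer for offline analysis/debug
  cp /dev/shm/spdk_tgt_trace.pid1996829 /tmp/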
00:04:21.754 [2024-10-09 00:11:52.138529] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.688 00:11:52 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:22.688 00:11:52 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:22.688 00:11:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc 00:04:22.688 00:11:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc 00:04:22.688 00:11:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:22.688 00:11:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:22.688 00:11:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.688 00:11:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.688 00:11:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.688 ************************************ 00:04:22.688 START TEST rpc_integrity 00:04:22.688 ************************************ 00:04:22.688 00:11:52 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:22.688 00:11:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:22.688 00:11:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.688 00:11:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:22.688 { 00:04:22.688 "name": "Malloc0", 00:04:22.688 "aliases": [ 00:04:22.688 "9149d875-17ff-409e-8976-65d63e5c413b" 00:04:22.688 ], 00:04:22.688 "product_name": "Malloc disk", 00:04:22.688 "block_size": 512, 00:04:22.688 "num_blocks": 16384, 00:04:22.688 "uuid": "9149d875-17ff-409e-8976-65d63e5c413b", 00:04:22.688 "assigned_rate_limits": { 00:04:22.688 "rw_ios_per_sec": 0, 00:04:22.688 "rw_mbytes_per_sec": 0, 00:04:22.688 "r_mbytes_per_sec": 0, 00:04:22.688 "w_mbytes_per_sec": 0 
00:04:22.688 }, 00:04:22.688 "claimed": false, 00:04:22.688 "zoned": false, 00:04:22.688 "supported_io_types": { 00:04:22.688 "read": true, 00:04:22.688 "write": true, 00:04:22.688 "unmap": true, 00:04:22.688 "flush": true, 00:04:22.688 "reset": true, 00:04:22.688 "nvme_admin": false, 00:04:22.688 "nvme_io": false, 00:04:22.688 "nvme_io_md": false, 00:04:22.688 "write_zeroes": true, 00:04:22.688 "zcopy": true, 00:04:22.688 "get_zone_info": false, 00:04:22.688 "zone_management": false, 00:04:22.688 "zone_append": false, 00:04:22.688 "compare": false, 00:04:22.688 "compare_and_write": false, 00:04:22.688 "abort": true, 00:04:22.688 "seek_hole": false, 00:04:22.688 "seek_data": false, 00:04:22.688 "copy": true, 00:04:22.688 "nvme_iov_md": false 00:04:22.688 }, 00:04:22.688 "memory_domains": [ 00:04:22.688 { 00:04:22.688 "dma_device_id": "system", 00:04:22.688 "dma_device_type": 1 00:04:22.688 }, 00:04:22.688 { 00:04:22.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.688 "dma_device_type": 2 00:04:22.688 } 00:04:22.688 ], 00:04:22.688 "driver_specific": {} 00:04:22.688 } 00:04:22.688 ]' 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.688 [2024-10-09 00:11:53.132066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:22.688 [2024-10-09 00:11:53.132109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:22.688 [2024-10-09 00:11:53.132132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021f80 00:04:22.688 [2024-10-09 00:11:53.132143] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:22.688 [2024-10-09 00:11:53.134099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:22.688 [2024-10-09 00:11:53.134126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:22.688 Passthru0 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:22.688 { 00:04:22.688 "name": "Malloc0", 00:04:22.688 "aliases": [ 00:04:22.688 "9149d875-17ff-409e-8976-65d63e5c413b" 00:04:22.688 ], 00:04:22.688 "product_name": "Malloc disk", 00:04:22.688 "block_size": 512, 00:04:22.688 "num_blocks": 16384, 00:04:22.688 "uuid": "9149d875-17ff-409e-8976-65d63e5c413b", 00:04:22.688 "assigned_rate_limits": { 00:04:22.688 "rw_ios_per_sec": 0, 00:04:22.688 "rw_mbytes_per_sec": 0, 00:04:22.688 "r_mbytes_per_sec": 0, 00:04:22.688 "w_mbytes_per_sec": 0 00:04:22.688 }, 00:04:22.688 "claimed": true, 00:04:22.688 "claim_type": "exclusive_write", 00:04:22.688 "zoned": false, 00:04:22.688 "supported_io_types": { 00:04:22.688 "read": true, 00:04:22.688 "write": true, 00:04:22.688 "unmap": true, 
00:04:22.688 "flush": true, 00:04:22.688 "reset": true, 00:04:22.688 "nvme_admin": false, 00:04:22.688 "nvme_io": false, 00:04:22.688 "nvme_io_md": false, 00:04:22.688 "write_zeroes": true, 00:04:22.688 "zcopy": true, 00:04:22.688 "get_zone_info": false, 00:04:22.688 "zone_management": false, 00:04:22.688 "zone_append": false, 00:04:22.688 "compare": false, 00:04:22.688 "compare_and_write": false, 00:04:22.688 "abort": true, 00:04:22.688 "seek_hole": false, 00:04:22.688 "seek_data": false, 00:04:22.688 "copy": true, 00:04:22.688 "nvme_iov_md": false 00:04:22.688 }, 00:04:22.688 "memory_domains": [ 00:04:22.688 { 00:04:22.688 "dma_device_id": "system", 00:04:22.688 "dma_device_type": 1 00:04:22.688 }, 00:04:22.688 { 00:04:22.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.688 "dma_device_type": 2 00:04:22.688 } 00:04:22.688 ], 00:04:22.688 "driver_specific": {} 00:04:22.688 }, 00:04:22.688 { 00:04:22.688 "name": "Passthru0", 00:04:22.688 "aliases": [ 00:04:22.688 "aedba027-99f2-5f57-9971-c9553ed49e53" 00:04:22.688 ], 00:04:22.688 "product_name": "passthru", 00:04:22.688 "block_size": 512, 00:04:22.688 "num_blocks": 16384, 00:04:22.688 "uuid": "aedba027-99f2-5f57-9971-c9553ed49e53", 00:04:22.688 "assigned_rate_limits": { 00:04:22.688 "rw_ios_per_sec": 0, 00:04:22.688 "rw_mbytes_per_sec": 0, 00:04:22.688 "r_mbytes_per_sec": 0, 00:04:22.688 "w_mbytes_per_sec": 0 00:04:22.688 }, 00:04:22.688 "claimed": false, 00:04:22.688 "zoned": false, 00:04:22.688 "supported_io_types": { 00:04:22.688 "read": true, 00:04:22.688 "write": true, 00:04:22.688 "unmap": true, 00:04:22.688 "flush": true, 00:04:22.688 "reset": true, 00:04:22.688 "nvme_admin": false, 00:04:22.688 "nvme_io": false, 00:04:22.688 "nvme_io_md": false, 00:04:22.688 "write_zeroes": true, 00:04:22.688 "zcopy": true, 00:04:22.688 "get_zone_info": false, 00:04:22.688 "zone_management": false, 00:04:22.688 "zone_append": false, 00:04:22.688 "compare": false, 00:04:22.688 "compare_and_write": false, 00:04:22.688 "abort": true, 00:04:22.688 "seek_hole": false, 00:04:22.688 "seek_data": false, 00:04:22.688 "copy": true, 00:04:22.688 "nvme_iov_md": false 00:04:22.688 }, 00:04:22.688 "memory_domains": [ 00:04:22.688 { 00:04:22.688 "dma_device_id": "system", 00:04:22.688 "dma_device_type": 1 00:04:22.688 }, 00:04:22.688 { 00:04:22.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.688 "dma_device_type": 2 00:04:22.688 } 00:04:22.688 ], 00:04:22.688 "driver_specific": { 00:04:22.688 "passthru": { 00:04:22.688 "name": "Passthru0", 00:04:22.688 "base_bdev_name": "Malloc0" 00:04:22.688 } 00:04:22.688 } 00:04:22.688 } 00:04:22.688 ]' 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.688 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:22.688 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.689 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.689 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.689 00:11:53 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:22.689 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.689 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.689 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.689 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:22.689 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:22.689 00:11:53 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:22.689 00:04:22.689 real 0m0.312s 00:04:22.689 user 0m0.179s 00:04:22.689 sys 0m0.034s 00:04:22.689 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.689 00:11:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.689 ************************************ 00:04:22.689 END TEST rpc_integrity 00:04:22.689 ************************************ 00:04:22.946 00:11:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:22.946 00:11:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.946 00:11:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.946 00:11:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.946 ************************************ 00:04:22.946 START TEST rpc_plugins 00:04:22.946 ************************************ 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:22.946 00:11:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.946 00:11:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:22.946 00:11:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.946 00:11:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:22.946 { 00:04:22.946 "name": "Malloc1", 00:04:22.946 "aliases": [ 00:04:22.946 "9d5ff821-939a-40ac-ba5e-224cdff7859f" 00:04:22.946 ], 00:04:22.946 "product_name": "Malloc disk", 00:04:22.946 "block_size": 4096, 00:04:22.946 "num_blocks": 256, 00:04:22.946 "uuid": "9d5ff821-939a-40ac-ba5e-224cdff7859f", 00:04:22.946 "assigned_rate_limits": { 00:04:22.946 "rw_ios_per_sec": 0, 00:04:22.946 "rw_mbytes_per_sec": 0, 00:04:22.946 "r_mbytes_per_sec": 0, 00:04:22.946 "w_mbytes_per_sec": 0 00:04:22.946 }, 00:04:22.946 "claimed": false, 00:04:22.946 "zoned": false, 00:04:22.946 "supported_io_types": { 00:04:22.946 "read": true, 00:04:22.946 "write": true, 00:04:22.946 "unmap": true, 00:04:22.946 "flush": true, 00:04:22.946 "reset": true, 00:04:22.946 "nvme_admin": false, 00:04:22.946 "nvme_io": false, 00:04:22.946 "nvme_io_md": false, 00:04:22.946 "write_zeroes": true, 00:04:22.946 "zcopy": true, 00:04:22.946 "get_zone_info": false, 00:04:22.946 "zone_management": false, 00:04:22.946 "zone_append": false, 00:04:22.946 "compare": false, 00:04:22.946 "compare_and_write": false, 00:04:22.946 "abort": true, 00:04:22.946 "seek_hole": false, 00:04:22.946 "seek_data": false, 00:04:22.946 "copy": true, 00:04:22.946 
"nvme_iov_md": false 00:04:22.946 }, 00:04:22.946 "memory_domains": [ 00:04:22.946 { 00:04:22.946 "dma_device_id": "system", 00:04:22.946 "dma_device_type": 1 00:04:22.946 }, 00:04:22.946 { 00:04:22.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.946 "dma_device_type": 2 00:04:22.946 } 00:04:22.946 ], 00:04:22.946 "driver_specific": {} 00:04:22.946 } 00:04:22.946 ]' 00:04:22.946 00:11:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:22.946 00:11:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:22.946 00:11:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.946 00:11:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.946 00:11:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:22.946 00:11:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:22.946 00:11:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:22.946 00:04:22.946 real 0m0.140s 00:04:22.946 user 0m0.086s 00:04:22.946 sys 0m0.018s 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.946 00:11:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.946 ************************************ 00:04:22.946 END TEST rpc_plugins 00:04:22.946 ************************************ 00:04:22.946 00:11:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:22.946 00:11:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.946 00:11:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.946 00:11:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.946 ************************************ 00:04:22.946 START TEST rpc_trace_cmd_test 00:04:22.946 ************************************ 00:04:22.946 00:11:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:22.946 00:11:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:22.946 00:11:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:22.946 00:11:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.946 00:11:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:23.215 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1996829", 00:04:23.215 "tpoint_group_mask": "0x8", 00:04:23.215 "iscsi_conn": { 00:04:23.215 "mask": "0x2", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "scsi": { 00:04:23.215 "mask": "0x4", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "bdev": { 00:04:23.215 "mask": "0x8", 00:04:23.215 "tpoint_mask": "0xffffffffffffffff" 00:04:23.215 }, 00:04:23.215 "nvmf_rdma": { 00:04:23.215 "mask": "0x10", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "nvmf_tcp": { 00:04:23.215 "mask": "0x20", 
00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "ftl": { 00:04:23.215 "mask": "0x40", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "blobfs": { 00:04:23.215 "mask": "0x80", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "dsa": { 00:04:23.215 "mask": "0x200", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "thread": { 00:04:23.215 "mask": "0x400", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "nvme_pcie": { 00:04:23.215 "mask": "0x800", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "iaa": { 00:04:23.215 "mask": "0x1000", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "nvme_tcp": { 00:04:23.215 "mask": "0x2000", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "bdev_nvme": { 00:04:23.215 "mask": "0x4000", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "sock": { 00:04:23.215 "mask": "0x8000", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "blob": { 00:04:23.215 "mask": "0x10000", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "bdev_raid": { 00:04:23.215 "mask": "0x20000", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 }, 00:04:23.215 "scheduler": { 00:04:23.215 "mask": "0x40000", 00:04:23.215 "tpoint_mask": "0x0" 00:04:23.215 } 00:04:23.215 }' 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:23.215 00:04:23.215 real 0m0.203s 00:04:23.215 user 0m0.164s 00:04:23.215 sys 0m0.028s 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.215 00:11:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.215 ************************************ 00:04:23.215 END TEST rpc_trace_cmd_test 00:04:23.215 ************************************ 00:04:23.215 00:11:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:23.215 00:11:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:23.215 00:11:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:23.215 00:11:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.215 00:11:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.215 00:11:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.215 ************************************ 00:04:23.215 START TEST rpc_daemon_integrity 00:04:23.215 ************************************ 00:04:23.215 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:23.215 00:11:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.215 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.215 00:11:53 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.476 { 00:04:23.476 "name": "Malloc2", 00:04:23.476 "aliases": [ 00:04:23.476 "47b604d1-5d0f-4451-8d59-778980809b73" 00:04:23.476 ], 00:04:23.476 "product_name": "Malloc disk", 00:04:23.476 "block_size": 512, 00:04:23.476 "num_blocks": 16384, 00:04:23.476 "uuid": "47b604d1-5d0f-4451-8d59-778980809b73", 00:04:23.476 "assigned_rate_limits": { 00:04:23.476 "rw_ios_per_sec": 0, 00:04:23.476 "rw_mbytes_per_sec": 0, 00:04:23.476 "r_mbytes_per_sec": 0, 00:04:23.476 "w_mbytes_per_sec": 0 00:04:23.476 }, 00:04:23.476 "claimed": false, 00:04:23.476 "zoned": false, 00:04:23.476 "supported_io_types": { 00:04:23.476 "read": true, 00:04:23.476 "write": true, 00:04:23.476 "unmap": true, 00:04:23.476 "flush": true, 00:04:23.476 "reset": true, 00:04:23.476 "nvme_admin": false, 00:04:23.476 "nvme_io": false, 00:04:23.476 "nvme_io_md": false, 00:04:23.476 "write_zeroes": true, 00:04:23.476 "zcopy": true, 00:04:23.476 "get_zone_info": false, 00:04:23.476 "zone_management": false, 00:04:23.476 "zone_append": false, 00:04:23.476 "compare": false, 00:04:23.476 "compare_and_write": false, 00:04:23.476 "abort": true, 00:04:23.476 "seek_hole": false, 00:04:23.476 "seek_data": false, 00:04:23.476 "copy": true, 00:04:23.476 "nvme_iov_md": false 00:04:23.476 }, 00:04:23.476 "memory_domains": [ 00:04:23.476 { 00:04:23.476 "dma_device_id": "system", 00:04:23.476 "dma_device_type": 1 00:04:23.476 }, 00:04:23.476 { 00:04:23.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.476 "dma_device_type": 2 00:04:23.476 } 00:04:23.476 ], 00:04:23.476 "driver_specific": {} 00:04:23.476 } 00:04:23.476 ]' 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.476 [2024-10-09 00:11:53.981461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:23.476 
[2024-10-09 00:11:53.981498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.476 [2024-10-09 00:11:53.981517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023180 00:04:23.476 [2024-10-09 00:11:53.981527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.476 [2024-10-09 00:11:53.983464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.476 [2024-10-09 00:11:53.983489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.476 Passthru0 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.476 00:11:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.476 00:11:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.476 00:11:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.476 { 00:04:23.476 "name": "Malloc2", 00:04:23.476 "aliases": [ 00:04:23.476 "47b604d1-5d0f-4451-8d59-778980809b73" 00:04:23.476 ], 00:04:23.476 "product_name": "Malloc disk", 00:04:23.476 "block_size": 512, 00:04:23.476 "num_blocks": 16384, 00:04:23.476 "uuid": "47b604d1-5d0f-4451-8d59-778980809b73", 00:04:23.476 "assigned_rate_limits": { 00:04:23.476 "rw_ios_per_sec": 0, 00:04:23.476 "rw_mbytes_per_sec": 0, 00:04:23.476 "r_mbytes_per_sec": 0, 00:04:23.476 "w_mbytes_per_sec": 0 00:04:23.476 }, 00:04:23.476 "claimed": true, 00:04:23.476 "claim_type": "exclusive_write", 00:04:23.476 "zoned": false, 00:04:23.476 "supported_io_types": { 00:04:23.476 "read": true, 00:04:23.476 "write": true, 00:04:23.476 "unmap": true, 00:04:23.476 "flush": true, 00:04:23.476 "reset": true, 00:04:23.476 "nvme_admin": false, 00:04:23.477 "nvme_io": false, 00:04:23.477 "nvme_io_md": false, 00:04:23.477 "write_zeroes": true, 00:04:23.477 "zcopy": true, 00:04:23.477 "get_zone_info": false, 00:04:23.477 "zone_management": false, 00:04:23.477 "zone_append": false, 00:04:23.477 "compare": false, 00:04:23.477 "compare_and_write": false, 00:04:23.477 "abort": true, 00:04:23.477 "seek_hole": false, 00:04:23.477 "seek_data": false, 00:04:23.477 "copy": true, 00:04:23.477 "nvme_iov_md": false 00:04:23.477 }, 00:04:23.477 "memory_domains": [ 00:04:23.477 { 00:04:23.477 "dma_device_id": "system", 00:04:23.477 "dma_device_type": 1 00:04:23.477 }, 00:04:23.477 { 00:04:23.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.477 "dma_device_type": 2 00:04:23.477 } 00:04:23.477 ], 00:04:23.477 "driver_specific": {} 00:04:23.477 }, 00:04:23.477 { 00:04:23.477 "name": "Passthru0", 00:04:23.477 "aliases": [ 00:04:23.477 "ba9c3efb-7a24-5c3a-afd9-568e8e6996be" 00:04:23.477 ], 00:04:23.477 "product_name": "passthru", 00:04:23.477 "block_size": 512, 00:04:23.477 "num_blocks": 16384, 00:04:23.477 "uuid": "ba9c3efb-7a24-5c3a-afd9-568e8e6996be", 00:04:23.477 "assigned_rate_limits": { 00:04:23.477 "rw_ios_per_sec": 0, 00:04:23.477 "rw_mbytes_per_sec": 0, 00:04:23.477 "r_mbytes_per_sec": 0, 00:04:23.477 "w_mbytes_per_sec": 0 00:04:23.477 }, 00:04:23.477 "claimed": false, 00:04:23.477 "zoned": false, 00:04:23.477 "supported_io_types": { 00:04:23.477 "read": true, 00:04:23.477 "write": true, 00:04:23.477 "unmap": true, 00:04:23.477 "flush": true, 00:04:23.477 "reset": true, 
00:04:23.477 "nvme_admin": false, 00:04:23.477 "nvme_io": false, 00:04:23.477 "nvme_io_md": false, 00:04:23.477 "write_zeroes": true, 00:04:23.477 "zcopy": true, 00:04:23.477 "get_zone_info": false, 00:04:23.477 "zone_management": false, 00:04:23.477 "zone_append": false, 00:04:23.477 "compare": false, 00:04:23.477 "compare_and_write": false, 00:04:23.477 "abort": true, 00:04:23.477 "seek_hole": false, 00:04:23.477 "seek_data": false, 00:04:23.477 "copy": true, 00:04:23.477 "nvme_iov_md": false 00:04:23.477 }, 00:04:23.477 "memory_domains": [ 00:04:23.477 { 00:04:23.477 "dma_device_id": "system", 00:04:23.477 "dma_device_type": 1 00:04:23.477 }, 00:04:23.477 { 00:04:23.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.477 "dma_device_type": 2 00:04:23.477 } 00:04:23.477 ], 00:04:23.477 "driver_specific": { 00:04:23.477 "passthru": { 00:04:23.477 "name": "Passthru0", 00:04:23.477 "base_bdev_name": "Malloc2" 00:04:23.477 } 00:04:23.477 } 00:04:23.477 } 00:04:23.477 ]' 00:04:23.477 00:11:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.477 00:11:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.477 00:11:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.477 00:11:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.477 00:11:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.477 00:11:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.477 00:11:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:23.477 00:11:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.477 00:11:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.477 00:11:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.477 00:11:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.477 00:11:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:23.477 00:11:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.735 00:11:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:23.735 00:11:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.735 00:11:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.735 00:11:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.735 00:04:23.735 real 0m0.308s 00:04:23.735 user 0m0.179s 00:04:23.735 sys 0m0.031s 00:04:23.735 00:11:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.735 00:11:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.735 ************************************ 00:04:23.735 END TEST rpc_daemon_integrity 00:04:23.735 ************************************ 00:04:23.735 00:11:54 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:23.735 00:11:54 rpc -- rpc/rpc.sh@84 -- # killprocess 1996829 00:04:23.735 00:11:54 rpc -- common/autotest_common.sh@950 -- # '[' -z 1996829 ']' 00:04:23.735 00:11:54 rpc -- common/autotest_common.sh@954 -- # kill -0 1996829 00:04:23.735 00:11:54 rpc -- common/autotest_common.sh@955 -- # uname 00:04:23.735 00:11:54 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:23.735 00:11:54 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1996829 
00:04:23.735 00:11:54 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:23.735 00:11:54 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:23.735 00:11:54 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1996829' 00:04:23.735 killing process with pid 1996829 00:04:23.735 00:11:54 rpc -- common/autotest_common.sh@969 -- # kill 1996829 00:04:23.735 00:11:54 rpc -- common/autotest_common.sh@974 -- # wait 1996829 00:04:26.265 00:04:26.265 real 0m5.071s 00:04:26.265 user 0m5.656s 00:04:26.265 sys 0m0.823s 00:04:26.265 00:11:56 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.265 00:11:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.265 ************************************ 00:04:26.265 END TEST rpc 00:04:26.265 ************************************ 00:04:26.265 00:11:56 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:26.265 00:11:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.265 00:11:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.265 00:11:56 -- common/autotest_common.sh@10 -- # set +x 00:04:26.265 ************************************ 00:04:26.265 START TEST skip_rpc 00:04:26.265 ************************************ 00:04:26.265 00:11:56 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:26.265 * Looking for test storage... 00:04:26.265 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc 00:04:26.265 00:11:56 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:26.265 00:11:56 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:26.265 00:11:56 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:26.265 00:11:56 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.265 00:11:56 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:26.265 00:11:56 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.265 00:11:56 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:26.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.265 --rc genhtml_branch_coverage=1 00:04:26.265 --rc genhtml_function_coverage=1 00:04:26.265 --rc genhtml_legend=1 00:04:26.265 --rc geninfo_all_blocks=1 00:04:26.265 --rc geninfo_unexecuted_blocks=1 00:04:26.265 00:04:26.265 ' 00:04:26.265 00:11:56 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:26.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.265 --rc genhtml_branch_coverage=1 00:04:26.265 --rc genhtml_function_coverage=1 00:04:26.265 --rc genhtml_legend=1 00:04:26.265 --rc geninfo_all_blocks=1 00:04:26.265 --rc geninfo_unexecuted_blocks=1 00:04:26.265 00:04:26.265 ' 00:04:26.265 00:11:56 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:26.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.265 --rc genhtml_branch_coverage=1 00:04:26.265 --rc genhtml_function_coverage=1 00:04:26.265 --rc genhtml_legend=1 00:04:26.265 --rc geninfo_all_blocks=1 00:04:26.265 --rc geninfo_unexecuted_blocks=1 00:04:26.265 00:04:26.265 ' 00:04:26.265 00:11:56 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:26.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.265 --rc genhtml_branch_coverage=1 00:04:26.265 --rc genhtml_function_coverage=1 00:04:26.265 --rc genhtml_legend=1 00:04:26.265 --rc geninfo_all_blocks=1 00:04:26.265 --rc geninfo_unexecuted_blocks=1 00:04:26.265 00:04:26.265 ' 00:04:26.265 00:11:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json 00:04:26.265 00:11:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt 00:04:26.265 00:11:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:26.265 00:11:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.265 00:11:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.265 00:11:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.524 ************************************ 00:04:26.524 START TEST skip_rpc 00:04:26.524 ************************************ 00:04:26.524 00:11:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:26.524 
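The skip_rpc test that begins here checks the --no-rpc-server mode: spdk_tgt is started without an RPC listener, and the harness then expects rpc_cmd spdk_get_version to fail. A minimal sketch of the same check (paths abbreviated, sleep interval assumed from the test above):

  # start the target with no RPC server, then verify that an RPC call fails
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5
  if ./scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC server answered" >&2
  fi
  kill $spdk_pid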
00:11:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1997902 00:04:26.524 00:11:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.524 00:11:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:26.524 00:11:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:26.524 [2024-10-09 00:11:57.011259] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:26.524 [2024-10-09 00:11:57.011335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1997902 ] 00:04:26.524 [2024-10-09 00:11:57.113761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.782 [2024-10-09 00:11:57.308904] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1997902 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1997902 ']' 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1997902 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1997902 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1997902' 00:04:32.047 killing process with pid 1997902 00:04:32.047 
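The NOT wrapper used above inverts the exit status of rpc_cmd, so the test passes only when the RPC call fails (es=1 after the call, then (( !es == 0 )) succeeds). A simplified sketch of that inversion, assuming a plain function rather than the full valid_exec_arg machinery in autotest_common.sh:

  # succeed only when the wrapped command fails (simplified NOT helper)
  NOT() {
      if "$@"; then
          return 1
      fi
      return 0
  }
  NOT ./scripts/rpc.py spdk_get_version && echo "RPC correctly unavailable"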
00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1997902 00:04:32.047 00:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1997902 00:04:33.949 00:04:33.949 real 0m7.529s 00:04:33.949 user 0m7.137s 00:04:33.949 sys 0m0.422s 00:04:33.949 00:12:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.949 00:12:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.949 ************************************ 00:04:33.949 END TEST skip_rpc 00:04:33.949 ************************************ 00:04:33.949 00:12:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:33.949 00:12:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.949 00:12:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.949 00:12:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.949 ************************************ 00:04:33.949 START TEST skip_rpc_with_json 00:04:33.949 ************************************ 00:04:33.949 00:12:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:33.949 00:12:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:33.949 00:12:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1999192 00:04:33.949 00:12:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.949 00:12:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.949 00:12:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1999192 00:04:33.949 00:12:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1999192 ']' 00:04:33.949 00:12:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.949 00:12:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:33.949 00:12:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.949 00:12:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:33.949 00:12:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.207 [2024-10-09 00:12:04.614863] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:04:34.207 [2024-10-09 00:12:04.614952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1999192 ] 00:04:34.207 [2024-10-09 00:12:04.719053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.465 [2024-10-09 00:12:04.910743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.408 [2024-10-09 00:12:05.714850] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:35.408 request: 00:04:35.408 { 00:04:35.408 "trtype": "tcp", 00:04:35.408 "method": "nvmf_get_transports", 00:04:35.408 "req_id": 1 00:04:35.408 } 00:04:35.408 Got JSON-RPC error response 00:04:35.408 response: 00:04:35.408 { 00:04:35.408 "code": -19, 00:04:35.408 "message": "No such device" 00:04:35.408 } 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.408 [2024-10-09 00:12:05.726981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.408 00:12:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json 00:04:35.408 { 00:04:35.408 "subsystems": [ 00:04:35.408 { 00:04:35.408 "subsystem": "fsdev", 00:04:35.408 "config": [ 00:04:35.408 { 00:04:35.408 "method": "fsdev_set_opts", 00:04:35.408 "params": { 00:04:35.408 "fsdev_io_pool_size": 65535, 00:04:35.408 "fsdev_io_cache_size": 256 00:04:35.408 } 00:04:35.408 } 00:04:35.408 ] 00:04:35.408 }, 00:04:35.408 { 00:04:35.408 "subsystem": "vfio_user_target", 00:04:35.408 "config": null 00:04:35.408 }, 00:04:35.408 { 00:04:35.408 "subsystem": "keyring", 00:04:35.408 "config": [] 00:04:35.408 }, 00:04:35.408 { 00:04:35.408 "subsystem": "iobuf", 00:04:35.408 "config": [ 00:04:35.408 { 00:04:35.408 "method": "iobuf_set_options", 00:04:35.408 "params": { 00:04:35.408 "small_pool_count": 8192, 00:04:35.408 "large_pool_count": 1024, 00:04:35.408 "small_bufsize": 8192, 00:04:35.408 "large_bufsize": 135168 00:04:35.408 } 00:04:35.408 } 00:04:35.408 ] 00:04:35.408 }, 00:04:35.408 { 
00:04:35.408 "subsystem": "sock", 00:04:35.408 "config": [ 00:04:35.408 { 00:04:35.408 "method": "sock_set_default_impl", 00:04:35.408 "params": { 00:04:35.408 "impl_name": "posix" 00:04:35.408 } 00:04:35.408 }, 00:04:35.408 { 00:04:35.408 "method": "sock_impl_set_options", 00:04:35.408 "params": { 00:04:35.408 "impl_name": "ssl", 00:04:35.408 "recv_buf_size": 4096, 00:04:35.408 "send_buf_size": 4096, 00:04:35.408 "enable_recv_pipe": true, 00:04:35.408 "enable_quickack": false, 00:04:35.408 "enable_placement_id": 0, 00:04:35.408 "enable_zerocopy_send_server": true, 00:04:35.408 "enable_zerocopy_send_client": false, 00:04:35.408 "zerocopy_threshold": 0, 00:04:35.408 "tls_version": 0, 00:04:35.408 "enable_ktls": false 00:04:35.408 } 00:04:35.408 }, 00:04:35.408 { 00:04:35.408 "method": "sock_impl_set_options", 00:04:35.408 "params": { 00:04:35.408 "impl_name": "posix", 00:04:35.408 "recv_buf_size": 2097152, 00:04:35.408 "send_buf_size": 2097152, 00:04:35.408 "enable_recv_pipe": true, 00:04:35.408 "enable_quickack": false, 00:04:35.408 "enable_placement_id": 0, 00:04:35.408 "enable_zerocopy_send_server": true, 00:04:35.408 "enable_zerocopy_send_client": false, 00:04:35.408 "zerocopy_threshold": 0, 00:04:35.408 "tls_version": 0, 00:04:35.408 "enable_ktls": false 00:04:35.408 } 00:04:35.408 } 00:04:35.408 ] 00:04:35.408 }, 00:04:35.408 { 00:04:35.408 "subsystem": "vmd", 00:04:35.408 "config": [] 00:04:35.408 }, 00:04:35.408 { 00:04:35.408 "subsystem": "accel", 00:04:35.408 "config": [ 00:04:35.408 { 00:04:35.408 "method": "accel_set_options", 00:04:35.408 "params": { 00:04:35.408 "small_cache_size": 128, 00:04:35.408 "large_cache_size": 16, 00:04:35.408 "task_count": 2048, 00:04:35.408 "sequence_count": 2048, 00:04:35.408 "buf_count": 2048 00:04:35.408 } 00:04:35.408 } 00:04:35.408 ] 00:04:35.408 }, 00:04:35.408 { 00:04:35.408 "subsystem": "bdev", 00:04:35.408 "config": [ 00:04:35.408 { 00:04:35.408 "method": "bdev_set_options", 00:04:35.408 "params": { 00:04:35.408 "bdev_io_pool_size": 65535, 00:04:35.408 "bdev_io_cache_size": 256, 00:04:35.408 "bdev_auto_examine": true, 00:04:35.408 "iobuf_small_cache_size": 128, 00:04:35.408 "iobuf_large_cache_size": 16 00:04:35.408 } 00:04:35.408 }, 00:04:35.408 { 00:04:35.408 "method": "bdev_raid_set_options", 00:04:35.408 "params": { 00:04:35.408 "process_window_size_kb": 1024, 00:04:35.408 "process_max_bandwidth_mb_sec": 0 00:04:35.408 } 00:04:35.408 }, 00:04:35.408 { 00:04:35.408 "method": "bdev_iscsi_set_options", 00:04:35.408 "params": { 00:04:35.408 "timeout_sec": 30 00:04:35.408 } 00:04:35.408 }, 00:04:35.408 { 00:04:35.408 "method": "bdev_nvme_set_options", 00:04:35.408 "params": { 00:04:35.408 "action_on_timeout": "none", 00:04:35.408 "timeout_us": 0, 00:04:35.408 "timeout_admin_us": 0, 00:04:35.408 "keep_alive_timeout_ms": 10000, 00:04:35.408 "arbitration_burst": 0, 00:04:35.408 "low_priority_weight": 0, 00:04:35.408 "medium_priority_weight": 0, 00:04:35.408 "high_priority_weight": 0, 00:04:35.408 "nvme_adminq_poll_period_us": 10000, 00:04:35.408 "nvme_ioq_poll_period_us": 0, 00:04:35.408 "io_queue_requests": 0, 00:04:35.408 "delay_cmd_submit": true, 00:04:35.408 "transport_retry_count": 4, 00:04:35.408 "bdev_retry_count": 3, 00:04:35.408 "transport_ack_timeout": 0, 00:04:35.408 "ctrlr_loss_timeout_sec": 0, 00:04:35.408 "reconnect_delay_sec": 0, 00:04:35.408 "fast_io_fail_timeout_sec": 0, 00:04:35.408 "disable_auto_failback": false, 00:04:35.409 "generate_uuids": false, 00:04:35.409 "transport_tos": 0, 00:04:35.409 "nvme_error_stat": false, 
00:04:35.409 "rdma_srq_size": 0, 00:04:35.409 "io_path_stat": false, 00:04:35.409 "allow_accel_sequence": false, 00:04:35.409 "rdma_max_cq_size": 0, 00:04:35.409 "rdma_cm_event_timeout_ms": 0, 00:04:35.409 "dhchap_digests": [ 00:04:35.409 "sha256", 00:04:35.409 "sha384", 00:04:35.409 "sha512" 00:04:35.409 ], 00:04:35.409 "dhchap_dhgroups": [ 00:04:35.409 "null", 00:04:35.409 "ffdhe2048", 00:04:35.409 "ffdhe3072", 00:04:35.409 "ffdhe4096", 00:04:35.409 "ffdhe6144", 00:04:35.409 "ffdhe8192" 00:04:35.409 ] 00:04:35.409 } 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "method": "bdev_nvme_set_hotplug", 00:04:35.409 "params": { 00:04:35.409 "period_us": 100000, 00:04:35.409 "enable": false 00:04:35.409 } 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "method": "bdev_wait_for_examine" 00:04:35.409 } 00:04:35.409 ] 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "subsystem": "scsi", 00:04:35.409 "config": null 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "subsystem": "scheduler", 00:04:35.409 "config": [ 00:04:35.409 { 00:04:35.409 "method": "framework_set_scheduler", 00:04:35.409 "params": { 00:04:35.409 "name": "static" 00:04:35.409 } 00:04:35.409 } 00:04:35.409 ] 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "subsystem": "vhost_scsi", 00:04:35.409 "config": [] 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "subsystem": "vhost_blk", 00:04:35.409 "config": [] 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "subsystem": "ublk", 00:04:35.409 "config": [] 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "subsystem": "nbd", 00:04:35.409 "config": [] 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "subsystem": "nvmf", 00:04:35.409 "config": [ 00:04:35.409 { 00:04:35.409 "method": "nvmf_set_config", 00:04:35.409 "params": { 00:04:35.409 "discovery_filter": "match_any", 00:04:35.409 "admin_cmd_passthru": { 00:04:35.409 "identify_ctrlr": false 00:04:35.409 }, 00:04:35.409 "dhchap_digests": [ 00:04:35.409 "sha256", 00:04:35.409 "sha384", 00:04:35.409 "sha512" 00:04:35.409 ], 00:04:35.409 "dhchap_dhgroups": [ 00:04:35.409 "null", 00:04:35.409 "ffdhe2048", 00:04:35.409 "ffdhe3072", 00:04:35.409 "ffdhe4096", 00:04:35.409 "ffdhe6144", 00:04:35.409 "ffdhe8192" 00:04:35.409 ] 00:04:35.409 } 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "method": "nvmf_set_max_subsystems", 00:04:35.409 "params": { 00:04:35.409 "max_subsystems": 1024 00:04:35.409 } 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "method": "nvmf_set_crdt", 00:04:35.409 "params": { 00:04:35.409 "crdt1": 0, 00:04:35.409 "crdt2": 0, 00:04:35.409 "crdt3": 0 00:04:35.409 } 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "method": "nvmf_create_transport", 00:04:35.409 "params": { 00:04:35.409 "trtype": "TCP", 00:04:35.409 "max_queue_depth": 128, 00:04:35.409 "max_io_qpairs_per_ctrlr": 127, 00:04:35.409 "in_capsule_data_size": 4096, 00:04:35.409 "max_io_size": 131072, 00:04:35.409 "io_unit_size": 131072, 00:04:35.409 "max_aq_depth": 128, 00:04:35.409 "num_shared_buffers": 511, 00:04:35.409 "buf_cache_size": 4294967295, 00:04:35.409 "dif_insert_or_strip": false, 00:04:35.409 "zcopy": false, 00:04:35.409 "c2h_success": true, 00:04:35.409 "sock_priority": 0, 00:04:35.409 "abort_timeout_sec": 1, 00:04:35.409 "ack_timeout": 0, 00:04:35.409 "data_wr_pool_size": 0 00:04:35.409 } 00:04:35.409 } 00:04:35.409 ] 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "subsystem": "iscsi", 00:04:35.409 "config": [ 00:04:35.409 { 00:04:35.410 "method": "iscsi_set_options", 00:04:35.410 "params": { 00:04:35.410 "node_base": "iqn.2016-06.io.spdk", 00:04:35.410 "max_sessions": 128, 00:04:35.410 
"max_connections_per_session": 2, 00:04:35.410 "max_queue_depth": 64, 00:04:35.410 "default_time2wait": 2, 00:04:35.410 "default_time2retain": 20, 00:04:35.410 "first_burst_length": 8192, 00:04:35.410 "immediate_data": true, 00:04:35.410 "allow_duplicated_isid": false, 00:04:35.410 "error_recovery_level": 0, 00:04:35.410 "nop_timeout": 60, 00:04:35.410 "nop_in_interval": 30, 00:04:35.410 "disable_chap": false, 00:04:35.410 "require_chap": false, 00:04:35.410 "mutual_chap": false, 00:04:35.410 "chap_group": 0, 00:04:35.410 "max_large_datain_per_connection": 64, 00:04:35.410 "max_r2t_per_connection": 4, 00:04:35.410 "pdu_pool_size": 36864, 00:04:35.410 "immediate_data_pool_size": 16384, 00:04:35.410 "data_out_pool_size": 2048 00:04:35.410 } 00:04:35.410 } 00:04:35.410 ] 00:04:35.410 } 00:04:35.410 ] 00:04:35.410 } 00:04:35.410 00:12:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:35.410 00:12:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1999192 00:04:35.410 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1999192 ']' 00:04:35.410 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1999192 00:04:35.410 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:35.410 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:35.410 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1999192 00:04:35.410 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:35.410 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:35.410 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1999192' 00:04:35.410 killing process with pid 1999192 00:04:35.410 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1999192 00:04:35.410 00:12:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1999192 00:04:37.945 00:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1999864 00:04:37.945 00:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:37.945 00:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json 00:04:43.212 00:12:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1999864 00:04:43.212 00:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1999864 ']' 00:04:43.212 00:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1999864 00:04:43.212 00:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:43.212 00:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.212 00:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1999864 00:04:43.212 00:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:43.212 00:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.212 00:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1999864' 00:04:43.212 killing process with pid 1999864 00:04:43.212 00:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1999864 00:04:43.212 00:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1999864 00:04:45.739 00:12:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt 00:04:45.739 00:12:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/log.txt 00:04:45.739 00:04:45.739 real 0m11.346s 00:04:45.739 user 0m10.924s 00:04:45.739 sys 0m0.878s 00:04:45.739 00:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.739 00:12:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.739 ************************************ 00:04:45.739 END TEST skip_rpc_with_json 00:04:45.739 ************************************ 00:04:45.739 00:12:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:45.739 00:12:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.739 00:12:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.739 00:12:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.739 ************************************ 00:04:45.739 START TEST skip_rpc_with_delay 00:04:45.739 ************************************ 00:04:45.739 00:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:45.739 00:12:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:45.739 00:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:45.739 00:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:45.739 00:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.739 00:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.739 00:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.740 00:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.740 00:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.740 00:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:45.740 00:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt 00:04:45.740 00:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:45.740 00:12:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:45.740 
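The spdk_tgt invocation above combines --no-rpc-server with --wait-for-rpc, which is exactly the error case skip_rpc_with_delay is after: the target cannot wait for an RPC that will never be served, so app.c rejects the combination, as the error lines that follow show. A minimal sketch of the expected non-zero exit (path abbreviated):

  # the two flags conflict, so spdk_tgt should refuse to start
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: conflicting flags were accepted" >&2
  fi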
[2024-10-09 00:12:16.025147] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:45.740 [2024-10-09 00:12:16.025242] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:45.740 00:12:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:45.740 00:12:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:45.740 00:12:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:45.740 00:12:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:45.740 00:04:45.740 real 0m0.143s 00:04:45.740 user 0m0.080s 00:04:45.740 sys 0m0.062s 00:04:45.740 00:12:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.740 00:12:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:45.740 ************************************ 00:04:45.740 END TEST skip_rpc_with_delay 00:04:45.740 ************************************ 00:04:45.740 00:12:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:45.740 00:12:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:45.740 00:12:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:45.740 00:12:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.740 00:12:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.740 00:12:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.740 ************************************ 00:04:45.740 START TEST exit_on_failed_rpc_init 00:04:45.740 ************************************ 00:04:45.740 00:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:45.740 00:12:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2001650 00:04:45.740 00:12:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2001650 00:04:45.740 00:12:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.740 00:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2001650 ']' 00:04:45.740 00:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.740 00:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:45.740 00:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.740 00:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:45.740 00:12:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:45.740 [2024-10-09 00:12:16.243803] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:04:45.740 [2024-10-09 00:12:16.243908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001650 ] 00:04:45.740 [2024-10-09 00:12:16.348693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.997 [2024-10-09 00:12:16.541890] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:46.936 00:12:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:46.936 [2024-10-09 00:12:17.432765] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:04:46.936 [2024-10-09 00:12:17.432865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001877 ] 00:04:46.936 [2024-10-09 00:12:17.533662] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.194 [2024-10-09 00:12:17.731802] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.194 [2024-10-09 00:12:17.731881] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:47.194 [2024-10-09 00:12:17.731897] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:47.194 [2024-10-09 00:12:17.731909] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2001650 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2001650 ']' 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2001650 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2001650 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2001650' 00:04:47.759 killing process with pid 2001650 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2001650 00:04:47.759 00:12:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2001650 00:04:50.285 00:04:50.285 real 0m4.475s 00:04:50.285 user 0m5.041s 00:04:50.285 sys 0m0.628s 00:04:50.285 00:12:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.285 00:12:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.285 ************************************ 00:04:50.285 END TEST exit_on_failed_rpc_init 00:04:50.285 ************************************ 00:04:50.285 00:12:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc/config.json 00:04:50.285 00:04:50.285 real 0m23.946s 00:04:50.285 user 0m23.396s 00:04:50.285 sys 0m2.256s 00:04:50.285 00:12:20 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.285 00:12:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.285 ************************************ 00:04:50.285 END TEST skip_rpc 00:04:50.285 ************************************ 00:04:50.285 00:12:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:50.285 00:12:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.285 00:12:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.285 00:12:20 -- 
common/autotest_common.sh@10 -- # set +x 00:04:50.285 ************************************ 00:04:50.285 START TEST rpc_client 00:04:50.285 ************************************ 00:04:50.285 00:12:20 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:50.285 * Looking for test storage... 00:04:50.285 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client 00:04:50.285 00:12:20 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:50.285 00:12:20 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:50.285 00:12:20 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:50.285 00:12:20 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.285 00:12:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:50.285 00:12:20 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.285 00:12:20 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:50.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.285 --rc genhtml_branch_coverage=1 00:04:50.285 --rc genhtml_function_coverage=1 00:04:50.285 --rc genhtml_legend=1 00:04:50.285 --rc geninfo_all_blocks=1 00:04:50.285 --rc geninfo_unexecuted_blocks=1 00:04:50.285 00:04:50.285 ' 00:04:50.285 00:12:20 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:50.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.285 --rc genhtml_branch_coverage=1 00:04:50.285 --rc genhtml_function_coverage=1 00:04:50.285 --rc genhtml_legend=1 00:04:50.285 --rc geninfo_all_blocks=1 00:04:50.285 --rc geninfo_unexecuted_blocks=1 00:04:50.285 00:04:50.285 ' 00:04:50.285 00:12:20 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:50.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.285 --rc genhtml_branch_coverage=1 00:04:50.285 --rc genhtml_function_coverage=1 00:04:50.285 --rc genhtml_legend=1 00:04:50.285 --rc geninfo_all_blocks=1 00:04:50.285 --rc geninfo_unexecuted_blocks=1 00:04:50.285 00:04:50.285 ' 00:04:50.285 00:12:20 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:50.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.285 --rc genhtml_branch_coverage=1 00:04:50.285 --rc genhtml_function_coverage=1 00:04:50.285 --rc genhtml_legend=1 00:04:50.285 --rc geninfo_all_blocks=1 00:04:50.285 --rc geninfo_unexecuted_blocks=1 00:04:50.285 00:04:50.285 ' 00:04:50.285 00:12:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:50.543 OK 00:04:50.543 00:12:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:50.543 00:04:50.543 real 0m0.228s 00:04:50.543 user 0m0.125s 00:04:50.543 sys 0m0.114s 00:04:50.543 00:12:20 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.543 00:12:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:50.543 ************************************ 00:04:50.544 END TEST rpc_client 00:04:50.544 ************************************ 00:04:50.544 00:12:21 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config.sh 
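The xtrace run above (scripts/common.sh, `lt 1.15 2`) splits both version strings on `.`, `-`, and `:` into arrays and walks them component by component, treating missing components as zero. A minimal standalone sketch of that comparison; the function name `version_lt` is illustrative, not the script's actual `lt`/`cmp_versions` helpers:

```bash
#!/usr/bin/env bash
# Standalone sketch of the component-wise version comparison traced above.
version_lt() {
    local -a ver1 ver2
    local i len a b
    IFS='.-:' read -ra ver1 <<< "$1"   # same IFS=.-: split as in the trace
    IFS='.-:' read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                              # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the traced result
```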
00:04:50.544 00:12:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.544 00:12:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.544 00:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:50.544 ************************************ 00:04:50.544 START TEST json_config 00:04:50.544 ************************************ 00:04:50.544 00:12:21 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config.sh 00:04:50.544 00:12:21 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:50.544 00:12:21 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:50.544 00:12:21 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:50.544 00:12:21 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:50.544 00:12:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.544 00:12:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.544 00:12:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.544 00:12:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.544 00:12:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.544 00:12:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.544 00:12:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.544 00:12:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.544 00:12:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.544 00:12:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.544 00:12:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.544 00:12:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:50.544 00:12:21 json_config -- scripts/common.sh@345 -- # : 1 00:04:50.544 00:12:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.544 00:12:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.544 00:12:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:50.544 00:12:21 json_config -- scripts/common.sh@353 -- # local d=1 00:04:50.544 00:12:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.544 00:12:21 json_config -- scripts/common.sh@355 -- # echo 1 00:04:50.544 00:12:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.544 00:12:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:50.544 00:12:21 json_config -- scripts/common.sh@353 -- # local d=2 00:04:50.544 00:12:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.544 00:12:21 json_config -- scripts/common.sh@355 -- # echo 2 00:04:50.803 00:12:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.803 00:12:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.803 00:12:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.803 00:12:21 json_config -- scripts/common.sh@368 -- # return 0 00:04:50.803 00:12:21 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.803 00:12:21 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:50.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.804 --rc genhtml_branch_coverage=1 00:04:50.804 --rc genhtml_function_coverage=1 00:04:50.804 --rc genhtml_legend=1 00:04:50.804 --rc geninfo_all_blocks=1 00:04:50.804 --rc geninfo_unexecuted_blocks=1 00:04:50.804 00:04:50.804 ' 00:04:50.804 00:12:21 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:50.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.804 --rc genhtml_branch_coverage=1 00:04:50.804 --rc genhtml_function_coverage=1 00:04:50.804 --rc genhtml_legend=1 00:04:50.804 --rc geninfo_all_blocks=1 00:04:50.804 --rc geninfo_unexecuted_blocks=1 00:04:50.804 00:04:50.804 ' 00:04:50.804 00:12:21 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:50.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.804 --rc genhtml_branch_coverage=1 00:04:50.804 --rc genhtml_function_coverage=1 00:04:50.804 --rc genhtml_legend=1 00:04:50.804 --rc geninfo_all_blocks=1 00:04:50.804 --rc geninfo_unexecuted_blocks=1 00:04:50.804 00:04:50.804 ' 00:04:50.804 00:12:21 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:50.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.804 --rc genhtml_branch_coverage=1 00:04:50.804 --rc genhtml_function_coverage=1 00:04:50.804 --rc genhtml_legend=1 00:04:50.804 --rc geninfo_all_blocks=1 00:04:50.804 --rc geninfo_unexecuted_blocks=1 00:04:50.804 00:04:50.804 ' 00:04:50.804 00:12:21 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:50.804 00:12:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh 00:04:50.804 00:12:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.804 00:12:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.804 00:12:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.804 00:12:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.804 00:12:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.804 00:12:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.804 00:12:21 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.804 00:12:21 json_config -- paths/export.sh@5 -- # export PATH 00:04:50.804 00:12:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@51 -- # : 0 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
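The `nvme gen-hostnqn` step above yields an NQN of the form `nqn.2014-08.org.nvmexpress:uuid:<uuid>`, and NVME_HOSTID is simply the UUID carried inside it. A short sketch of that derivation; the `uuidgen` fallback is this sketch's assumption, not part of nvmf/common.sh:

```bash
#!/usr/bin/env bash
# Derive the host NQN/ID pair the way nvmf/common.sh does above.
if command -v nvme >/dev/null 2>&1; then
    NVME_HOSTNQN=$(nvme gen-hostnqn)
else
    # Fallback for hosts without nvme-cli (assumption, not in the script).
    NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
fi
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}            # the UUID embedded in the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

printf 'NQN:    %s\nHostID: %s\n' "$NVME_HOSTNQN" "$NVME_HOSTID"
```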
00:04:50.804 00:12:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.804 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.804 00:12:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.804 00:12:21 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/common.sh 00:04:50.804 00:12:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:50.804 00:12:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:50.804 00:12:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:50.804 00:12:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:50.804 00:12:21 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:50.804 WARNING: No tests are enabled so not running JSON configuration tests 00:04:50.804 00:12:21 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:50.804 00:04:50.804 real 0m0.178s 00:04:50.804 user 0m0.112s 00:04:50.804 sys 0m0.071s 00:04:50.804 00:12:21 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.804 00:12:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.804 ************************************ 00:04:50.804 END TEST json_config 00:04:50.804 ************************************ 00:04:50.804 00:12:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:50.804 00:12:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.804 00:12:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.804 00:12:21 -- common/autotest_common.sh@10 -- # set +x 00:04:50.804 ************************************ 00:04:50.804 START TEST json_config_extra_key 00:04:50.804 ************************************ 00:04:50.804 00:12:21 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:50.804 00:12:21 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:50.804 00:12:21 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:04:50.804 00:12:21 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:50.804 00:12:21 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@336 -- # 
read -ra ver1 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.804 00:12:21 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:50.804 00:12:21 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.804 00:12:21 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:50.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.804 --rc genhtml_branch_coverage=1 00:04:50.804 --rc genhtml_function_coverage=1 00:04:50.804 --rc genhtml_legend=1 00:04:50.804 --rc geninfo_all_blocks=1 00:04:50.804 --rc geninfo_unexecuted_blocks=1 00:04:50.804 00:04:50.804 ' 00:04:50.804 00:12:21 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:50.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.804 --rc genhtml_branch_coverage=1 00:04:50.804 --rc genhtml_function_coverage=1 00:04:50.804 --rc genhtml_legend=1 00:04:50.804 --rc geninfo_all_blocks=1 00:04:50.804 --rc geninfo_unexecuted_blocks=1 00:04:50.804 00:04:50.804 ' 00:04:50.804 00:12:21 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:50.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.805 --rc genhtml_branch_coverage=1 00:04:50.805 --rc genhtml_function_coverage=1 00:04:50.805 --rc genhtml_legend=1 00:04:50.805 --rc geninfo_all_blocks=1 00:04:50.805 --rc geninfo_unexecuted_blocks=1 00:04:50.805 00:04:50.805 ' 00:04:50.805 00:12:21 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:50.805 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:04:50.805 --rc genhtml_branch_coverage=1 00:04:50.805 --rc genhtml_function_coverage=1 00:04:50.805 --rc genhtml_legend=1 00:04:50.805 --rc geninfo_all_blocks=1 00:04:50.805 --rc geninfo_unexecuted_blocks=1 00:04:50.805 00:04:50.805 ' 00:04:50.805 00:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.805 00:12:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:50.805 00:12:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.805 00:12:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.805 00:12:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.805 00:12:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.805 00:12:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.805 00:12:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.805 00:12:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.805 00:12:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.805 00:12:21 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.064 00:12:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.064 00:12:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:51.064 00:12:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:51.064 00:12:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.064 00:12:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.064 00:12:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.064 00:12:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.065 00:12:21 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh 00:04:51.065 00:12:21 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:51.065 00:12:21 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.065 00:12:21 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.065 00:12:21 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.065 00:12:21 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.065 00:12:21 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.065 00:12:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.065 00:12:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:51.065 00:12:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.065 00:12:21 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:51.065 00:12:21 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:51.065 00:12:21 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:51.065 00:12:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.065 00:12:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.065 00:12:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.065 00:12:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:51.065 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:51.065 00:12:21 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:51.065 00:12:21 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:51.065 00:12:21 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.065 00:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/common.sh 00:04:51.065 00:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:51.065 00:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:51.065 00:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:51.065 00:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:51.065 00:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:51.065 00:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:51.065 00:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:51.065 00:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:51.065 00:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.065 00:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:51.065 INFO: launching applications... 00:04:51.065 00:12:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json 00:04:51.065 00:12:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:51.065 00:12:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:51.065 00:12:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.065 00:12:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.065 00:12:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.065 00:12:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.065 00:12:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.065 00:12:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2002724 00:04:51.065 00:12:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.065 Waiting for target to run... 00:04:51.065 00:12:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2002724 /var/tmp/spdk_tgt.sock 00:04:51.065 00:12:21 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2002724 ']' 00:04:51.065 00:12:21 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/extra_key.json 00:04:51.065 00:12:21 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.065 00:12:21 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.065 00:12:21 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.065 00:12:21 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.065 00:12:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.065 [2024-10-09 00:12:21.555901] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
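The `[: : integer expression expected` complaint that appears twice above (nvmf/common.sh line 33) is the classic failure mode of `[ "$var" -eq 1 ]` when the variable is empty or unset. A small sketch reproducing the error and one way to guard against it; the variable name here is hypothetical:

```bash
#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" error from the log
# above, then guard it. NIC_COUNT is a hypothetical stand-in variable.
NIC_COUNT=""                                    # empty, as in the traced run

[ "$NIC_COUNT" -eq 1 ] 2>/dev/null || echo "bare -eq on an empty value fails"

# Guarded form: default to 0 when the variable is empty or unset.
if [ "${NIC_COUNT:-0}" -eq 1 ]; then
    echo "feature enabled"
else
    echo "feature disabled"
fi
```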
00:04:51.065 [2024-10-09 00:12:21.555984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002724 ] 00:04:51.631 [2024-10-09 00:12:22.036508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.631 [2024-10-09 00:12:22.245193] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.564 00:12:22 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.564 00:12:22 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:52.564 00:12:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:52.564 00:04:52.564 00:12:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:52.564 INFO: shutting down applications... 00:04:52.564 00:12:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:52.564 00:12:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:52.564 00:12:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:52.564 00:12:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2002724 ]] 00:04:52.564 00:12:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2002724 00:04:52.564 00:12:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:52.564 00:12:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.564 00:12:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2002724 00:04:52.564 00:12:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.822 00:12:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.822 00:12:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.822 00:12:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2002724 00:04:52.822 00:12:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.387 00:12:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.387 00:12:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.388 00:12:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2002724 00:04:53.388 00:12:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.958 00:12:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.958 00:12:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.958 00:12:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2002724 00:04:53.958 00:12:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.523 00:12:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.523 00:12:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.523 00:12:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2002724 00:04:54.523 00:12:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.173 00:12:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.173 00:12:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.173 00:12:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2002724 
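`waitforlisten 2002724 /var/tmp/spdk_tgt.sock` above blocks until the freshly launched spdk_tgt answers on its RPC socket, with `max_retries=100` as seen in the trace. A simplified stand-in for that polling pattern; the real helper in autotest_common.sh validates more than this sketch does:

```bash
#!/usr/bin/env bash
# Simplified waitforlisten stand-in: poll until the target's RPC UNIX
# socket exists while the process is still alive, or give up.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} max_retries=${3:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died early
        [ -S "$sock" ] && return 0               # socket is listening
        sleep 0.1
    done
    return 1
}

# Usage (paths are illustrative):
#   build/bin/spdk_tgt -r /var/tmp/spdk_tgt.sock &
#   wait_for_rpc_sock "$!" /var/tmp/spdk_tgt.sock || exit 1
```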
00:04:55.173 00:12:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.506 00:12:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.506 00:12:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.506 00:12:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2002724 00:04:55.506 00:12:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:55.506 00:12:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:55.506 00:12:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:55.506 00:12:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:55.506 SPDK target shutdown done 00:04:55.506 00:12:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:55.506 Success 00:04:55.506 00:04:55.506 real 0m4.655s 00:04:55.506 user 0m3.999s 00:04:55.506 sys 0m0.678s 00:04:55.506 00:12:25 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.506 00:12:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:55.506 ************************************ 00:04:55.506 END TEST json_config_extra_key 00:04:55.506 ************************************ 00:04:55.506 00:12:25 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.506 00:12:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.506 00:12:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.506 00:12:25 -- common/autotest_common.sh@10 -- # set +x 00:04:55.506 ************************************ 00:04:55.506 START TEST alias_rpc 00:04:55.506 ************************************ 00:04:55.506 00:12:26 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:55.506 * Looking for test storage... 
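The shutdown sequence that just completed (`kill -SIGINT`, then up to 30 `kill -0` probes half a second apart, ending in "SPDK target shutdown done") is worth seeing in one place. A condensed sketch of json_config/common.sh's loop as traced above; the timeout handling is simplified here:

```bash
#!/usr/bin/env bash
# Condensed sketch of the shutdown loop traced above: SIGINT first,
# then up to 30 kill -0 probes at 0.5 s intervals.
shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "SPDK target shutdown done"
            return 0
        fi
        sleep 0.5
    done
    echo "pid $pid still alive after SIGINT" >&2   # simplified error path
    return 1
}
```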
00:04:55.506 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/alias_rpc 00:04:55.506 00:12:26 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:55.506 00:12:26 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:55.506 00:12:26 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:55.765 00:12:26 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.765 00:12:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:55.765 00:12:26 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.765 00:12:26 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:55.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.765 --rc genhtml_branch_coverage=1 00:04:55.765 --rc genhtml_function_coverage=1 00:04:55.765 --rc genhtml_legend=1 00:04:55.765 --rc geninfo_all_blocks=1 00:04:55.765 --rc geninfo_unexecuted_blocks=1 00:04:55.765 00:04:55.765 ' 00:04:55.765 00:12:26 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:55.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.765 --rc genhtml_branch_coverage=1 00:04:55.765 --rc genhtml_function_coverage=1 00:04:55.765 --rc genhtml_legend=1 00:04:55.765 --rc geninfo_all_blocks=1 00:04:55.765 --rc geninfo_unexecuted_blocks=1 00:04:55.765 00:04:55.765 ' 00:04:55.765 00:12:26 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:55.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.765 --rc genhtml_branch_coverage=1 00:04:55.765 --rc genhtml_function_coverage=1 00:04:55.765 --rc genhtml_legend=1 00:04:55.765 --rc geninfo_all_blocks=1 00:04:55.765 --rc geninfo_unexecuted_blocks=1 00:04:55.765 00:04:55.765 ' 00:04:55.765 00:12:26 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:55.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.765 --rc genhtml_branch_coverage=1 00:04:55.765 --rc genhtml_function_coverage=1 00:04:55.765 --rc genhtml_legend=1 00:04:55.765 --rc geninfo_all_blocks=1 00:04:55.765 --rc geninfo_unexecuted_blocks=1 00:04:55.765 00:04:55.765 ' 00:04:55.765 00:12:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:55.765 00:12:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2003504 00:04:55.765 00:12:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2003504 00:04:55.765 00:12:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.765 00:12:26 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2003504 ']' 00:04:55.765 00:12:26 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.765 00:12:26 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.765 00:12:26 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.765 00:12:26 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.765 00:12:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.765 [2024-10-09 00:12:26.266333] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
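alias_rpc.sh's `trap 'killprocess $spdk_tgt_pid; exit 1' ERR` above ties target cleanup to any command failure for the rest of the script. A self-contained reproduction of the pattern, with `sleep` standing in for spdk_tgt and a stub `killprocess` in place of the autotest helper:

```bash
#!/usr/bin/env bash
# Minimal reproduction of the ERR-trap cleanup pattern traced above.
killprocess() { kill "$1" 2>/dev/null; wait "$1" 2>/dev/null; }

sleep 300 &                       # stand-in for the long-running spdk_tgt
spdk_tgt_pid=$!
trap 'killprocess $spdk_tgt_pid; exit 1' ERR

false                 # any failing command now kills the target and exits
echo "never reached"
```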
00:04:55.765 [2024-10-09 00:12:26.266427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003504 ] 00:04:55.765 [2024-10-09 00:12:26.371604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.023 [2024-10-09 00:12:26.562188] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.955 00:12:27 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.955 00:12:27 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:56.955 00:12:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:56.955 00:12:27 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2003504 00:04:56.955 00:12:27 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2003504 ']' 00:04:56.955 00:12:27 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2003504 00:04:56.955 00:12:27 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:56.955 00:12:27 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.955 00:12:27 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2003504 00:04:57.213 00:12:27 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:57.213 00:12:27 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:57.213 00:12:27 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2003504' 00:04:57.213 killing process with pid 2003504 00:04:57.213 00:12:27 alias_rpc -- common/autotest_common.sh@969 -- # kill 2003504 00:04:57.213 00:12:27 alias_rpc -- common/autotest_common.sh@974 -- # wait 2003504 00:04:59.749 00:04:59.749 real 0m4.061s 00:04:59.749 user 0m4.072s 00:04:59.749 sys 0m0.572s 00:04:59.749 00:12:30 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.749 00:12:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.749 ************************************ 00:04:59.749 END TEST alias_rpc 00:04:59.749 ************************************ 00:04:59.749 00:12:30 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:59.749 00:12:30 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:59.749 00:12:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.749 00:12:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.749 00:12:30 -- common/autotest_common.sh@10 -- # set +x 00:04:59.749 ************************************ 00:04:59.749 START TEST spdkcli_tcp 00:04:59.749 ************************************ 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:59.749 * Looking for test storage... 
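The killprocess trace above follows a fixed recipe: confirm the PID is non-empty and alive with `kill -0`, look up the command name via `ps --no-headers -o comm=` (so a `sudo` wrapper can be special-cased; here it resolves to `reactor_0`), send the signal, then `wait` to reap. A condensed sketch; the sudo branch is simplified relative to the real helper:

```bash
#!/usr/bin/env bash
# Condensed sketch of the killprocess recipe traced above (Linux path).
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1            # nothing to kill
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    if [ "$process_name" = sudo ]; then
        sudo kill "$pid"   # simplified; the real helper treats sudo specially
    else
        kill "$pid"
    fi
    echo "killing process with pid $pid"
    wait "$pid" 2>/dev/null                           # reap if it is our child
}
```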
00:04:59.749 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.749 00:12:30 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:59.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.749 --rc genhtml_branch_coverage=1 00:04:59.749 --rc genhtml_function_coverage=1 00:04:59.749 --rc genhtml_legend=1 00:04:59.749 --rc geninfo_all_blocks=1 00:04:59.749 --rc geninfo_unexecuted_blocks=1 00:04:59.749 00:04:59.749 ' 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:59.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.749 --rc genhtml_branch_coverage=1 00:04:59.749 --rc genhtml_function_coverage=1 00:04:59.749 --rc genhtml_legend=1 00:04:59.749 --rc geninfo_all_blocks=1 00:04:59.749 --rc 
geninfo_unexecuted_blocks=1 00:04:59.749 00:04:59.749 ' 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:59.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.749 --rc genhtml_branch_coverage=1 00:04:59.749 --rc genhtml_function_coverage=1 00:04:59.749 --rc genhtml_legend=1 00:04:59.749 --rc geninfo_all_blocks=1 00:04:59.749 --rc geninfo_unexecuted_blocks=1 00:04:59.749 00:04:59.749 ' 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:59.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.749 --rc genhtml_branch_coverage=1 00:04:59.749 --rc genhtml_function_coverage=1 00:04:59.749 --rc genhtml_legend=1 00:04:59.749 --rc geninfo_all_blocks=1 00:04:59.749 --rc geninfo_unexecuted_blocks=1 00:04:59.749 00:04:59.749 ' 00:04:59.749 00:12:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/common.sh 00:04:59.749 00:12:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:59.749 00:12:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/json_config/clear_config.py 00:04:59.749 00:12:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:59.749 00:12:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:59.749 00:12:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:59.749 00:12:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.749 00:12:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2004318 00:04:59.749 00:12:30 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2004318 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2004318 ']' 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.749 00:12:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.749 00:12:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:00.006 [2024-10-09 00:12:30.397020] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
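The spdkcli_tcp test drives the target's UNIX-domain RPC socket over TCP: a socat bridge listens on 127.0.0.1:9998 (the IP_ADDRESS/PORT set above) and forwards to /var/tmp/spdk.sock, as the entries just below show. A minimal sketch of that bridge; the `bind`, `reuseaddr`, and `fork` options are this sketch's additions for robustness, not part of the traced command:

```bash
#!/usr/bin/env bash
# Bridge TCP 127.0.0.1:9998 to the SPDK RPC UNIX socket, as spdkcli/tcp.sh
# does below. fork lets the bridge serve more than one connection.
RPC_SOCK=/var/tmp/spdk.sock
IP=127.0.0.1
PORT=9998

socat TCP-LISTEN:"$PORT",bind="$IP",reuseaddr,fork UNIX-CONNECT:"$RPC_SOCK" &
socat_pid=$!

# RPC clients can now use TCP, e.g. (rpc.py path is illustrative):
#   scripts/rpc.py -r 100 -t 2 -s "$IP" -p "$PORT" rpc_get_methods

kill "$socat_pid" 2>/dev/null    # tear the bridge down when finished
```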
00:05:00.006 [2024-10-09 00:12:30.397117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004318 ] 00:05:00.006 [2024-10-09 00:12:30.502362] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.263 [2024-10-09 00:12:30.701152] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.263 [2024-10-09 00:12:30.701160] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.195 00:12:31 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:01.195 00:12:31 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:01.195 00:12:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2004463 00:05:01.195 00:12:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:01.195 00:12:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:01.195 [ 00:05:01.195 "bdev_malloc_delete", 00:05:01.195 "bdev_malloc_create", 00:05:01.195 "bdev_null_resize", 00:05:01.195 "bdev_null_delete", 00:05:01.195 "bdev_null_create", 00:05:01.195 "bdev_nvme_cuse_unregister", 00:05:01.195 "bdev_nvme_cuse_register", 00:05:01.195 "bdev_opal_new_user", 00:05:01.195 "bdev_opal_set_lock_state", 00:05:01.195 "bdev_opal_delete", 00:05:01.195 "bdev_opal_get_info", 00:05:01.195 "bdev_opal_create", 00:05:01.195 "bdev_nvme_opal_revert", 00:05:01.196 "bdev_nvme_opal_init", 00:05:01.196 "bdev_nvme_send_cmd", 00:05:01.196 "bdev_nvme_set_keys", 00:05:01.196 "bdev_nvme_get_path_iostat", 00:05:01.196 "bdev_nvme_get_mdns_discovery_info", 00:05:01.196 "bdev_nvme_stop_mdns_discovery", 00:05:01.196 "bdev_nvme_start_mdns_discovery", 00:05:01.196 "bdev_nvme_set_multipath_policy", 00:05:01.196 "bdev_nvme_set_preferred_path", 00:05:01.196 "bdev_nvme_get_io_paths", 00:05:01.196 "bdev_nvme_remove_error_injection", 00:05:01.196 "bdev_nvme_add_error_injection", 00:05:01.196 "bdev_nvme_get_discovery_info", 00:05:01.196 "bdev_nvme_stop_discovery", 00:05:01.196 "bdev_nvme_start_discovery", 00:05:01.196 "bdev_nvme_get_controller_health_info", 00:05:01.196 "bdev_nvme_disable_controller", 00:05:01.196 "bdev_nvme_enable_controller", 00:05:01.196 "bdev_nvme_reset_controller", 00:05:01.196 "bdev_nvme_get_transport_statistics", 00:05:01.196 "bdev_nvme_apply_firmware", 00:05:01.196 "bdev_nvme_detach_controller", 00:05:01.196 "bdev_nvme_get_controllers", 00:05:01.196 "bdev_nvme_attach_controller", 00:05:01.196 "bdev_nvme_set_hotplug", 00:05:01.196 "bdev_nvme_set_options", 00:05:01.196 "bdev_passthru_delete", 00:05:01.196 "bdev_passthru_create", 00:05:01.196 "bdev_lvol_set_parent_bdev", 00:05:01.196 "bdev_lvol_set_parent", 00:05:01.196 "bdev_lvol_check_shallow_copy", 00:05:01.196 "bdev_lvol_start_shallow_copy", 00:05:01.196 "bdev_lvol_grow_lvstore", 00:05:01.196 "bdev_lvol_get_lvols", 00:05:01.196 "bdev_lvol_get_lvstores", 00:05:01.196 "bdev_lvol_delete", 00:05:01.196 "bdev_lvol_set_read_only", 00:05:01.196 "bdev_lvol_resize", 00:05:01.196 "bdev_lvol_decouple_parent", 00:05:01.196 "bdev_lvol_inflate", 00:05:01.196 "bdev_lvol_rename", 00:05:01.196 "bdev_lvol_clone_bdev", 00:05:01.196 "bdev_lvol_clone", 00:05:01.196 "bdev_lvol_snapshot", 00:05:01.196 "bdev_lvol_create", 00:05:01.196 "bdev_lvol_delete_lvstore", 00:05:01.196 "bdev_lvol_rename_lvstore", 
00:05:01.196 "bdev_lvol_create_lvstore", 00:05:01.196 "bdev_raid_set_options", 00:05:01.196 "bdev_raid_remove_base_bdev", 00:05:01.196 "bdev_raid_add_base_bdev", 00:05:01.196 "bdev_raid_delete", 00:05:01.196 "bdev_raid_create", 00:05:01.196 "bdev_raid_get_bdevs", 00:05:01.196 "bdev_error_inject_error", 00:05:01.196 "bdev_error_delete", 00:05:01.196 "bdev_error_create", 00:05:01.196 "bdev_split_delete", 00:05:01.196 "bdev_split_create", 00:05:01.196 "bdev_delay_delete", 00:05:01.196 "bdev_delay_create", 00:05:01.196 "bdev_delay_update_latency", 00:05:01.196 "bdev_zone_block_delete", 00:05:01.196 "bdev_zone_block_create", 00:05:01.196 "blobfs_create", 00:05:01.196 "blobfs_detect", 00:05:01.196 "blobfs_set_cache_size", 00:05:01.196 "bdev_crypto_delete", 00:05:01.196 "bdev_crypto_create", 00:05:01.196 "bdev_aio_delete", 00:05:01.196 "bdev_aio_rescan", 00:05:01.196 "bdev_aio_create", 00:05:01.196 "bdev_ftl_set_property", 00:05:01.196 "bdev_ftl_get_properties", 00:05:01.196 "bdev_ftl_get_stats", 00:05:01.196 "bdev_ftl_unmap", 00:05:01.196 "bdev_ftl_unload", 00:05:01.196 "bdev_ftl_delete", 00:05:01.196 "bdev_ftl_load", 00:05:01.196 "bdev_ftl_create", 00:05:01.196 "bdev_virtio_attach_controller", 00:05:01.196 "bdev_virtio_scsi_get_devices", 00:05:01.196 "bdev_virtio_detach_controller", 00:05:01.196 "bdev_virtio_blk_set_hotplug", 00:05:01.196 "bdev_iscsi_delete", 00:05:01.196 "bdev_iscsi_create", 00:05:01.196 "bdev_iscsi_set_options", 00:05:01.196 "accel_error_inject_error", 00:05:01.196 "ioat_scan_accel_module", 00:05:01.196 "dsa_scan_accel_module", 00:05:01.196 "iaa_scan_accel_module", 00:05:01.196 "dpdk_cryptodev_get_driver", 00:05:01.196 "dpdk_cryptodev_set_driver", 00:05:01.196 "dpdk_cryptodev_scan_accel_module", 00:05:01.196 "vfu_virtio_create_fs_endpoint", 00:05:01.196 "vfu_virtio_create_scsi_endpoint", 00:05:01.196 "vfu_virtio_scsi_remove_target", 00:05:01.196 "vfu_virtio_scsi_add_target", 00:05:01.196 "vfu_virtio_create_blk_endpoint", 00:05:01.196 "vfu_virtio_delete_endpoint", 00:05:01.196 "keyring_file_remove_key", 00:05:01.196 "keyring_file_add_key", 00:05:01.196 "keyring_linux_set_options", 00:05:01.196 "fsdev_aio_delete", 00:05:01.196 "fsdev_aio_create", 00:05:01.196 "iscsi_get_histogram", 00:05:01.196 "iscsi_enable_histogram", 00:05:01.196 "iscsi_set_options", 00:05:01.196 "iscsi_get_auth_groups", 00:05:01.196 "iscsi_auth_group_remove_secret", 00:05:01.196 "iscsi_auth_group_add_secret", 00:05:01.196 "iscsi_delete_auth_group", 00:05:01.196 "iscsi_create_auth_group", 00:05:01.196 "iscsi_set_discovery_auth", 00:05:01.196 "iscsi_get_options", 00:05:01.196 "iscsi_target_node_request_logout", 00:05:01.196 "iscsi_target_node_set_redirect", 00:05:01.196 "iscsi_target_node_set_auth", 00:05:01.196 "iscsi_target_node_add_lun", 00:05:01.196 "iscsi_get_stats", 00:05:01.196 "iscsi_get_connections", 00:05:01.196 "iscsi_portal_group_set_auth", 00:05:01.196 "iscsi_start_portal_group", 00:05:01.196 "iscsi_delete_portal_group", 00:05:01.196 "iscsi_create_portal_group", 00:05:01.196 "iscsi_get_portal_groups", 00:05:01.196 "iscsi_delete_target_node", 00:05:01.196 "iscsi_target_node_remove_pg_ig_maps", 00:05:01.196 "iscsi_target_node_add_pg_ig_maps", 00:05:01.196 "iscsi_create_target_node", 00:05:01.196 "iscsi_get_target_nodes", 00:05:01.196 "iscsi_delete_initiator_group", 00:05:01.196 "iscsi_initiator_group_remove_initiators", 00:05:01.196 "iscsi_initiator_group_add_initiators", 00:05:01.196 "iscsi_create_initiator_group", 00:05:01.196 "iscsi_get_initiator_groups", 00:05:01.196 "nvmf_set_crdt", 
00:05:01.196 "nvmf_set_config", 00:05:01.196 "nvmf_set_max_subsystems", 00:05:01.196 "nvmf_stop_mdns_prr", 00:05:01.196 "nvmf_publish_mdns_prr", 00:05:01.196 "nvmf_subsystem_get_listeners", 00:05:01.196 "nvmf_subsystem_get_qpairs", 00:05:01.196 "nvmf_subsystem_get_controllers", 00:05:01.196 "nvmf_get_stats", 00:05:01.196 "nvmf_get_transports", 00:05:01.196 "nvmf_create_transport", 00:05:01.196 "nvmf_get_targets", 00:05:01.196 "nvmf_delete_target", 00:05:01.196 "nvmf_create_target", 00:05:01.196 "nvmf_subsystem_allow_any_host", 00:05:01.196 "nvmf_subsystem_set_keys", 00:05:01.196 "nvmf_subsystem_remove_host", 00:05:01.196 "nvmf_subsystem_add_host", 00:05:01.196 "nvmf_ns_remove_host", 00:05:01.196 "nvmf_ns_add_host", 00:05:01.196 "nvmf_subsystem_remove_ns", 00:05:01.196 "nvmf_subsystem_set_ns_ana_group", 00:05:01.196 "nvmf_subsystem_add_ns", 00:05:01.196 "nvmf_subsystem_listener_set_ana_state", 00:05:01.196 "nvmf_discovery_get_referrals", 00:05:01.196 "nvmf_discovery_remove_referral", 00:05:01.196 "nvmf_discovery_add_referral", 00:05:01.196 "nvmf_subsystem_remove_listener", 00:05:01.196 "nvmf_subsystem_add_listener", 00:05:01.196 "nvmf_delete_subsystem", 00:05:01.196 "nvmf_create_subsystem", 00:05:01.196 "nvmf_get_subsystems", 00:05:01.196 "env_dpdk_get_mem_stats", 00:05:01.196 "nbd_get_disks", 00:05:01.196 "nbd_stop_disk", 00:05:01.196 "nbd_start_disk", 00:05:01.196 "ublk_recover_disk", 00:05:01.196 "ublk_get_disks", 00:05:01.196 "ublk_stop_disk", 00:05:01.196 "ublk_start_disk", 00:05:01.196 "ublk_destroy_target", 00:05:01.196 "ublk_create_target", 00:05:01.196 "virtio_blk_create_transport", 00:05:01.196 "virtio_blk_get_transports", 00:05:01.196 "vhost_controller_set_coalescing", 00:05:01.196 "vhost_get_controllers", 00:05:01.196 "vhost_delete_controller", 00:05:01.196 "vhost_create_blk_controller", 00:05:01.196 "vhost_scsi_controller_remove_target", 00:05:01.196 "vhost_scsi_controller_add_target", 00:05:01.196 "vhost_start_scsi_controller", 00:05:01.196 "vhost_create_scsi_controller", 00:05:01.196 "thread_set_cpumask", 00:05:01.196 "scheduler_set_options", 00:05:01.196 "framework_get_governor", 00:05:01.196 "framework_get_scheduler", 00:05:01.196 "framework_set_scheduler", 00:05:01.196 "framework_get_reactors", 00:05:01.196 "thread_get_io_channels", 00:05:01.196 "thread_get_pollers", 00:05:01.196 "thread_get_stats", 00:05:01.196 "framework_monitor_context_switch", 00:05:01.196 "spdk_kill_instance", 00:05:01.196 "log_enable_timestamps", 00:05:01.196 "log_get_flags", 00:05:01.196 "log_clear_flag", 00:05:01.196 "log_set_flag", 00:05:01.196 "log_get_level", 00:05:01.196 "log_set_level", 00:05:01.196 "log_get_print_level", 00:05:01.196 "log_set_print_level", 00:05:01.196 "framework_enable_cpumask_locks", 00:05:01.196 "framework_disable_cpumask_locks", 00:05:01.196 "framework_wait_init", 00:05:01.196 "framework_start_init", 00:05:01.196 "scsi_get_devices", 00:05:01.196 "bdev_get_histogram", 00:05:01.196 "bdev_enable_histogram", 00:05:01.196 "bdev_set_qos_limit", 00:05:01.196 "bdev_set_qd_sampling_period", 00:05:01.196 "bdev_get_bdevs", 00:05:01.196 "bdev_reset_iostat", 00:05:01.196 "bdev_get_iostat", 00:05:01.196 "bdev_examine", 00:05:01.196 "bdev_wait_for_examine", 00:05:01.196 "bdev_set_options", 00:05:01.196 "accel_get_stats", 00:05:01.196 "accel_set_options", 00:05:01.196 "accel_set_driver", 00:05:01.197 "accel_crypto_key_destroy", 00:05:01.197 "accel_crypto_keys_get", 00:05:01.197 "accel_crypto_key_create", 00:05:01.197 "accel_assign_opc", 00:05:01.197 "accel_get_module_info", 00:05:01.197 
"accel_get_opc_assignments", 00:05:01.197 "vmd_rescan", 00:05:01.197 "vmd_remove_device", 00:05:01.197 "vmd_enable", 00:05:01.197 "sock_get_default_impl", 00:05:01.197 "sock_set_default_impl", 00:05:01.197 "sock_impl_set_options", 00:05:01.197 "sock_impl_get_options", 00:05:01.197 "iobuf_get_stats", 00:05:01.197 "iobuf_set_options", 00:05:01.197 "keyring_get_keys", 00:05:01.197 "vfu_tgt_set_base_path", 00:05:01.197 "framework_get_pci_devices", 00:05:01.197 "framework_get_config", 00:05:01.197 "framework_get_subsystems", 00:05:01.197 "fsdev_set_opts", 00:05:01.197 "fsdev_get_opts", 00:05:01.197 "trace_get_info", 00:05:01.197 "trace_get_tpoint_group_mask", 00:05:01.197 "trace_disable_tpoint_group", 00:05:01.197 "trace_enable_tpoint_group", 00:05:01.197 "trace_clear_tpoint_mask", 00:05:01.197 "trace_set_tpoint_mask", 00:05:01.197 "notify_get_notifications", 00:05:01.197 "notify_get_types", 00:05:01.197 "spdk_get_version", 00:05:01.197 "rpc_get_methods" 00:05:01.197 ] 00:05:01.197 00:12:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:01.197 00:12:31 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.197 00:12:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.197 00:12:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:01.197 00:12:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2004318 00:05:01.197 00:12:31 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2004318 ']' 00:05:01.197 00:12:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2004318 00:05:01.197 00:12:31 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:01.197 00:12:31 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.197 00:12:31 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2004318 00:05:01.197 00:12:31 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.455 00:12:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.455 00:12:31 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2004318' 00:05:01.455 killing process with pid 2004318 00:05:01.455 00:12:31 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2004318 00:05:01.455 00:12:31 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2004318 00:05:03.983 00:05:03.983 real 0m4.209s 00:05:03.983 user 0m7.486s 00:05:03.983 sys 0m0.596s 00:05:03.983 00:12:34 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.983 00:12:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.983 ************************************ 00:05:03.983 END TEST spdkcli_tcp 00:05:03.983 ************************************ 00:05:03.983 00:12:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.983 00:12:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.983 00:12:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.983 00:12:34 -- common/autotest_common.sh@10 -- # set +x 00:05:03.983 ************************************ 00:05:03.983 START TEST dpdk_mem_utility 00:05:03.983 ************************************ 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.983 * Looking for test storage... 
00:05:03.983 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/dpdk_memory_utility 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.983 00:12:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:03.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.983 --rc genhtml_branch_coverage=1 00:05:03.983 --rc genhtml_function_coverage=1 00:05:03.983 --rc genhtml_legend=1 00:05:03.983 --rc geninfo_all_blocks=1 00:05:03.983 --rc geninfo_unexecuted_blocks=1 00:05:03.983 00:05:03.983 ' 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:03.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.983 --rc 
genhtml_branch_coverage=1 00:05:03.983 --rc genhtml_function_coverage=1 00:05:03.983 --rc genhtml_legend=1 00:05:03.983 --rc geninfo_all_blocks=1 00:05:03.983 --rc geninfo_unexecuted_blocks=1 00:05:03.983 00:05:03.983 ' 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:03.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.983 --rc genhtml_branch_coverage=1 00:05:03.983 --rc genhtml_function_coverage=1 00:05:03.983 --rc genhtml_legend=1 00:05:03.983 --rc geninfo_all_blocks=1 00:05:03.983 --rc geninfo_unexecuted_blocks=1 00:05:03.983 00:05:03.983 ' 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:03.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.983 --rc genhtml_branch_coverage=1 00:05:03.983 --rc genhtml_function_coverage=1 00:05:03.983 --rc genhtml_legend=1 00:05:03.983 --rc geninfo_all_blocks=1 00:05:03.983 --rc geninfo_unexecuted_blocks=1 00:05:03.983 00:05:03.983 ' 00:05:03.983 00:12:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:03.983 00:12:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2005164 00:05:03.983 00:12:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2005164 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2005164 ']' 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.983 00:12:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.983 00:12:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.241 [2024-10-09 00:12:34.651157] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
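Both tests above follow the same harness pattern: start spdk_tgt, wait for its Unix-domain RPC socket, then drive it with scripts/rpc.py. spdkcli_tcp first bridges the socket to TCP with socat before calling rpc_get_methods, while dpdk_mem_utility asks the target for a memory dump and post-processes it with scripts/dpdk_mem_info.py (its report follows below). A condensed sketch of both flows, with the flags taken from the traces; the socket-wait loop here stands in for the harness's waitforlisten helper:

  # spdkcli_tcp: expose the Unix-domain RPC socket on TCP port 9998,
  # then list the registered RPC methods through the bridge
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  # dpdk_mem_utility: start the target and wait for its RPC socket
  ./build/bin/spdk_tgt &
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  # dump the DPDK memory state (writes /tmp/spdk_mem_dump.txt) ...
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # ... then summarize it; -m 0 restricts the report to heap id 0
  ./scripts/dpdk_mem_info.py
  ./scripts/dpdk_mem_info.py -m 0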
00:05:04.241 [2024-10-09 00:12:34.651238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005164 ]
00:05:04.241 [2024-10-09 00:12:34.753773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:04.499 [2024-10-09 00:12:34.946417] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.434 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:05.434 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0
00:05:05.434 00:12:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:05.434 00:12:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:05.434 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:05.434 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:05.434 {
00:05:05.434 "filename": "/tmp/spdk_mem_dump.txt"
00:05:05.434 }
00:05:05.434 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:05.434 00:12:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:05:05.434 DPDK memory size 866.000000 MiB in 1 heap(s)
00:05:05.434 1 heaps totaling size 866.000000 MiB
00:05:05.434 size: 866.000000 MiB heap id: 0
00:05:05.434 end heaps----------
00:05:05.434 9 mempools totaling size 642.649841 MiB
00:05:05.434 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:05.434 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:05.434 size: 92.545471 MiB name: bdev_io_2005164
00:05:05.434 size: 51.011292 MiB name: evtpool_2005164
00:05:05.434 size: 50.003479 MiB name: msgpool_2005164
00:05:05.434 size: 36.509338 MiB name: fsdev_io_2005164
00:05:05.434 size: 21.763794 MiB name: PDU_Pool
00:05:05.434 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:05.434 size: 0.026123 MiB name: Session_Pool
00:05:05.434 end mempools-------
00:05:05.434 6 memzones totaling size 4.142822 MiB
00:05:05.434 size: 1.000366 MiB name: RG_ring_0_2005164
00:05:05.434 size: 1.000366 MiB name: RG_ring_1_2005164
00:05:05.434 size: 1.000366 MiB name: RG_ring_4_2005164
00:05:05.434 size: 1.000366 MiB name: RG_ring_5_2005164
00:05:05.434 size: 0.125366 MiB name: RG_ring_2_2005164
00:05:05.434 size: 0.015991 MiB name: RG_ring_3_2005164
00:05:05.434 end memzones-------
00:05:05.434 00:12:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:05:05.434 heap id: 0 total size: 866.000000 MiB number of busy elements: 44 number of free elements: 20
00:05:05.434 list of free elements. size: 19.979797 MiB
00:05:05.434 element at address: 0x200000400000 with size: 1.999451 MiB
00:05:05.434 element at address: 0x200000800000 with size: 1.996887 MiB
00:05:05.434 element at address: 0x200009600000 with size: 1.995972 MiB
00:05:05.434 element at address: 0x20000d800000 with size: 1.995972 MiB
00:05:05.434 element at address: 0x200007000000 with size: 1.991028 MiB
00:05:05.434 element at address: 0x20001bf00040 with size: 0.999939 MiB
00:05:05.434 element at address: 0x20001c300040 with size: 0.999939 MiB
00:05:05.434 element at address: 0x20001c400000 with size: 0.999329 MiB
00:05:05.434 element at address: 0x200035000000 with size: 0.994324 MiB
00:05:05.434 element at address: 0x20001bc00000 with size: 0.959900 MiB
00:05:05.434 element at address: 0x20001c700040 with size: 0.937256 MiB
00:05:05.434 element at address: 0x200000200000 with size: 0.840942 MiB
00:05:05.434 element at address: 0x20001de00000 with size: 0.583191 MiB
00:05:05.434 element at address: 0x200003e00000 with size: 0.495300 MiB
00:05:05.434 element at address: 0x20001c000000 with size: 0.491150 MiB
00:05:05.434 element at address: 0x20001c800000 with size: 0.485657 MiB
00:05:05.434 element at address: 0x200015e00000 with size: 0.446167 MiB
00:05:05.434 element at address: 0x20002b200000 with size: 0.411072 MiB
00:05:05.434 element at address: 0x200003a00000 with size: 0.355286 MiB
00:05:05.434 element at address: 0x20000d7ff040 with size: 0.001038 MiB
00:05:05.434 list of standard malloc elements. size: 199.221497 MiB
00:05:05.434 element at address: 0x20000d9fef80 with size: 132.000183 MiB
00:05:05.434 element at address: 0x2000097fef80 with size: 64.000183 MiB
00:05:05.434 element at address: 0x20001bdfff80 with size: 1.000183 MiB
00:05:05.434 element at address: 0x20001c1fff80 with size: 1.000183 MiB
00:05:05.434 element at address: 0x20001c5fff80 with size: 1.000183 MiB
00:05:05.434 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:05.434 element at address: 0x20001c7eff40 with size: 0.062683 MiB
00:05:05.434 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:05.434 element at address: 0x200015dff040 with size: 0.000427 MiB
00:05:05.434 element at address: 0x200015dffa00 with size: 0.000366 MiB
00:05:05.434 element at address: 0x2000002d7480 with size: 0.000244 MiB
00:05:05.434 element at address: 0x2000002d7580 with size: 0.000244 MiB
00:05:05.434 element at address: 0x2000002d7680 with size: 0.000244 MiB
00:05:05.434 element at address: 0x2000002d7900 with size: 0.000244 MiB
00:05:05.434 element at address: 0x2000002d7a00 with size: 0.000244 MiB
00:05:05.434 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:05:05.434 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200003a7f3c0 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200003a7f4c0 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200003aff800 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200003affa80 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200003efef00 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200003eff000 with size: 0.000244 MiB
00:05:05.434 element at address: 0x20000d7ff480 with size: 0.000244 MiB
00:05:05.434 element at address: 0x20000d7ff580 with size: 0.000244 MiB
00:05:05.434 element at address: 0x20000d7ff680 with size: 0.000244 MiB
00:05:05.434 element at address: 0x20000d7ff780 with size: 0.000244 MiB
00:05:05.434 element at address: 0x20000d7ff880 with size: 0.000244 MiB
00:05:05.434 element at address: 0x20000d7ff980 with size: 0.000244 MiB
00:05:05.434 element at address: 0x20000d7ffc00 with size: 0.000244 MiB
00:05:05.434 element at address: 0x20000d7ffd00 with size: 0.000244 MiB
00:05:05.434 element at address: 0x20000d7ffe00 with size: 0.000244 MiB
00:05:05.434 element at address: 0x20000d7fff00 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200015dff200 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200015dff300 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200015dff400 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200015dff500 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200015dff600 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200015dff700 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200015dff800 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200015dff900 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200015dffb80 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200015dffc80 with size: 0.000244 MiB
00:05:05.434 element at address: 0x200015dfff00 with size: 0.000244 MiB
00:05:05.434 list of memzone associated elements. size: 646.798706 MiB
00:05:05.434 element at address: 0x20001de954c0 with size: 211.416809 MiB
00:05:05.434 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:05.434 element at address: 0x20002b26ff80 with size: 157.562622 MiB
00:05:05.434 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:05.434 element at address: 0x200015ff4740 with size: 92.045105 MiB
00:05:05.434 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2005164_0
00:05:05.434 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:05:05.434 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2005164_0
00:05:05.434 element at address: 0x200003fff340 with size: 48.003113 MiB
00:05:05.434 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2005164_0
00:05:05.434 element at address: 0x2000071fdb40 with size: 36.008972 MiB
00:05:05.434 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2005164_0
00:05:05.434 element at address: 0x20001c9be900 with size: 20.255615 MiB
00:05:05.434 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:05.434 element at address: 0x2000351feb00 with size: 18.005127 MiB
00:05:05.434 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:05.434 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:05:05.434 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2005164
00:05:05.434 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:05:05.434 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2005164
00:05:05.434 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:05.434 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2005164
00:05:05.434 element at address: 0x20001c0fde00 with size: 1.008179 MiB
00:05:05.434 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:05.434 element at address: 0x20001c8bc780 with size: 1.008179 MiB
00:05:05.434 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:05.434 element at address: 0x20001bcfde00 with size: 1.008179 MiB
00:05:05.434 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:05.434 element at address: 0x200015ef25c0 with size: 1.008179 MiB
00:05:05.434 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:05.434 element at address: 0x200003eff100 with size: 1.000549 MiB
00:05:05.434 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2005164
00:05:05.434 element at address: 0x200003affb80 with size: 1.000549 MiB
00:05:05.434 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2005164
00:05:05.434 element at address: 0x20001c4ffd40 with size: 1.000549 MiB
00:05:05.434 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2005164
00:05:05.434 element at address: 0x2000350fe8c0 with size: 1.000549 MiB
00:05:05.434 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2005164
00:05:05.434 element at address: 0x200003a7f5c0 with size: 0.500549 MiB
00:05:05.434 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2005164
00:05:05.434 element at address: 0x200003e7ecc0 with size: 0.500549 MiB
00:05:05.434 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2005164
00:05:05.434 element at address: 0x20001c07dbc0 with size: 0.500549 MiB
00:05:05.434 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:05.434 element at address: 0x200015e72380 with size: 0.500549 MiB
00:05:05.434 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:05.434 element at address: 0x20001c87c540 with size: 0.250549 MiB
00:05:05.434 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:05.434 element at address: 0x200003a5f180 with size: 0.125549 MiB
00:05:05.434 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2005164
00:05:05.434 element at address: 0x20001bcf5bc0 with size: 0.031799 MiB
00:05:05.434 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:05.434 element at address: 0x20002b2693c0 with size: 0.023804 MiB
00:05:05.434 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:05.434 element at address: 0x200003a5af40 with size: 0.016174 MiB
00:05:05.434 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2005164
00:05:05.434 element at address: 0x20002b26f540 with size: 0.002502 MiB
00:05:05.434 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:05.434 element at address: 0x2000002d7780 with size: 0.000366 MiB
00:05:05.434 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2005164
00:05:05.434 element at address: 0x200003aff900 with size: 0.000366 MiB
00:05:05.434 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2005164
00:05:05.434 element at address: 0x200015dffd80 with size: 0.000366 MiB
00:05:05.434 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2005164
00:05:05.434 element at address: 0x20000d7ffa80 with size: 0.000366 MiB
00:05:05.434 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:05.434 00:12:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:05.434 00:12:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2005164
00:05:05.435 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2005164 ']'
00:05:05.435 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2005164
00:05:05.435 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:05:05.435 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:05.435 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2005164
00:05:05.435 00:12:35
dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:05.435 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:05.435 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2005164' 00:05:05.435 killing process with pid 2005164 00:05:05.435 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2005164 00:05:05.435 00:12:35 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2005164 00:05:07.963 00:05:07.963 real 0m3.924s 00:05:07.963 user 0m3.866s 00:05:07.963 sys 0m0.532s 00:05:07.963 00:12:38 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.963 00:12:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.963 ************************************ 00:05:07.963 END TEST dpdk_mem_utility 00:05:07.963 ************************************ 00:05:07.963 00:12:38 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event.sh 00:05:07.963 00:12:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.963 00:12:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.963 00:12:38 -- common/autotest_common.sh@10 -- # set +x 00:05:07.963 ************************************ 00:05:07.963 START TEST event 00:05:07.963 ************************************ 00:05:07.963 00:12:38 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event.sh 00:05:07.963 * Looking for test storage... 00:05:07.963 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event 00:05:07.963 00:12:38 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:07.963 00:12:38 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:07.963 00:12:38 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:07.963 00:12:38 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:07.963 00:12:38 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.963 00:12:38 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.963 00:12:38 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.963 00:12:38 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.963 00:12:38 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.963 00:12:38 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.963 00:12:38 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.963 00:12:38 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.963 00:12:38 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.963 00:12:38 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.963 00:12:38 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.963 00:12:38 event -- scripts/common.sh@344 -- # case "$op" in 00:05:07.963 00:12:38 event -- scripts/common.sh@345 -- # : 1 00:05:07.963 00:12:38 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.963 00:12:38 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.963 00:12:38 event -- scripts/common.sh@365 -- # decimal 1 00:05:07.963 00:12:38 event -- scripts/common.sh@353 -- # local d=1 00:05:07.963 00:12:38 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.963 00:12:38 event -- scripts/common.sh@355 -- # echo 1 00:05:07.964 00:12:38 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.964 00:12:38 event -- scripts/common.sh@366 -- # decimal 2 00:05:07.964 00:12:38 event -- scripts/common.sh@353 -- # local d=2 00:05:07.964 00:12:38 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.964 00:12:38 event -- scripts/common.sh@355 -- # echo 2 00:05:07.964 00:12:38 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.964 00:12:38 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.964 00:12:38 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.964 00:12:38 event -- scripts/common.sh@368 -- # return 0 00:05:07.964 00:12:38 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.964 00:12:38 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:07.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.964 --rc genhtml_branch_coverage=1 00:05:07.964 --rc genhtml_function_coverage=1 00:05:07.964 --rc genhtml_legend=1 00:05:07.964 --rc geninfo_all_blocks=1 00:05:07.964 --rc geninfo_unexecuted_blocks=1 00:05:07.964 00:05:07.964 ' 00:05:07.964 00:12:38 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:07.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.964 --rc genhtml_branch_coverage=1 00:05:07.964 --rc genhtml_function_coverage=1 00:05:07.964 --rc genhtml_legend=1 00:05:07.964 --rc geninfo_all_blocks=1 00:05:07.964 --rc geninfo_unexecuted_blocks=1 00:05:07.964 00:05:07.964 ' 00:05:07.964 00:12:38 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:07.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.964 --rc genhtml_branch_coverage=1 00:05:07.964 --rc genhtml_function_coverage=1 00:05:07.964 --rc genhtml_legend=1 00:05:07.964 --rc geninfo_all_blocks=1 00:05:07.964 --rc geninfo_unexecuted_blocks=1 00:05:07.964 00:05:07.964 ' 00:05:07.964 00:12:38 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:07.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.964 --rc genhtml_branch_coverage=1 00:05:07.964 --rc genhtml_function_coverage=1 00:05:07.964 --rc genhtml_legend=1 00:05:07.964 --rc geninfo_all_blocks=1 00:05:07.964 --rc geninfo_unexecuted_blocks=1 00:05:07.964 00:05:07.964 ' 00:05:07.964 00:12:38 event -- event/event.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:07.964 00:12:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:07.964 00:12:38 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:07.964 00:12:38 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:07.964 00:12:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.964 00:12:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.222 ************************************ 00:05:08.222 START TEST event_perf 00:05:08.222 ************************************ 00:05:08.222 00:12:38 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/event_perf/event_perf -m 
0xF -t 1 00:05:08.222 Running I/O for 1 seconds...[2024-10-09 00:12:38.655386] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:05:08.222 [2024-10-09 00:12:38.655471] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005905 ] 00:05:08.222 [2024-10-09 00:12:38.757456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.479 [2024-10-09 00:12:38.953846] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.479 [2024-10-09 00:12:38.953861] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.479 [2024-10-09 00:12:38.953974] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.479 [2024-10-09 00:12:38.953976] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.852 Running I/O for 1 seconds... 00:05:09.852 lcore 0: 198092 00:05:09.852 lcore 1: 198091 00:05:09.852 lcore 2: 198089 00:05:09.852 lcore 3: 198090 00:05:09.852 done. 00:05:09.852 00:05:09.852 real 0m1.734s 00:05:09.852 user 0m4.583s 00:05:09.852 sys 0m0.144s 00:05:09.852 00:12:40 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.852 00:12:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.852 ************************************ 00:05:09.852 END TEST event_perf 00:05:09.852 ************************************ 00:05:09.852 00:12:40 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:09.852 00:12:40 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:09.852 00:12:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.852 00:12:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.852 ************************************ 00:05:09.852 START TEST event_reactor 00:05:09.852 ************************************ 00:05:09.852 00:12:40 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:09.852 [2024-10-09 00:12:40.457366] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
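The event_perf run above exercises the event framework with one reactor per core in mask 0xF for one second; the lcore lines are per-core event counts, so the four counters (about 198,000 each) sum to roughly 792,000 events for the run. The invocation is just a core mask and a duration, as a sketch from the trace:

  # -m: hex core mask (0xF = cores 0-3), -t: run time in seconds
  ./test/event/event_perf/event_perf -m 0xF -t 1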
00:05:09.852 [2024-10-09 00:12:40.457457] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006161 ] 00:05:10.110 [2024-10-09 00:12:40.561910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.368 [2024-10-09 00:12:40.759373] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.741 test_start 00:05:11.741 oneshot 00:05:11.741 tick 100 00:05:11.741 tick 100 00:05:11.741 tick 250 00:05:11.741 tick 100 00:05:11.741 tick 100 00:05:11.741 tick 100 00:05:11.741 tick 250 00:05:11.741 tick 500 00:05:11.741 tick 100 00:05:11.741 tick 100 00:05:11.741 tick 250 00:05:11.741 tick 100 00:05:11.741 tick 100 00:05:11.741 test_end 00:05:11.741 00:05:11.741 real 0m1.710s 00:05:11.741 user 0m1.573s 00:05:11.741 sys 0m0.129s 00:05:11.741 00:12:42 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.741 00:12:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:11.741 ************************************ 00:05:11.741 END TEST event_reactor 00:05:11.741 ************************************ 00:05:11.741 00:12:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.741 00:12:42 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:11.741 00:12:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.741 00:12:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.741 ************************************ 00:05:11.741 START TEST event_reactor_perf 00:05:11.741 ************************************ 00:05:11.741 00:12:42 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:11.741 [2024-10-09 00:12:42.234916] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
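event_reactor, which just finished above, runs a single reactor and fires timed events against it; the oneshot and tick lines are the callbacks reporting in (the numbers are presumably the configured timer periods). event_reactor_perf, starting next, instead measures how many plain events one reactor can turn over per second. Both binaries take only a duration:

  # single-reactor timer test, then the event-throughput test; -t is seconds
  ./test/event/reactor/reactor -t 1
  ./test/event/reactor_perf/reactor_perf -t 1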
00:05:11.741 [2024-10-09 00:12:42.234992] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006407 ] 00:05:11.741 [2024-10-09 00:12:42.338805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.001 [2024-10-09 00:12:42.532903] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.375 test_start 00:05:13.375 test_end 00:05:13.375 Performance: 402470 events per second 00:05:13.375 00:05:13.375 real 0m1.713s 00:05:13.375 user 0m1.565s 00:05:13.375 sys 0m0.140s 00:05:13.375 00:12:43 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.375 00:12:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.375 ************************************ 00:05:13.375 END TEST event_reactor_perf 00:05:13.375 ************************************ 00:05:13.375 00:12:43 event -- event/event.sh@49 -- # uname -s 00:05:13.375 00:12:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:13.375 00:12:43 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:13.375 00:12:43 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.375 00:12:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.375 00:12:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.375 ************************************ 00:05:13.375 START TEST event_scheduler 00:05:13.375 ************************************ 00:05:13.375 00:12:43 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:13.634 * Looking for test storage... 
00:05:13.634 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler 00:05:13.634 00:12:44 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:13.634 00:12:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:13.634 00:12:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:13.634 00:12:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.634 00:12:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:13.634 00:12:44 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.634 00:12:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:13.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.634 --rc genhtml_branch_coverage=1 00:05:13.634 --rc genhtml_function_coverage=1 00:05:13.634 --rc genhtml_legend=1 00:05:13.634 --rc geninfo_all_blocks=1 00:05:13.634 --rc geninfo_unexecuted_blocks=1 00:05:13.634 00:05:13.634 ' 00:05:13.634 00:12:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:13.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.634 --rc genhtml_branch_coverage=1 00:05:13.634 --rc genhtml_function_coverage=1 00:05:13.634 --rc genhtml_legend=1 00:05:13.634 --rc geninfo_all_blocks=1 00:05:13.634 --rc geninfo_unexecuted_blocks=1 00:05:13.634 00:05:13.634 ' 00:05:13.634 00:12:44 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:13.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.634 --rc genhtml_branch_coverage=1 00:05:13.634 --rc genhtml_function_coverage=1 00:05:13.634 --rc genhtml_legend=1 00:05:13.634 --rc geninfo_all_blocks=1 00:05:13.634 --rc geninfo_unexecuted_blocks=1 00:05:13.634 00:05:13.634 ' 00:05:13.634 00:12:44 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:13.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.634 --rc genhtml_branch_coverage=1 00:05:13.634 --rc genhtml_function_coverage=1 00:05:13.634 --rc genhtml_legend=1 00:05:13.634 --rc geninfo_all_blocks=1 00:05:13.634 --rc geninfo_unexecuted_blocks=1 00:05:13.634 00:05:13.634 ' 00:05:13.634 00:12:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:13.634 00:12:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2006903 00:05:13.635 00:12:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.635 00:12:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2006903 00:05:13.635 00:12:44 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2006903 ']' 00:05:13.635 00:12:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # 
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:13.635 00:12:44 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.635 00:12:44 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.635 00:12:44 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.635 00:12:44 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.635 00:12:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.635 [2024-10-09 00:12:44.189160] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:05:13.635 [2024-10-09 00:12:44.189275] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006903 ] 00:05:13.892 [2024-10-09 00:12:44.288105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.892 [2024-10-09 00:12:44.486618] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.892 [2024-10-09 00:12:44.486687] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.892 [2024-10-09 00:12:44.486744] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.892 [2024-10-09 00:12:44.486758] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.456 00:12:44 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.456 00:12:44 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:14.456 00:12:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:14.456 00:12:44 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.456 00:12:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.456 [2024-10-09 00:12:44.988878] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:14.456 [2024-10-09 00:12:44.988905] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:14.456 [2024-10-09 00:12:44.988929] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:14.456 [2024-10-09 00:12:44.988938] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:14.456 [2024-10-09 00:12:44.988947] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:14.456 00:12:44 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.456 00:12:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:14.456 00:12:44 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.456 00:12:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.714 [2024-10-09 00:12:45.334395] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
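Because the scheduler app was started with --wait-for-rpc, the harness configures scheduling over RPC before letting initialization finish. The dpdk_governor errors above are expected on this host: the 0xF core mask covers only some of a set of SMT siblings, so the dynamic scheduler falls back to its built-in thresholds (load limit 20, core limit 80, core busy 95). The two RPCs behind the trace, as a sketch:

  # select the dynamic scheduler, then complete framework initialization
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init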
00:05:14.714 00:12:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.714 00:12:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:14.714 00:12:45 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.714 00:12:45 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.714 00:12:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.972 ************************************ 00:05:14.972 START TEST scheduler_create_thread 00:05:14.972 ************************************ 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.972 2 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.972 3 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.972 4 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:14.972 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.973 5 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.973 6 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.973 7 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.973 8 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.973 9 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.973 10 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.973 00:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.911 00:12:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.911 00:12:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:15.911 00:12:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:15.911 00:12:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.911 00:12:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.850 00:12:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:16.850 00:12:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:16.850 00:12:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:16.850 00:12:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.784 00:12:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.784 00:12:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:17.784 00:12:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:17.784 00:12:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.784 00:12:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.348 00:12:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.348 00:05:18.348 real 0m3.564s 00:05:18.348 user 0m0.021s 00:05:18.348 sys 0m0.008s 00:05:18.348 00:12:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.348 00:12:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.348 ************************************ 00:05:18.348 END TEST scheduler_create_thread 00:05:18.348 ************************************ 00:05:18.348 00:12:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:18.348 00:12:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2006903 00:05:18.348 00:12:48 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2006903 ']' 00:05:18.348 00:12:48 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2006903 00:05:18.348 00:12:48 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:18.348 00:12:48 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.348 00:12:48 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2006903 00:05:18.606 00:12:49 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:18.606 00:12:49 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:18.606 00:12:49 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2006903' 00:05:18.606 killing process with pid 2006903 00:05:18.606 00:12:49 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2006903 00:05:18.606 00:12:49 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2006903 00:05:18.864 [2024-10-09 00:12:49.313876] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
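The scheduler_create_thread subtest above drives everything through the scheduler_plugin RPC extensions visible in the trace: it creates pinned active and idle threads across the four cores, changes one thread's active level, and deletes a thread again. Roughly, assuming the plugin is importable as it is in the harness:

  # -n thread name, -m cpumask to pin to, -a active percentage;
  # the create call prints the new thread id
  TID=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active $TID 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete $TID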
00:05:20.237 00:05:20.237 real 0m6.692s 00:05:20.237 user 0m12.590s 00:05:20.237 sys 0m0.481s 00:05:20.237 00:12:50 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.237 00:12:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.238 ************************************ 00:05:20.238 END TEST event_scheduler 00:05:20.238 ************************************ 00:05:20.238 00:12:50 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:20.238 00:12:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:20.238 00:12:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.238 00:12:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.238 00:12:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.238 ************************************ 00:05:20.238 START TEST app_repeat 00:05:20.238 ************************************ 00:05:20.238 00:12:50 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2008072 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2008072' 00:05:20.238 Process app_repeat pid: 2008072 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:20.238 spdk_app_start Round 0 00:05:20.238 00:12:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2008072 /var/tmp/spdk-nbd.sock 00:05:20.238 00:12:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2008072 ']' 00:05:20.238 00:12:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.238 00:12:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.238 00:12:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.238 00:12:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.238 00:12:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.238 [2024-10-09 00:12:50.781033] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
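The app_repeat test above launches the app_repeat binary with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 and then blocks in waitforlisten until the RPC socket answers. A hedged illustration of that launch-and-wait step; the polling loop below is a sketch, not the exact common/autotest_common.sh implementation:

    app=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/app_repeat/app_repeat
    sock=/var/tmp/spdk-nbd.sock
    "$app" -r "$sock" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT
    # Illustrative readiness probe: retry until the app accepts RPCs on the socket.
    for ((i = 0; i < 100; i++)); do
        if /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s "$sock" \
                rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done

rpc_get_methods is a standard SPDK RPC, so a successful call is a reasonable readiness signal; the real helper may probe differently.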
00:05:20.238 [2024-10-09 00:12:50.781215] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008072 ] 00:05:20.496 [2024-10-09 00:12:50.886455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.496 [2024-10-09 00:12:51.081326] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.496 [2024-10-09 00:12:51.081338] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.062 00:12:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.062 00:12:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:21.062 00:12:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.320 Malloc0 00:05:21.320 00:12:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.578 Malloc1 00:05:21.578 00:12:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.578 00:12:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.836 /dev/nbd0 00:05:21.836 00:12:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.836 00:12:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.836 1+0 records in 00:05:21.836 1+0 records out 00:05:21.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019551 s, 21.0 MB/s 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:21.836 00:12:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:21.836 00:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.836 00:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.836 00:12:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.093 /dev/nbd1 00:05:22.093 00:12:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.094 00:12:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.094 1+0 records in 00:05:22.094 1+0 records out 00:05:22.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209813 s, 19.5 MB/s 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.094 00:12:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.094 00:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.094 00:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.094 
00:12:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.094 00:12:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.094 00:12:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.352 { 00:05:22.352 "nbd_device": "/dev/nbd0", 00:05:22.352 "bdev_name": "Malloc0" 00:05:22.352 }, 00:05:22.352 { 00:05:22.352 "nbd_device": "/dev/nbd1", 00:05:22.352 "bdev_name": "Malloc1" 00:05:22.352 } 00:05:22.352 ]' 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.352 { 00:05:22.352 "nbd_device": "/dev/nbd0", 00:05:22.352 "bdev_name": "Malloc0" 00:05:22.352 }, 00:05:22.352 { 00:05:22.352 "nbd_device": "/dev/nbd1", 00:05:22.352 "bdev_name": "Malloc1" 00:05:22.352 } 00:05:22.352 ]' 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.352 /dev/nbd1' 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.352 /dev/nbd1' 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.352 256+0 records in 00:05:22.352 256+0 records out 00:05:22.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102737 s, 102 MB/s 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.352 256+0 records in 00:05:22.352 256+0 records out 00:05:22.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156111 s, 67.2 MB/s 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.352 256+0 records in 00:05:22.352 256+0 records out 00:05:22.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187719 s, 55.9 MB/s 00:05:22.352 
00:12:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.352 00:12:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.609 00:12:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.609 00:12:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.609 00:12:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.609 00:12:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.609 00:12:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.609 00:12:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.609 00:12:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.609 00:12:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.609 00:12:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.609 00:12:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.867 00:12:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.867 00:12:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.867 00:12:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.867 00:12:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.867 00:12:53 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.867 00:12:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.867 00:12:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.867 00:12:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.867 00:12:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.867 00:12:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.867 00:12:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.125 00:12:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.125 00:12:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.125 00:12:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.125 00:12:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.125 00:12:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.125 00:12:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.125 00:12:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.125 00:12:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.125 00:12:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.125 00:12:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.125 00:12:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.125 00:12:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.125 00:12:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.382 00:12:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.756 [2024-10-09 00:12:55.276177] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.014 [2024-10-09 00:12:55.461474] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.014 [2024-10-09 00:12:55.461474] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.271 [2024-10-09 00:12:55.655155] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.271 [2024-10-09 00:12:55.655200] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.640 00:12:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.640 00:12:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:26.640 spdk_app_start Round 1 00:05:26.640 00:12:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2008072 /var/tmp/spdk-nbd.sock 00:05:26.640 00:12:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2008072 ']' 00:05:26.640 00:12:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.640 00:12:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.640 00:12:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
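The first round above completes a full nbd data-verify cycle. Stripped of the xtrace framing, the cycle is just dd plus cmp; the commands below are taken from the trace itself, with only the loop structure added as a sketch:

    tmp=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # write the pattern to each export
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                             # read back and byte-compare
    done
    rm "$tmp"
    # Teardown: nbd_stop_disk per device, after which nbd_get_disks must report an empty list.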
00:05:26.640 00:12:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.640 00:12:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.640 00:12:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.640 00:12:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:26.640 00:12:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.897 Malloc0 00:05:26.897 00:12:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.154 Malloc1 00:05:27.154 00:12:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.154 00:12:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.412 /dev/nbd0 00:05:27.412 00:12:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.412 00:12:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:27.412 1+0 records in 00:05:27.412 1+0 records out 00:05:27.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157049 s, 26.1 MB/s 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:27.412 00:12:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:27.412 00:12:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.412 00:12:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.412 00:12:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.670 /dev/nbd1 00:05:27.670 00:12:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.670 00:12:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.670 1+0 records in 00:05:27.670 1+0 records out 00:05:27.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193323 s, 21.2 MB/s 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:27.670 00:12:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:27.670 00:12:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.670 00:12:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.670 00:12:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.670 00:12:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.670 00:12:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:27.928 { 00:05:27.928 "nbd_device": "/dev/nbd0", 00:05:27.928 "bdev_name": "Malloc0" 00:05:27.928 }, 00:05:27.928 { 00:05:27.928 "nbd_device": "/dev/nbd1", 00:05:27.928 "bdev_name": "Malloc1" 00:05:27.928 } 00:05:27.928 ]' 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.928 { 00:05:27.928 "nbd_device": "/dev/nbd0", 00:05:27.928 "bdev_name": "Malloc0" 00:05:27.928 }, 00:05:27.928 { 00:05:27.928 "nbd_device": "/dev/nbd1", 00:05:27.928 "bdev_name": "Malloc1" 00:05:27.928 } 00:05:27.928 ]' 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.928 /dev/nbd1' 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.928 /dev/nbd1' 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.928 256+0 records in 00:05:27.928 256+0 records out 00:05:27.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105005 s, 99.9 MB/s 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.928 256+0 records in 00:05:27.928 256+0 records out 00:05:27.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155091 s, 67.6 MB/s 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.928 256+0 records in 00:05:27.928 256+0 records out 00:05:27.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183815 s, 57.0 MB/s 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@72 
-- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.928 00:12:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.186 00:12:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.186 00:12:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.186 00:12:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.186 00:12:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.186 00:12:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.186 00:12:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.186 00:12:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.186 00:12:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.186 00:12:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.186 00:12:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.449 00:12:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.449 00:12:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.449 00:12:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.449 00:12:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.449 00:12:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.449 00:12:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.449 00:12:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.449 00:12:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.449 00:12:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.450 00:12:58 event.app_repeat -- bdev/nbd_common.sh@61 
-- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.450 00:12:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.450 00:12:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.450 00:12:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.450 00:12:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.711 00:12:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.711 00:12:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.711 00:12:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.711 00:12:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.711 00:12:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.711 00:12:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.711 00:12:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.711 00:12:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.711 00:12:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.711 00:12:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.972 00:12:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.415 [2024-10-09 00:13:00.813899] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.415 [2024-10-09 00:13:00.995772] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.415 [2024-10-09 00:13:00.995780] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.672 [2024-10-09 00:13:01.187416] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.672 [2024-10-09 00:13:01.187460] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.043 00:13:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.043 00:13:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:32.043 spdk_app_start Round 2 00:05:32.043 00:13:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2008072 /var/tmp/spdk-nbd.sock 00:05:32.043 00:13:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2008072 ']' 00:05:32.043 00:13:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.043 00:13:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.043 00:13:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
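The count=2 / count=0 checks in each round come from nbd_get_count, which parses the JSON returned by the nbd_get_disks RPC. A hedged sketch of that parse, with the jq filter and grep usage as traced; the || true guard is an assumption to cover the empty-list case, where grep -c exits nonzero:

    rpc=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
    disks_json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    disk_names=$(jq -r '.[] | .nbd_device' <<< "$disks_json")   # e.g. /dev/nbd0 and /dev/nbd1
    count=$(grep -c /dev/nbd <<< "$disk_names" || true)         # 2 while exported, 0 after teardown
    echo "$count"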
00:05:32.043 00:13:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.043 00:13:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.043 00:13:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.043 00:13:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:32.043 00:13:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.300 Malloc0 00:05:32.300 00:13:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.558 Malloc1 00:05:32.558 00:13:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.558 00:13:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.816 /dev/nbd0 00:05:32.816 00:13:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.816 00:13:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:32.816 1+0 records in 00:05:32.816 1+0 records out 00:05:32.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022365 s, 18.3 MB/s 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:32.816 00:13:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:32.816 00:13:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.816 00:13:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.816 00:13:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.074 /dev/nbd1 00:05:33.074 00:13:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.074 00:13:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.074 1+0 records in 00:05:33.074 1+0 records out 00:05:33.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211651 s, 19.4 MB/s 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdtest 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:33.074 00:13:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:33.074 00:13:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.074 00:13:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.074 00:13:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.074 00:13:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.074 00:13:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:33.332 { 00:05:33.332 "nbd_device": "/dev/nbd0", 00:05:33.332 "bdev_name": "Malloc0" 00:05:33.332 }, 00:05:33.332 { 00:05:33.332 "nbd_device": "/dev/nbd1", 00:05:33.332 "bdev_name": "Malloc1" 00:05:33.332 } 00:05:33.332 ]' 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.332 { 00:05:33.332 "nbd_device": "/dev/nbd0", 00:05:33.332 "bdev_name": "Malloc0" 00:05:33.332 }, 00:05:33.332 { 00:05:33.332 "nbd_device": "/dev/nbd1", 00:05:33.332 "bdev_name": "Malloc1" 00:05:33.332 } 00:05:33.332 ]' 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.332 /dev/nbd1' 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.332 /dev/nbd1' 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.332 256+0 records in 00:05:33.332 256+0 records out 00:05:33.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101288 s, 104 MB/s 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.332 256+0 records in 00:05:33.332 256+0 records out 00:05:33.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015192 s, 69.0 MB/s 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.332 256+0 records in 00:05:33.332 256+0 records out 00:05:33.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174672 s, 60.0 MB/s 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@72 -- 
# local tmp_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.332 00:13:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.590 00:13:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.590 00:13:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.590 00:13:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.590 00:13:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.590 00:13:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.590 00:13:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.590 00:13:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.590 00:13:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.590 00:13:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.590 00:13:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.847 00:13:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.848 00:13:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.848 00:13:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.848 00:13:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.848 00:13:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.848 00:13:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.848 00:13:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.848 00:13:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.848 00:13:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.848 00:13:04 event.app_repeat -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.848 00:13:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.119 00:13:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.120 00:13:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.120 00:13:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.120 00:13:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.120 00:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.120 00:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.120 00:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.120 00:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.120 00:13:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.120 00:13:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.120 00:13:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.120 00:13:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.120 00:13:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.382 00:13:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.755 [2024-10-09 00:13:06.284249] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.013 [2024-10-09 00:13:06.469054] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.013 [2024-10-09 00:13:06.469054] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.271 [2024-10-09 00:13:06.659234] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.271 [2024-10-09 00:13:06.659277] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.648 00:13:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2008072 /var/tmp/spdk-nbd.sock 00:05:37.648 00:13:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2008072 ']' 00:05:37.648 00:13:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.648 00:13:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.648 00:13:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
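Before each round's count=0 check, the stop path calls waitfornbd_exit so the kernel has actually released the device. A hedged reconstruction of that helper from the traced lines (nbd_common.sh@35-45): poll /proc/partitions until the name disappears, for up to 20 tries:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # still exported; retry (the sleep interval is an assumption)
            else
                break        # the traced '# break' branch: the device is gone
            fi
        done
        return 0
    }
    waitfornbd_exit nbd0
    waitfornbd_exit nbd1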
00:05:37.648 00:13:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.648 00:13:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.648 00:13:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.648 00:13:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:37.648 00:13:08 event.app_repeat -- event/event.sh@39 -- # killprocess 2008072 00:05:37.648 00:13:08 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2008072 ']' 00:05:37.648 00:13:08 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2008072 00:05:37.648 00:13:08 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:37.648 00:13:08 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.648 00:13:08 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2008072 00:05:37.648 00:13:08 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.648 00:13:08 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.648 00:13:08 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2008072' 00:05:37.648 killing process with pid 2008072 00:05:37.648 00:13:08 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2008072 00:05:37.648 00:13:08 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2008072 00:05:39.023 spdk_app_start is called in Round 0. 00:05:39.023 Shutdown signal received, stop current app iteration 00:05:39.023 Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 reinitialization... 00:05:39.023 spdk_app_start is called in Round 1. 00:05:39.023 Shutdown signal received, stop current app iteration 00:05:39.023 Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 reinitialization... 00:05:39.023 spdk_app_start is called in Round 2. 00:05:39.023 Shutdown signal received, stop current app iteration 00:05:39.023 Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 reinitialization... 00:05:39.023 spdk_app_start is called in Round 3. 
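killprocess, traced at the end of both the scheduler and app_repeat tests, is more careful than a bare kill. A hedged reconstruction from the traced lines (autotest_common.sh@950-974): confirm the pid is alive, check what it is, then kill and reap:

    killprocess() {
        local pid=$1 process_name=
        [ -n "$pid" ] || return 1                  # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 0                 # already gone: nothing to do
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        fi
        # The real helper special-cases process_name = sudo; that branch is not
        # taken in this run and is omitted from the sketch.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap so the exit status is observed
    }

The trace shows process_name resolving to reactor_2 for the scheduler app and reactor_0 for app_repeat, so the sudo special case is skipped in both runs.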
00:05:39.023 Shutdown signal received, stop current app iteration 00:05:39.023 00:13:09 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:39.023 00:13:09 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:39.023 00:05:39.023 real 0m18.652s 00:05:39.023 user 0m38.523s 00:05:39.023 sys 0m2.565s 00:05:39.023 00:13:09 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.023 00:13:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.023 ************************************ 00:05:39.023 END TEST app_repeat 00:05:39.023 ************************************ 00:05:39.023 00:13:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:39.023 00:13:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:39.023 00:13:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.023 00:13:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.023 00:13:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.023 ************************************ 00:05:39.023 START TEST cpu_locks 00:05:39.023 ************************************ 00:05:39.023 00:13:09 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:39.023 * Looking for test storage... 00:05:39.023 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event 00:05:39.023 00:13:09 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.023 00:13:09 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.023 00:13:09 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:39.023 00:13:09 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.023 00:13:09 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:39.024 00:13:09 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.024 00:13:09 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:39.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.024 --rc genhtml_branch_coverage=1 00:05:39.024 --rc genhtml_function_coverage=1 00:05:39.024 --rc genhtml_legend=1 00:05:39.024 --rc geninfo_all_blocks=1 00:05:39.024 --rc geninfo_unexecuted_blocks=1 00:05:39.024 00:05:39.024 ' 00:05:39.024 00:13:09 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:39.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.024 --rc genhtml_branch_coverage=1 00:05:39.024 --rc genhtml_function_coverage=1 00:05:39.024 --rc genhtml_legend=1 00:05:39.024 --rc geninfo_all_blocks=1 00:05:39.024 --rc geninfo_unexecuted_blocks=1 00:05:39.024 00:05:39.024 ' 00:05:39.024 00:13:09 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:39.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.024 --rc genhtml_branch_coverage=1 00:05:39.024 --rc genhtml_function_coverage=1 00:05:39.024 --rc genhtml_legend=1 00:05:39.024 --rc geninfo_all_blocks=1 00:05:39.024 --rc geninfo_unexecuted_blocks=1 00:05:39.024 00:05:39.024 ' 00:05:39.024 00:13:09 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:39.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.024 --rc genhtml_branch_coverage=1 00:05:39.024 --rc genhtml_function_coverage=1 00:05:39.024 --rc genhtml_legend=1 00:05:39.024 --rc geninfo_all_blocks=1 00:05:39.024 --rc geninfo_unexecuted_blocks=1 00:05:39.024 00:05:39.024 ' 00:05:39.024 00:13:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:39.024 00:13:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:39.024 00:13:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:39.024 00:13:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:39.024 00:13:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.024 00:13:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.024 00:13:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.024 ************************************ 
00:05:39.024 START TEST default_locks 00:05:39.024 ************************************ 00:05:39.024 00:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:39.024 00:13:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.024 00:13:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2011447 00:05:39.024 00:13:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2011447 00:05:39.024 00:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2011447 ']' 00:05:39.024 00:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.024 00:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.024 00:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.024 00:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.024 00:13:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.283 [2024-10-09 00:13:09.729021] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:05:39.283 [2024-10-09 00:13:09.729117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2011447 ] 00:05:39.283 [2024-10-09 00:13:09.834224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.540 [2024-10-09 00:13:10.034821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.474 00:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.474 00:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:40.474 00:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2011447 00:05:40.474 00:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2011447 00:05:40.474 00:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.474 lslocks: write error 00:05:40.474 00:13:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2011447 00:05:40.474 00:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2011447 ']' 00:05:40.474 00:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2011447 00:05:40.474 00:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:40.474 00:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.474 00:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2011447 00:05:40.474 00:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.474 00:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.474 00:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 2011447' 00:05:40.474 killing process with pid 2011447 00:05:40.474 00:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2011447 00:05:40.474 00:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2011447 00:05:43.003 00:13:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2011447 00:05:43.003 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2011447 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2011447 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2011447 ']' 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
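The NOT wrapper being exercised here is how the suite asserts on expected failures: waitforlisten against the freshly killed pid 2011447 has to fail, and that failure is converted into a passing result. A simplified sketch of the idea; the real helper additionally treats exit codes above 128 (death by signal) and matched error strings specially, which is what the es checks in the trace are doing:

    NOT() {
        local es=0
        "$@" || es=$?      # run the wrapped command, remember how it exited
        (( es != 0 ))      # succeed only if the command failed
    }

    NOT waitforlisten 2011447   # passes, because the target is already dead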
00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.004 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2011447) - No such process 00:05:43.004 ERROR: process (pid: 2011447) is no longer running 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:43.004 00:05:43.004 real 0m3.880s 00:05:43.004 user 0m3.854s 00:05:43.004 sys 0m0.547s 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.004 00:13:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.004 ************************************ 00:05:43.004 END TEST default_locks 00:05:43.004 ************************************ 00:05:43.004 00:13:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:43.004 00:13:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.004 00:13:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.004 00:13:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.004 ************************************ 00:05:43.004 START TEST default_locks_via_rpc 00:05:43.004 ************************************ 00:05:43.004 00:13:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:43.004 00:13:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2012147 00:05:43.004 00:13:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2012147 00:05:43.004 00:13:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.004 00:13:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2012147 ']' 00:05:43.004 00:13:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.004 00:13:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.004 00:13:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
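locks_exist, used by every test in this file (it ran against pid 2011447 above and will run against 2012147 next), verifies a core claim by listing the advisory file locks the pid holds and grepping for the spdk_cpu_lock prefix; the stray 'lslocks: write error' lines are just lslocks complaining about the pipe that grep -q closes early. A sketch, assuming util-linux lslocks:

    locks_exist() {
        local pid=$1
        # spdk_tgt holds one advisory lock per claimed core,
        # e.g. /var/tmp/spdk_cpu_lock_000 for core 0.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }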
00:05:43.004 00:13:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.004 00:13:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.261 [2024-10-09 00:13:13.693928] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:05:43.261 [2024-10-09 00:13:13.694014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012147 ] 00:05:43.261 [2024-10-09 00:13:13.797516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.518 [2024-10-09 00:13:13.990460] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2012147 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2012147 00:05:44.453 00:13:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.711 00:13:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2012147 00:05:44.711 00:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2012147 ']' 00:05:44.711 00:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2012147 00:05:44.711 00:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:44.711 00:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.711 00:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2012147 00:05:44.711 00:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.711 
00:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.711 00:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2012147' 00:05:44.711 killing process with pid 2012147 00:05:44.711 00:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2012147 00:05:44.711 00:13:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2012147 00:05:47.236 00:05:47.236 real 0m4.028s 00:05:47.236 user 0m3.981s 00:05:47.236 sys 0m0.656s 00:05:47.236 00:13:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.236 00:13:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.236 ************************************ 00:05:47.236 END TEST default_locks_via_rpc 00:05:47.236 ************************************ 00:05:47.236 00:13:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:47.236 00:13:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.236 00:13:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.236 00:13:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.236 ************************************ 00:05:47.236 START TEST non_locking_app_on_locked_coremask 00:05:47.236 ************************************ 00:05:47.236 00:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:47.236 00:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2012847 00:05:47.236 00:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2012847 /var/tmp/spdk.sock 00:05:47.236 00:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.236 00:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2012847 ']' 00:05:47.236 00:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.236 00:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.236 00:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.236 00:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.236 00:13:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.236 [2024-10-09 00:13:17.780790] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
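default_locks_via_rpc, which just finished, exercises the same locks at runtime rather than at startup: framework_disable_cpumask_locks releases the per-core lock files of a live target and framework_enable_cpumask_locks re-acquires them, with no_locks and locks_exist consulted after each step. Done by hand it would look roughly like:

    # Release the per-core lock files of a running target
    # (no_locks should pass while they are gone)...
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    # ...then take them again; locks_exist should pass once more.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks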
00:05:47.236 [2024-10-09 00:13:17.780877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012847 ] 00:05:47.494 [2024-10-09 00:13:17.886439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.494 [2024-10-09 00:13:18.081106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.430 00:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.430 00:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:48.430 00:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:48.430 00:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2013019 00:05:48.430 00:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2013019 /var/tmp/spdk2.sock 00:05:48.430 00:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2013019 ']' 00:05:48.430 00:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.430 00:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.430 00:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.430 00:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.430 00:13:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.430 [2024-10-09 00:13:18.953877] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:05:48.430 [2024-10-09 00:13:18.953984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2013019 ] 00:05:48.688 [2024-10-09 00:13:19.094959] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:48.688 [2024-10-09 00:13:19.095011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.945 [2024-10-09 00:13:19.478077] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.848 00:13:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.848 00:13:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.848 00:13:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2012847 00:05:50.848 00:13:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2012847 00:05:50.848 00:13:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.781 lslocks: write error 00:05:51.782 00:13:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2012847 00:05:51.782 00:13:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2012847 ']' 00:05:51.782 00:13:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2012847 00:05:51.782 00:13:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:51.782 00:13:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.782 00:13:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2012847 00:05:51.782 00:13:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.782 00:13:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.782 00:13:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2012847' 00:05:51.782 killing process with pid 2012847 00:05:51.782 00:13:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2012847 00:05:51.782 00:13:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2012847 00:05:57.051 00:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2013019 00:05:57.051 00:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2013019 ']' 00:05:57.051 00:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2013019 00:05:57.051 00:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:57.051 00:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.051 00:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2013019 00:05:57.051 00:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.051 00:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.051 00:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2013019' 00:05:57.051 
killing process with pid 2013019 00:05:57.051 00:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2013019 00:05:57.051 00:13:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2013019 00:05:58.963 00:05:58.963 real 0m11.876s 00:05:58.963 user 0m12.095s 00:05:58.963 sys 0m1.313s 00:05:58.963 00:13:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.963 00:13:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.963 ************************************ 00:05:58.963 END TEST non_locking_app_on_locked_coremask 00:05:58.963 ************************************ 00:05:59.221 00:13:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:59.222 00:13:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.222 00:13:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.222 00:13:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.222 ************************************ 00:05:59.222 START TEST locking_app_on_unlocked_coremask 00:05:59.222 ************************************ 00:05:59.222 00:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:59.222 00:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:59.222 00:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2014802 00:05:59.222 00:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2014802 /var/tmp/spdk.sock 00:05:59.222 00:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2014802 ']' 00:05:59.222 00:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.222 00:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.222 00:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.222 00:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.222 00:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.222 [2024-10-09 00:13:29.715183] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:05:59.222 [2024-10-09 00:13:29.715271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014802 ] 00:05:59.222 [2024-10-09 00:13:29.819226] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
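These two tests are mirrors of each other: non_locking_app_on_locked_coremask (just ended) started a lock-holding target on core 0 and then a second, lock-disabled one beside it, while locking_app_on_unlocked_coremask (starting here) does the reverse, so the second, lock-enabled instance can claim the core the first left unlocked. Either way both targets share core 0 and differ only in their RPC sockets, roughly:

    # First instance on core 0; with --disable-cpumask-locks it takes no lock.
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    # Second instance on the same core, on its own RPC socket; it may lock core 0.
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &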
00:05:59.222 [2024-10-09 00:13:29.819268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.480 [2024-10-09 00:13:30.017644] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.413 00:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.413 00:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:00.413 00:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2014931 00:06:00.413 00:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2014931 /var/tmp/spdk2.sock 00:06:00.413 00:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2014931 ']' 00:06:00.413 00:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.413 00:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.413 00:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.413 00:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.413 00:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.413 00:13:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.413 [2024-10-09 00:13:30.929074] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:06:00.413 [2024-10-09 00:13:30.929167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014931 ] 00:06:00.671 [2024-10-09 00:13:31.069557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.930 [2024-10-09 00:13:31.459740] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.840 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.840 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:02.840 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2014931 00:06:02.840 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2014931 00:06:02.840 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.406 lslocks: write error 00:06:03.406 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2014802 00:06:03.406 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2014802 ']' 00:06:03.406 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2014802 00:06:03.406 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:03.406 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.406 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2014802 00:06:03.406 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.406 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.406 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2014802' 00:06:03.406 killing process with pid 2014802 00:06:03.406 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2014802 00:06:03.406 00:13:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2014802 00:06:08.713 00:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2014931 00:06:08.713 00:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2014931 ']' 00:06:08.713 00:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2014931 00:06:08.713 00:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:08.713 00:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.713 00:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2014931 00:06:08.713 00:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.714 00:13:38 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.714 00:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2014931' 00:06:08.714 killing process with pid 2014931 00:06:08.714 00:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2014931 00:06:08.714 00:13:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2014931 00:06:11.292 00:06:11.292 real 0m11.698s 00:06:11.292 user 0m11.938s 00:06:11.292 sys 0m1.236s 00:06:11.292 00:13:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.292 00:13:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.292 ************************************ 00:06:11.292 END TEST locking_app_on_unlocked_coremask 00:06:11.292 ************************************ 00:06:11.292 00:13:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:11.292 00:13:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.292 00:13:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.292 00:13:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.292 ************************************ 00:06:11.292 START TEST locking_app_on_locked_coremask 00:06:11.292 ************************************ 00:06:11.292 00:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:11.292 00:13:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2016746 00:06:11.292 00:13:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2016746 /var/tmp/spdk.sock 00:06:11.292 00:13:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.292 00:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2016746 ']' 00:06:11.292 00:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.292 00:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.292 00:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.292 00:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.292 00:13:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.292 [2024-10-09 00:13:41.479599] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
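waitforlisten, echoed again above for pid 2016746, is the suite's readiness gate: it polls until the new target both stays alive and answers on its RPC socket, giving up after max_retries. A rough reconstruction of the loop implied by the trace (rpc_addr, max_retries=100, the i == 0 accounting); the specific probe RPC below is an assumption, not taken from this log:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i < max_retries )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            # Any answered RPC means the app is up (illustrative probe).
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.5
            (( ++i ))
        done
        return 1
    }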
00:06:11.292 [2024-10-09 00:13:41.479687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016746 ] 00:06:11.292 [2024-10-09 00:13:41.578505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.292 [2024-10-09 00:13:41.776084] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2016972 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2016972 /var/tmp/spdk2.sock 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2016972 /var/tmp/spdk2.sock 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2016972 /var/tmp/spdk2.sock 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2016972 ']' 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.226 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.227 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.227 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.227 00:13:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.227 [2024-10-09 00:13:42.660621] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:06:12.227 [2024-10-09 00:13:42.660707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016972 ] 00:06:12.227 [2024-10-09 00:13:42.799231] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2016746 has claimed it. 00:06:12.227 [2024-10-09 00:13:42.799284] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.792 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2016972) - No such process 00:06:12.792 ERROR: process (pid: 2016972) is no longer running 00:06:12.792 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.792 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:12.792 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:12.792 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.792 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.792 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.792 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2016746 00:06:12.792 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2016746 00:06:12.792 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.358 lslocks: write error 00:06:13.358 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2016746 00:06:13.358 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2016746 ']' 00:06:13.358 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2016746 00:06:13.358 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.358 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.358 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2016746 00:06:13.358 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.358 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.358 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2016746' 00:06:13.358 killing process with pid 2016746 00:06:13.358 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2016746 00:06:13.358 00:13:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2016746 00:06:15.897 00:06:15.897 real 0m4.986s 00:06:15.897 user 0m5.148s 00:06:15.897 sys 0m0.889s 00:06:15.897 00:13:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
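That was the positive case for the lock itself: with 2016746 holding core 0, the second lock-enabled target 2016972 died at startup, exactly as claim_cpu_cores reported above, and NOT waitforlisten turned the death into a pass. Reproducing the collision by hand would look something like:

    build/bin/spdk_tgt -m 0x1 &               # claims /var/tmp/spdk_cpu_lock_000
    sleep 1                                   # crude stand-in for waitforlisten
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
    echo $?                                   # non-zero: core 0 was already locked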
00:06:15.897 00:13:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.897 ************************************ 00:06:15.897 END TEST locking_app_on_locked_coremask 00:06:15.897 ************************************ 00:06:15.897 00:13:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:15.897 00:13:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.897 00:13:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.897 00:13:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.897 ************************************ 00:06:15.897 START TEST locking_overlapped_coremask 00:06:15.897 ************************************ 00:06:15.897 00:13:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:15.897 00:13:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:15.897 00:13:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2017677 00:06:15.897 00:13:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2017677 /var/tmp/spdk.sock 00:06:15.897 00:13:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2017677 ']' 00:06:15.897 00:13:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.897 00:13:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.897 00:13:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.897 00:13:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.897 00:13:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.897 [2024-10-09 00:13:46.525145] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
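The overlapped tests switch to multi-core masks: -m is a hex bitmask of CPU cores, so 0x7 (binary 111) claims cores 0-2, matching the three 'Reactor started' notices that follow, while the second instance will use 0x1c (binary 11100, cores 2-4), overlapping the first only on core 2. Decoding the masks with bc, purely to illustrate (assumes bc is installed):

    # 0x7  = 0b00111 -> cores 0,1,2
    # 0x1c = 0b11100 -> cores 2,3,4   (overlap: core 2 only)
    printf 'obase=2; %d\n' 0x7 0x1c | bc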
00:06:15.897 [2024-10-09 00:13:46.525249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017677 ] 00:06:16.157 [2024-10-09 00:13:46.630022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.415 [2024-10-09 00:13:46.824242] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.415 [2024-10-09 00:13:46.824311] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.415 [2024-10-09 00:13:46.824316] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2017903 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2017903 /var/tmp/spdk2.sock 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2017903 /var/tmp/spdk2.sock 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2017903 /var/tmp/spdk2.sock 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2017903 ']' 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.351 00:13:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.351 [2024-10-09 00:13:47.770130] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:06:17.351 [2024-10-09 00:13:47.770216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017903 ] 00:06:17.351 [2024-10-09 00:13:47.913440] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2017677 has claimed it. 00:06:17.351 [2024-10-09 00:13:47.913498] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:17.917 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2017903) - No such process 00:06:17.917 ERROR: process (pid: 2017903) is no longer running 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2017677 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2017677 ']' 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2017677 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2017677 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2017677' 00:06:17.917 killing process with pid 2017677 00:06:17.917 00:13:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2017677 00:06:17.917 00:13:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2017677 00:06:20.446 00:06:20.446 real 0m4.530s 00:06:20.446 user 0m12.147s 00:06:20.446 sys 0m0.632s 00:06:20.446 00:13:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.446 00:13:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.446 ************************************ 00:06:20.446 END TEST locking_overlapped_coremask 00:06:20.446 ************************************ 00:06:20.446 00:13:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:20.446 00:13:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.446 00:13:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.446 00:13:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.446 ************************************ 00:06:20.446 START TEST locking_overlapped_coremask_via_rpc 00:06:20.446 ************************************ 00:06:20.446 00:13:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:20.446 00:13:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2018388 00:06:20.446 00:13:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2018388 /var/tmp/spdk.sock 00:06:20.446 00:13:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:20.446 00:13:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2018388 ']' 00:06:20.446 00:13:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.446 00:13:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.446 00:13:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.446 00:13:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.446 00:13:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.704 [2024-10-09 00:13:51.144408] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:20.704 [2024-10-09 00:13:51.144493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018388 ] 00:06:20.704 [2024-10-09 00:13:51.252641] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
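Both halves of this test hinge on the per-core lock files that check_remaining_locks enumerated above: by default spdk_tgt claims every core in its mask by taking an exclusive lock on /var/tmp/spdk_cpu_lock_NNN, which is why the 0x1c target just died when the 0x7 target already owned core 2, while --disable-cpumask-locks skips the claim at startup. An illustrative shell equivalent of the claim, assuming the lock-file naming seen above (the real logic is C code in app.c:claim_cpu_cores, not shell):

  claim_core() {
      local core=$1 lock fd
      lock=$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")
      exec {fd}>"$lock"            # create/open the per-core lock file
      if ! flock -xn "$fd"; then   # non-blocking exclusive lock
          echo "core $core is already claimed by another target" >&2
          return 1
      fi
  }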
00:06:20.704 [2024-10-09 00:13:51.252680] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.962 [2024-10-09 00:13:51.470150] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.962 [2024-10-09 00:13:51.470235] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.962 [2024-10-09 00:13:51.470256] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.896 00:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.896 00:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.896 00:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:21.896 00:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2018620 00:06:21.896 00:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2018620 /var/tmp/spdk2.sock 00:06:21.896 00:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2018620 ']' 00:06:21.896 00:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.896 00:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.896 00:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.896 00:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.896 00:13:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.896 [2024-10-09 00:13:52.411083] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:21.896 [2024-10-09 00:13:52.411179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018620 ] 00:06:22.153 [2024-10-09 00:13:52.556131] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
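The two masks in play overlap on exactly one core, which is what the upcoming lock claims collide on: 0x07 is binary 00111, cores 0-2 (the first target's reactors above), and 0x1c is binary 11100, cores 2-4. A quick way to expand a mask into its core list:

  mask_to_cores() {
      local mask=$(( $1 )) core=0 cores=()
      while (( mask )); do
          (( mask & 1 )) && cores+=("$core")   # low bit set -> core is in the mask
          (( mask >>= 1, core++ ))
      done
      echo "${cores[*]}"
  }
  mask_to_cores 0x07   # -> 0 1 2
  mask_to_cores 0x1c   # -> 2 3 4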
00:06:22.153 [2024-10-09 00:13:52.556174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.411 [2024-10-09 00:13:52.964977] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.411 [2024-10-09 00:13:52.968116] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.411 [2024-10-09 00:13:52.968141] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.305 [2024-10-09 00:13:54.899198] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2018388 has claimed it. 
00:06:24.305 request: 00:06:24.305 { 00:06:24.305 "method": "framework_enable_cpumask_locks", 00:06:24.305 "req_id": 1 00:06:24.305 } 00:06:24.305 Got JSON-RPC error response 00:06:24.305 response: 00:06:24.305 { 00:06:24.305 "code": -32603, 00:06:24.305 "message": "Failed to claim CPU core: 2" 00:06:24.305 } 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2018388 /var/tmp/spdk.sock 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2018388 ']' 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.305 00:13:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.562 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.562 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.562 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2018620 /var/tmp/spdk2.sock 00:06:24.562 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2018620 ']' 00:06:24.562 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.562 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.562 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
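The request/response pair above is the standard JSON-RPC failure shape: -32603 is the spec's generic internal-error code, and the message carries the SPDK-specific reason (core 2 is held by pid 2018388). rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so the failing call is equivalent to:

  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # -> request  {"method": "framework_enable_cpumask_locks", "req_id": 1}
  # <- response {"code": -32603, "message": "Failed to claim CPU core: 2"}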
00:06:24.562 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.562 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.820 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.820 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:24.820 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:24.820 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.820 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.820 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.820 00:06:24.820 real 0m4.272s 00:06:24.820 user 0m1.128s 00:06:24.820 sys 0m0.184s 00:06:24.820 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.820 00:13:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.820 ************************************ 00:06:24.820 END TEST locking_overlapped_coremask_via_rpc 00:06:24.820 ************************************ 00:06:24.820 00:13:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:24.820 00:13:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2018388 ]] 00:06:24.820 00:13:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2018388 00:06:24.820 00:13:55 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2018388 ']' 00:06:24.820 00:13:55 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2018388 00:06:24.820 00:13:55 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:24.820 00:13:55 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.820 00:13:55 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2018388 00:06:24.820 00:13:55 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.820 00:13:55 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.820 00:13:55 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2018388' 00:06:24.820 killing process with pid 2018388 00:06:24.820 00:13:55 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2018388 00:06:24.820 00:13:55 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2018388 00:06:28.106 00:13:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2018620 ]] 00:06:28.106 00:13:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2018620 00:06:28.106 00:13:58 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2018620 ']' 00:06:28.106 00:13:58 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2018620 00:06:28.106 00:13:58 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:28.106 00:13:58 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:28.106 00:13:58 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2018620 00:06:28.106 00:13:58 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:28.106 00:13:58 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:28.106 00:13:58 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2018620' 00:06:28.106 killing process with pid 2018620 00:06:28.106 00:13:58 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2018620 00:06:28.106 00:13:58 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2018620 00:06:30.630 00:14:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.630 00:14:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:30.630 00:14:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2018388 ]] 00:06:30.630 00:14:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2018388 00:06:30.630 00:14:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2018388 ']' 00:06:30.630 00:14:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2018388 00:06:30.630 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2018388) - No such process 00:06:30.630 00:14:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2018388 is not found' 00:06:30.630 Process with pid 2018388 is not found 00:06:30.630 00:14:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2018620 ]] 00:06:30.630 00:14:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2018620 00:06:30.630 00:14:00 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2018620 ']' 00:06:30.630 00:14:00 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2018620 00:06:30.630 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2018620) - No such process 00:06:30.630 00:14:00 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2018620 is not found' 00:06:30.630 Process with pid 2018620 is not found 00:06:30.630 00:14:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.630 00:06:30.630 real 0m51.216s 00:06:30.630 user 1m26.723s 00:06:30.630 sys 0m6.694s 00:06:30.630 00:14:00 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.630 00:14:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.630 ************************************ 00:06:30.630 END TEST cpu_locks 00:06:30.630 ************************************ 00:06:30.630 00:06:30.630 real 1m22.299s 00:06:30.630 user 2m25.814s 00:06:30.630 sys 0m10.516s 00:06:30.630 00:14:00 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.630 00:14:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.630 ************************************ 00:06:30.630 END TEST event 00:06:30.630 ************************************ 00:06:30.630 00:14:00 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/thread.sh 00:06:30.630 00:14:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.630 00:14:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.630 00:14:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.630 ************************************ 00:06:30.630 START TEST thread 00:06:30.630 ************************************ 00:06:30.630 00:14:00 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/thread.sh 00:06:30.630 * Looking for test storage... 00:06:30.630 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread 00:06:30.630 00:14:00 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:30.630 00:14:00 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:30.630 00:14:00 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:30.630 00:14:00 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:30.630 00:14:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.630 00:14:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.630 00:14:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.630 00:14:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.630 00:14:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.630 00:14:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.630 00:14:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.630 00:14:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.630 00:14:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.630 00:14:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.630 00:14:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.630 00:14:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:30.630 00:14:00 thread -- scripts/common.sh@345 -- # : 1 00:06:30.630 00:14:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.630 00:14:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.630 00:14:00 thread -- scripts/common.sh@365 -- # decimal 1 00:06:30.630 00:14:00 thread -- scripts/common.sh@353 -- # local d=1 00:06:30.630 00:14:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.630 00:14:00 thread -- scripts/common.sh@355 -- # echo 1 00:06:30.630 00:14:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.630 00:14:00 thread -- scripts/common.sh@366 -- # decimal 2 00:06:30.630 00:14:00 thread -- scripts/common.sh@353 -- # local d=2 00:06:30.630 00:14:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.630 00:14:00 thread -- scripts/common.sh@355 -- # echo 2 00:06:30.630 00:14:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.630 00:14:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.630 00:14:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.630 00:14:00 thread -- scripts/common.sh@368 -- # return 0 00:06:30.630 00:14:00 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.630 00:14:00 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:30.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.630 --rc genhtml_branch_coverage=1 00:06:30.630 --rc genhtml_function_coverage=1 00:06:30.630 --rc genhtml_legend=1 00:06:30.630 --rc geninfo_all_blocks=1 00:06:30.630 --rc geninfo_unexecuted_blocks=1 00:06:30.630 00:06:30.630 ' 00:06:30.630 00:14:00 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:30.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.630 --rc genhtml_branch_coverage=1 00:06:30.630 --rc genhtml_function_coverage=1 00:06:30.630 --rc genhtml_legend=1 00:06:30.630 --rc geninfo_all_blocks=1 00:06:30.630 --rc geninfo_unexecuted_blocks=1 00:06:30.630 
00:06:30.630 ' 00:06:30.630 00:14:00 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:30.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.630 --rc genhtml_branch_coverage=1 00:06:30.630 --rc genhtml_function_coverage=1 00:06:30.630 --rc genhtml_legend=1 00:06:30.630 --rc geninfo_all_blocks=1 00:06:30.630 --rc geninfo_unexecuted_blocks=1 00:06:30.630 00:06:30.630 ' 00:06:30.630 00:14:00 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:30.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.630 --rc genhtml_branch_coverage=1 00:06:30.630 --rc genhtml_function_coverage=1 00:06:30.630 --rc genhtml_legend=1 00:06:30.630 --rc geninfo_all_blocks=1 00:06:30.630 --rc geninfo_unexecuted_blocks=1 00:06:30.630 00:06:30.630 ' 00:06:30.630 00:14:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.630 00:14:00 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:30.630 00:14:00 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.630 00:14:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.630 ************************************ 00:06:30.630 START TEST thread_poller_perf 00:06:30.630 ************************************ 00:06:30.630 00:14:00 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.630 [2024-10-09 00:14:00.985670] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:30.630 [2024-10-09 00:14:00.985761] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020283 ] 00:06:30.630 [2024-10-09 00:14:01.087346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.888 [2024-10-09 00:14:01.278070] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.888 Running 1000 pollers for 1 seconds with 1 microseconds period. 
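poller_perf's flags are spelled out by its own banner: -b is the poller count, -t the run time in seconds, and -l the poller period in microseconds. With -l 1 the pollers are registered as timed pollers, so each execution also pays the reactor's timer bookkeeping; the second invocation below uses -l 0, i.e. untimed pollers driven back-to-back every iteration, and the gap between the two poller_cost figures largely reflects that extra work. Both invocations, with paths as used by this run:

  test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 usec period
  test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # 1000 untimed pollers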
00:06:32.271 [2024-10-08T22:14:02.906Z] ====================================== 00:06:32.271 [2024-10-08T22:14:02.906Z] busy:2109114644 (cyc) 00:06:32.271 [2024-10-08T22:14:02.906Z] total_run_count: 394000 00:06:32.271 [2024-10-08T22:14:02.906Z] tsc_hz: 2100000000 (cyc) 00:06:32.271 [2024-10-08T22:14:02.906Z] ====================================== 00:06:32.271 [2024-10-08T22:14:02.906Z] poller_cost: 5353 (cyc), 2549 (nsec) 00:06:32.271 00:06:32.271 real 0m1.714s 00:06:32.271 user 0m1.573s 00:06:32.271 sys 0m0.135s 00:06:32.271 00:14:02 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.271 00:14:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.271 ************************************ 00:06:32.271 END TEST thread_poller_perf 00:06:32.271 ************************************ 00:06:32.271 00:14:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.271 00:14:02 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:32.271 00:14:02 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.271 00:14:02 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.271 ************************************ 00:06:32.271 START TEST thread_poller_perf 00:06:32.271 ************************************ 00:06:32.271 00:14:02 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.271 [2024-10-09 00:14:02.765562] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:32.271 [2024-10-09 00:14:02.765641] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020535 ] 00:06:32.271 [2024-10-09 00:14:02.868137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.529 [2024-10-09 00:14:03.053527] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.529 Running 1000 pollers for 1 seconds with 0 microseconds period. 
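The statistics block reduces to a single division: poller_cost is busy cycles over total_run_count, and the nanosecond figure is the same quotient rescaled by tsc_hz. For the numbers above, 2109114644 / 394000 = 5353 cycles per poll, and 5353 cycles at 2100000000 cycles/sec = 2549 nsec per poll; reproduced in awk:

  awk 'BEGIN { busy = 2109114644; runs = 394000; hz = 2100000000
               cyc = busy / runs
               printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / hz * 1e9 }'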
00:06:33.914 [2024-10-08T22:14:04.549Z] ====================================== 00:06:33.914 [2024-10-08T22:14:04.549Z] busy:2102336986 (cyc) 00:06:33.914 [2024-10-08T22:14:04.549Z] total_run_count: 5174000 00:06:33.914 [2024-10-08T22:14:04.549Z] tsc_hz: 2100000000 (cyc) 00:06:33.914 [2024-10-08T22:14:04.549Z] ====================================== 00:06:33.914 [2024-10-08T22:14:04.549Z] poller_cost: 406 (cyc), 193 (nsec) 00:06:33.914 00:06:33.914 real 0m1.698s 00:06:33.914 user 0m1.558s 00:06:33.914 sys 0m0.135s 00:06:33.915 00:14:04 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.915 00:14:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.915 ************************************ 00:06:33.915 END TEST thread_poller_perf 00:06:33.915 ************************************ 00:06:33.915 00:14:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:33.915 00:06:33.915 real 0m3.697s 00:06:33.915 user 0m3.264s 00:06:33.915 sys 0m0.444s 00:06:33.915 00:14:04 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.915 00:14:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.915 ************************************ 00:06:33.915 END TEST thread 00:06:33.915 ************************************ 00:06:33.915 00:14:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:33.915 00:14:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/cmdline.sh 00:06:33.915 00:14:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.915 00:14:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.915 00:14:04 -- common/autotest_common.sh@10 -- # set +x 00:06:33.915 ************************************ 00:06:33.915 START TEST app_cmdline 00:06:33.915 ************************************ 00:06:33.915 00:14:04 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/cmdline.sh 00:06:34.175 * Looking for test storage... 
00:06:34.175 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app 00:06:34.175 00:14:04 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:34.175 00:14:04 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:34.175 00:14:04 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:34.175 00:14:04 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.175 00:14:04 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:34.175 00:14:04 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.175 00:14:04 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:34.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.175 --rc genhtml_branch_coverage=1 00:06:34.175 --rc genhtml_function_coverage=1 00:06:34.175 --rc genhtml_legend=1 00:06:34.175 --rc geninfo_all_blocks=1 00:06:34.175 --rc geninfo_unexecuted_blocks=1 00:06:34.175 00:06:34.175 ' 00:06:34.175 00:14:04 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:34.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.175 --rc genhtml_branch_coverage=1 00:06:34.175 --rc genhtml_function_coverage=1 00:06:34.175 --rc genhtml_legend=1 00:06:34.175 --rc geninfo_all_blocks=1 00:06:34.175 --rc geninfo_unexecuted_blocks=1 
00:06:34.175 00:06:34.175 ' 00:06:34.176 00:14:04 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:34.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.176 --rc genhtml_branch_coverage=1 00:06:34.176 --rc genhtml_function_coverage=1 00:06:34.176 --rc genhtml_legend=1 00:06:34.176 --rc geninfo_all_blocks=1 00:06:34.176 --rc geninfo_unexecuted_blocks=1 00:06:34.176 00:06:34.176 ' 00:06:34.176 00:14:04 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:34.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.176 --rc genhtml_branch_coverage=1 00:06:34.176 --rc genhtml_function_coverage=1 00:06:34.176 --rc genhtml_legend=1 00:06:34.176 --rc geninfo_all_blocks=1 00:06:34.176 --rc geninfo_unexecuted_blocks=1 00:06:34.176 00:06:34.176 ' 00:06:34.176 00:14:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:34.176 00:14:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2020888 00:06:34.176 00:14:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2020888 00:06:34.176 00:14:04 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:34.176 00:14:04 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2020888 ']' 00:06:34.176 00:14:04 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.176 00:14:04 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.176 00:14:04 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.176 00:14:04 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.176 00:14:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.176 [2024-10-09 00:14:04.793923] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
00:06:34.176 [2024-10-09 00:14:04.794014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020888 ] 00:06:34.434 [2024-10-09 00:14:04.899853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.695 [2024-10-09 00:14:05.092963] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.265 00:14:05 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.265 00:14:05 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:35.265 00:14:05 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:35.529 { 00:06:35.529 "version": "SPDK v25.01-pre git sha1 6101e4048", 00:06:35.529 "fields": { 00:06:35.529 "major": 25, 00:06:35.529 "minor": 1, 00:06:35.529 "patch": 0, 00:06:35.529 "suffix": "-pre", 00:06:35.529 "commit": "6101e4048" 00:06:35.529 } 00:06:35.529 } 00:06:35.529 00:14:06 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:35.529 00:14:06 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:35.529 00:14:06 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:35.529 00:14:06 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:35.529 00:14:06 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:35.529 00:14:06 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.529 00:14:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.529 00:14:06 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:35.529 00:14:06 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:35.529 00:14:06 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.529 00:14:06 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:35.529 00:14:06 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:35.529 00:14:06 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.529 00:14:06 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:35.529 00:14:06 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.529 00:14:06 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py 00:06:35.529 00:14:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.529 00:14:06 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py 00:06:35.529 00:14:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.529 00:14:06 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py 00:06:35.529 00:14:06 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.529 00:14:06 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py 00:06:35.529 00:14:06 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py ]] 00:06:35.529 00:14:06 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.787 request: 00:06:35.787 { 00:06:35.787 "method": "env_dpdk_get_mem_stats", 00:06:35.787 "req_id": 1 00:06:35.787 } 00:06:35.787 Got JSON-RPC error response 00:06:35.787 response: 00:06:35.787 { 00:06:35.787 "code": -32601, 00:06:35.787 "message": "Method not found" 00:06:35.787 } 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.787 00:14:06 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2020888 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2020888 ']' 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2020888 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2020888 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2020888' 00:06:35.787 killing process with pid 2020888 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@969 -- # kill 2020888 00:06:35.787 00:14:06 app_cmdline -- common/autotest_common.sh@974 -- # wait 2020888 00:06:38.316 00:06:38.316 real 0m4.260s 00:06:38.316 user 0m4.466s 00:06:38.316 sys 0m0.618s 00:06:38.316 00:14:08 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.316 00:14:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.316 ************************************ 00:06:38.316 END TEST app_cmdline 00:06:38.316 ************************************ 00:06:38.316 00:14:08 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/version.sh 00:06:38.316 00:14:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.316 00:14:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.316 00:14:08 -- common/autotest_common.sh@10 -- # set +x 00:06:38.316 ************************************ 00:06:38.316 START TEST version 00:06:38.316 ************************************ 00:06:38.316 00:14:08 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app/version.sh 00:06:38.316 * Looking for test storage... 
00:06:38.316 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/app 00:06:38.316 00:14:08 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.316 00:14:08 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.316 00:14:08 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.575 00:14:09 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.575 00:14:09 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.575 00:14:09 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.575 00:14:09 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.575 00:14:09 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.575 00:14:09 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.575 00:14:09 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.575 00:14:09 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.575 00:14:09 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.575 00:14:09 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.575 00:14:09 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.575 00:14:09 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.575 00:14:09 version -- scripts/common.sh@344 -- # case "$op" in 00:06:38.575 00:14:09 version -- scripts/common.sh@345 -- # : 1 00:06:38.575 00:14:09 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.575 00:14:09 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.575 00:14:09 version -- scripts/common.sh@365 -- # decimal 1 00:06:38.575 00:14:09 version -- scripts/common.sh@353 -- # local d=1 00:06:38.575 00:14:09 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.575 00:14:09 version -- scripts/common.sh@355 -- # echo 1 00:06:38.575 00:14:09 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.575 00:14:09 version -- scripts/common.sh@366 -- # decimal 2 00:06:38.575 00:14:09 version -- scripts/common.sh@353 -- # local d=2 00:06:38.575 00:14:09 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.575 00:14:09 version -- scripts/common.sh@355 -- # echo 2 00:06:38.575 00:14:09 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.575 00:14:09 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.575 00:14:09 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.575 00:14:09 version -- scripts/common.sh@368 -- # return 0 00:06:38.575 00:14:09 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.575 00:14:09 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.575 --rc genhtml_branch_coverage=1 00:06:38.575 --rc genhtml_function_coverage=1 00:06:38.575 --rc genhtml_legend=1 00:06:38.575 --rc geninfo_all_blocks=1 00:06:38.575 --rc geninfo_unexecuted_blocks=1 00:06:38.575 00:06:38.575 ' 00:06:38.575 00:14:09 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.575 --rc genhtml_branch_coverage=1 00:06:38.575 --rc genhtml_function_coverage=1 00:06:38.575 --rc genhtml_legend=1 00:06:38.575 --rc geninfo_all_blocks=1 00:06:38.575 --rc geninfo_unexecuted_blocks=1 00:06:38.575 00:06:38.575 ' 00:06:38.575 00:14:09 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.575 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.575 --rc genhtml_branch_coverage=1 00:06:38.575 --rc genhtml_function_coverage=1 00:06:38.575 --rc genhtml_legend=1 00:06:38.575 --rc geninfo_all_blocks=1 00:06:38.575 --rc geninfo_unexecuted_blocks=1 00:06:38.575 00:06:38.575 ' 00:06:38.575 00:14:09 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.575 --rc genhtml_branch_coverage=1 00:06:38.575 --rc genhtml_function_coverage=1 00:06:38.575 --rc genhtml_legend=1 00:06:38.575 --rc geninfo_all_blocks=1 00:06:38.575 --rc geninfo_unexecuted_blocks=1 00:06:38.575 00:06:38.575 ' 00:06:38.575 00:14:09 version -- app/version.sh@17 -- # get_header_version major 00:06:38.575 00:14:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h 00:06:38.575 00:14:09 version -- app/version.sh@14 -- # cut -f2 00:06:38.575 00:14:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.575 00:14:09 version -- app/version.sh@17 -- # major=25 00:06:38.575 00:14:09 version -- app/version.sh@18 -- # get_header_version minor 00:06:38.575 00:14:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h 00:06:38.575 00:14:09 version -- app/version.sh@14 -- # cut -f2 00:06:38.575 00:14:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.575 00:14:09 version -- app/version.sh@18 -- # minor=1 00:06:38.575 00:14:09 version -- app/version.sh@19 -- # get_header_version patch 00:06:38.575 00:14:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h 00:06:38.575 00:14:09 version -- app/version.sh@14 -- # cut -f2 00:06:38.575 00:14:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.575 00:14:09 version -- app/version.sh@19 -- # patch=0 00:06:38.575 00:14:09 version -- app/version.sh@20 -- # get_header_version suffix 00:06:38.575 00:14:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/include/spdk/version.h 00:06:38.575 00:14:09 version -- app/version.sh@14 -- # cut -f2 00:06:38.575 00:14:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:38.575 00:14:09 version -- app/version.sh@20 -- # suffix=-pre 00:06:38.575 00:14:09 version -- app/version.sh@22 -- # version=25.1 00:06:38.575 00:14:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:38.575 00:14:09 version -- app/version.sh@28 -- # version=25.1rc0 00:06:38.575 00:14:09 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/vfio-user-phy-autotest/spdk/python 00:06:38.575 00:14:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:38.575 00:14:09 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:38.575 00:14:09 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:38.575 00:06:38.575 real 0m0.235s 00:06:38.575 user 0m0.158s 00:06:38.575 sys 0m0.116s 00:06:38.575 00:14:09 version -- common/autotest_common.sh@1126 -- # xtrace_disable 
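get_header_version is just a grep over the generated include/spdk/version.h: pull the requested #define, keep the value field, strip the quotes. A condensed equivalent of the traced pipeline, assuming it is run from the spdk tree (the real helper builds the header path from the workspace root):

  get_header_version() {   # e.g. get_header_version MAJOR -> 25
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
          cut -f2 | tr -d '"'
  }

With major=25, minor=1, patch=0 and suffix=-pre, patch 0 is dropped to give 25.1, and the -pre suffix corresponds to the rc0 in the 25.1rc0 reported by the Python spdk module, which is exactly the equality asserted above.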
00:06:38.575 00:14:09 version -- common/autotest_common.sh@10 -- # set +x 00:06:38.575 ************************************ 00:06:38.575 END TEST version 00:06:38.575 ************************************ 00:06:38.575 00:14:09 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:38.575 00:14:09 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:38.575 00:14:09 -- spdk/autotest.sh@194 -- # uname -s 00:06:38.575 00:14:09 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:38.575 00:14:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:38.575 00:14:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:38.575 00:14:09 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:38.575 00:14:09 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:38.575 00:14:09 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:38.575 00:14:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:38.575 00:14:09 -- common/autotest_common.sh@10 -- # set +x 00:06:38.575 00:14:09 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:38.575 00:14:09 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:38.575 00:14:09 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:06:38.575 00:14:09 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:06:38.575 00:14:09 -- spdk/autotest.sh@311 -- # '[' 1 -eq 1 ']' 00:06:38.575 00:14:09 -- spdk/autotest.sh@312 -- # HUGENODE=0 00:06:38.575 00:14:09 -- spdk/autotest.sh@312 -- # run_test vfio_user_qemu /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh --iso 00:06:38.575 00:14:09 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:38.575 00:14:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.575 00:14:09 -- common/autotest_common.sh@10 -- # set +x 00:06:38.575 ************************************ 00:06:38.575 START TEST vfio_user_qemu 00:06:38.575 ************************************ 00:06:38.575 00:14:09 vfio_user_qemu -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh --iso 00:06:38.835 * Looking for test storage... 
00:06:38.835 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user 00:06:38.835 00:14:09 vfio_user_qemu -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.835 00:14:09 vfio_user_qemu -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.835 00:14:09 vfio_user_qemu -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.835 00:14:09 vfio_user_qemu -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@344 -- # case "$op" in 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@345 -- # : 1 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@365 -- # decimal 1 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@353 -- # local d=1 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@355 -- # echo 1 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@366 -- # decimal 2 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@353 -- # local d=2 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@355 -- # echo 2 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.835 00:14:09 vfio_user_qemu -- scripts/common.sh@368 -- # return 0 00:06:38.835 00:14:09 vfio_user_qemu -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.835 00:14:09 vfio_user_qemu -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.835 --rc genhtml_branch_coverage=1 00:06:38.835 --rc genhtml_function_coverage=1 00:06:38.835 --rc genhtml_legend=1 00:06:38.835 --rc geninfo_all_blocks=1 00:06:38.835 --rc geninfo_unexecuted_blocks=1 00:06:38.835 00:06:38.835 ' 00:06:38.835 00:14:09 vfio_user_qemu -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.835 --rc genhtml_branch_coverage=1 00:06:38.835 --rc genhtml_function_coverage=1 
00:06:38.835 --rc genhtml_legend=1 00:06:38.835 --rc geninfo_all_blocks=1 00:06:38.835 --rc geninfo_unexecuted_blocks=1 00:06:38.835 00:06:38.835 ' 00:06:38.835 00:14:09 vfio_user_qemu -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.835 --rc genhtml_branch_coverage=1 00:06:38.835 --rc genhtml_function_coverage=1 00:06:38.835 --rc genhtml_legend=1 00:06:38.835 --rc geninfo_all_blocks=1 00:06:38.835 --rc geninfo_unexecuted_blocks=1 00:06:38.835 00:06:38.835 ' 00:06:38.835 00:14:09 vfio_user_qemu -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.835 --rc genhtml_branch_coverage=1 00:06:38.835 --rc genhtml_function_coverage=1 00:06:38.835 --rc genhtml_legend=1 00:06:38.835 --rc geninfo_all_blocks=1 00:06:38.835 --rc geninfo_unexecuted_blocks=1 00:06:38.835 00:06:38.835 ' 00:06:38.835 00:14:09 vfio_user_qemu -- vfio_user/vfio_user.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh 00:06:38.835 00:14:09 vfio_user_qemu -- vfio_user/common.sh@6 -- # : 128 00:06:38.835 00:14:09 vfio_user_qemu -- vfio_user/common.sh@7 -- # : 512 00:06:38.835 00:14:09 vfio_user_qemu -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@6 -- # : false 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@7 -- # : /root/vhost_test 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@9 -- # : qemu-img 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.. 
00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@14 -- # VM_PASSWORD=root 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/vfio_user.sh 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost 00:06:38.835 00:14:09 vfio_user_qemu -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]' 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@2 -- # vhost_0_main_core=0 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20 00:06:38.836 
00:14:09 vfio_user_qemu -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24 00:06:38.836 00:14:09 vfio_user_qemu -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1 00:06:38.836 00:14:09 vfio_user_qemu -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh 00:06:38.836 00:14:09 vfio_user_qemu -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:06:38.836 00:14:09 vfio_user_qemu -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:06:38.836 00:14:09 vfio_user_qemu -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:06:38.836 00:14:09 vfio_user_qemu -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler 00:06:38.836 00:14:09 vfio_user_qemu -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:06:38.836 00:14:09 vfio_user_qemu -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh 00:06:38.836 00:14:09 vfio_user_qemu -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:06:38.836 00:14:09 vfio_user_qemu -- scheduler/cgroups.sh@244 -- # check_cgroup 00:06:38.836 00:14:09 vfio_user_qemu -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:06:38.836 00:14:09 vfio_user_qemu -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:06:38.836 00:14:09 vfio_user_qemu -- scheduler/cgroups.sh@10 -- # echo 2 00:06:38.836 00:14:09 vfio_user_qemu -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:06:38.836 00:14:09 vfio_user_qemu -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:38.836 00:14:09 vfio_user_qemu -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]] 00:06:38.836 00:14:09 vfio_user_qemu -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:38.836 00:14:09 vfio_user_qemu -- vfio_user/vfio_user.sh@11 -- # echo 'Running SPDK vfio-user fio autotest...' 00:06:38.836 Running SPDK vfio-user fio autotest... 
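The check_cgroup trace above detects cgroup v2 by testing for the unified hierarchy's root controllers file and confirming it lists the cpuset controller, then emits "2" into cgroup_version. Reconstructed as a standalone sketch; the v2 path matches the trace line for line, while the v1 fallback branch is an assumption (this run never reaches it):

    # sketch of the cgroup-version probe from scheduler/cgroups.sh
    check_cgroup() {
        # cgroup v2 exposes a unified hierarchy with a root controllers file
        if [[ -e /sys/fs/cgroup/cgroup.controllers ]] &&
           [[ $(< /sys/fs/cgroup/cgroup.controllers) == *cpuset* ]]; then
            echo 2 && return 0
        fi
        # assumed v1 fallback; not exercised in this log
        [[ -e /sys/fs/cgroup/cpuset ]] && echo 1 && return 0
        return 1
    }
    cgroup_version=$(check_cgroup)   # -> 2 on this host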
00:06:38.836 00:14:09 vfio_user_qemu -- vfio_user/vfio_user.sh@13 -- # vhosttestinit 00:06:38.836 00:14:09 vfio_user_qemu -- vhost/common.sh@37 -- # '[' iso == iso ']' 00:06:38.836 00:14:09 vfio_user_qemu -- vhost/common.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh 00:06:41.367 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:41.367 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:41.367 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:41.367 00:14:11 vfio_user_qemu -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]] 00:06:41.367 00:14:11 vfio_user_qemu -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:41.367 00:14:11 vfio_user_qemu -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:41.367 00:14:11 vfio_user_qemu -- vfio_user/vfio_user.sh@15 -- # run_test vfio_user_nvme_fio /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh 00:06:41.367 00:14:11 vfio_user_qemu -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.367 00:14:11 vfio_user_qemu -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.367 00:14:11 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x 00:06:41.627 ************************************ 00:06:41.627 START TEST vfio_user_nvme_fio 00:06:41.627 ************************************ 00:06:41.627 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh 00:06:41.627 * Looking for test storage... 
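Every PCI function that setup.sh enumerates above reports "Already using the vfio-pci driver", so no rebinding is needed before the test starts. The binding state it reports can be spot-checked by reading a device's driver symlink in sysfs; the BDF below is copied from the log and the command is an illustrative aside, not part of the test scripts:

    # which kernel driver is this PCI function bound to?
    basename "$(readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver)"
    # expected output on this host: vfio-pci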
00:06:41.627 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme 00:06:41.627 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:41.627 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.627 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.627 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.627 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.627 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.627 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.627 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.627 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.627 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.627 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.627 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@344 -- # case "$op" in 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@345 -- # : 1 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@365 -- # decimal 1 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@353 -- # local d=1 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@355 -- # echo 1 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@366 -- # decimal 2 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@353 -- # local d=2 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@355 -- # echo 2 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scripts/common.sh@368 -- # return 0 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.628 --rc genhtml_branch_coverage=1 00:06:41.628 --rc genhtml_function_coverage=1 00:06:41.628 --rc genhtml_legend=1 00:06:41.628 --rc geninfo_all_blocks=1 00:06:41.628 --rc geninfo_unexecuted_blocks=1 00:06:41.628 00:06:41.628 ' 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.628 --rc genhtml_branch_coverage=1 00:06:41.628 --rc genhtml_function_coverage=1 00:06:41.628 --rc genhtml_legend=1 00:06:41.628 --rc geninfo_all_blocks=1 00:06:41.628 --rc geninfo_unexecuted_blocks=1 00:06:41.628 00:06:41.628 ' 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.628 --rc genhtml_branch_coverage=1 00:06:41.628 --rc genhtml_function_coverage=1 00:06:41.628 --rc genhtml_legend=1 00:06:41.628 --rc geninfo_all_blocks=1 00:06:41.628 --rc geninfo_unexecuted_blocks=1 00:06:41.628 00:06:41.628 ' 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.628 --rc genhtml_branch_coverage=1 00:06:41.628 --rc genhtml_function_coverage=1 00:06:41.628 --rc genhtml_legend=1 00:06:41.628 --rc geninfo_all_blocks=1 00:06:41.628 --rc geninfo_unexecuted_blocks=1 00:06:41.628 00:06:41.628 ' 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@6 -- # : 128 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@7 -- # : 512 00:06:41.628 00:14:12 
vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@6 -- # : false 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@7 -- # : /root/vhost_test 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@9 -- # : qemu-img 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.. 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@14 -- # VM_PASSWORD=root 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_fio.sh 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]' 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@2 -- # vhost_0_main_core=0 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- 
common/autotest.config@13 -- # VM_3_qemu_mask=7-8 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@244 -- # check_cgroup 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio 
-- scheduler/cgroups.sh@10 -- # echo 2 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]] 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/common.sh 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]' 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9 00:06:41.628 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@13 -- # get_vhost_dir 0 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@13 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock' 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@15 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@16 -- # vm_no=2 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@18 -- # trap clean_vfio_user EXIT 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@19 -- # vhosttestinit 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@37 -- # '[' '' == iso ']' 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]] 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@46 -- # [[ ! 
-f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@21 -- # timing_enter start_vfio_user 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@22 -- # vfio_user_run 0 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@11 -- # local vhost_name=0 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@12 -- # local vfio_user_dir nvmf_pid_file rpc_py 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@14 -- # get_vhost_dir 0 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@14 -- # vfio_user_dir=/root/vhost_test/vhost/0 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@15 -- # nvmf_pid_file=/root/vhost_test/vhost/0/vhost.pid 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@16 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock' 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@18 -- # mkdir -p /root/vhost_test/vhost/0 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@20 -- # timing_enter vfio_user_start 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@22 -- # nvmfpid=2023150 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@23 -- # echo 2023150 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@25 -- # echo 'Process pid: 2023150' 00:06:41.887 Process pid: 2023150 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/nvmf_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@26 -- # echo 'waiting for app to run...' 00:06:41.887 waiting for app to run... 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@27 -- # waitforlisten 2023150 /root/vhost_test/vhost/0/rpc.sock 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@831 -- # '[' -z 2023150 ']' 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@835 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...' 00:06:41.887 Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock... 
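Once nvmf_tgt is listening on the UNIX-domain RPC socket, the RPCs traced below provision one vfio-user subsystem per VM. Condensed into a sketch for VM 0 (every path, NQN, and argument is copied verbatim from the log; the $rpc shorthand is an editorial convenience, and VMs 1 and 2 repeat the pattern with cnode1/Malloc1 and cnode2/Nvme0n1):

    rpc="/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock"
    # one-time: register the vfio-user transport with the target
    $rpc nvmf_create_transport -t VFIOUSER
    # per VM: subsystem, backing bdev, namespace, and a listener rooted at
    # the VM's muser directory (QEMU later attaches to .../cntrl in there)
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode0 -s SPDK000 -a
    $rpc bdev_malloc_create 128 512 -b Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode0 -t VFIOUSER \
        -a /root/vhost_test/vms/0/muser/domain/muser0/0 -s 0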
00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.887 00:14:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:06:41.887 [2024-10-09 00:14:12.370453] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:06:41.887 [2024-10-09 00:14:12.370548] [ DPDK EAL parameters: nvmf --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023150 ] 00:06:41.887 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.146 [2024-10-09 00:14:12.670459] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.404 [2024-10-09 00:14:12.877121] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.404 [2024-10-09 00:14:12.877136] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.404 [2024-10-09 00:14:12.877234] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.404 [2024-10-09 00:14:12.877246] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.662 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.662 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@864 -- # return 0 00:06:42.662 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_transport -t VFIOUSER 00:06:42.920 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@30 -- # timing_exit vfio_user_start 00:06:42.920 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.920 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:06:42.920 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # seq 0 2 00:06:42.920 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no) 00:06:42.920 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/0/muser 00:06:42.920 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/0/muser 00:06:42.920 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/0/muser/domain/muser0/0 00:06:42.920 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode0 -s SPDK000 -a 00:06:43.178 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no )) 00:06:43.178 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create 128 512 -b Malloc0 00:06:43.436 Malloc0 00:06:43.436 00:14:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode0 Malloc0 00:06:43.693 00:14:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # 
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode0 -t VFIOUSER -a /root/vhost_test/vms/0/muser/domain/muser0/0 -s 0 00:06:43.951 00:14:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no) 00:06:43.951 00:14:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/1/muser 00:06:43.951 00:14:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/1/muser 00:06:43.951 00:14:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/1/muser/domain/muser1/1 00:06:43.951 00:14:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a 00:06:44.210 00:14:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no )) 00:06:44.210 00:14:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_create 128 512 -b Malloc1 00:06:44.476 Malloc1 00:06:44.476 00:14:14 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@38 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:06:44.476 00:14:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0 00:06:44.735 00:14:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@27 -- # for i in $(seq 0 $vm_no) 00:06:44.735 00:14:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@28 -- # vm_muser_dir=/root/vhost_test/vms/2/muser 00:06:44.735 00:14:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@29 -- # rm -rf /root/vhost_test/vms/2/muser 00:06:44.735 00:14:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@30 -- # mkdir -p /root/vhost_test/vms/2/muser/domain/muser2/2 00:06:44.735 00:14:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@32 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -s SPDK002 -a 00:06:44.993 00:14:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@33 -- # (( i == vm_no )) 00:06:44.993 00:14:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock load_subsystem_config 00:06:44.993 00:14:15 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:48.377 00:14:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@35 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Nvme0n1 00:06:48.377 00:14:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@40 -- # 
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /root/vhost_test/vms/2/muser/domain/muser2/2 -s 0 00:06:48.377 00:14:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@43 -- # timing_exit start_vfio_user 00:06:48.377 00:14:18 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:48.377 00:14:18 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:06:48.377 00:14:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@45 -- # used_vms= 00:06:48.377 00:14:18 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@46 -- # timing_enter launch_vms 00:06:48.377 00:14:18 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:48.377 00:14:18 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:06:48.377 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # seq 0 2 00:06:48.377 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no) 00:06:48.377 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=0 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=0 00:06:48.377 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@511 -- # xtrace_disable 00:06:48.377 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:06:48.636 WARN: removing existing VM in '/root/vhost_test/vms/0' 00:06:48.636 INFO: Creating new VM in /root/vhost_test/vms/0 00:06:48.636 INFO: No '--os-mode' parameter provided - using 'snapshot' 00:06:48.636 INFO: TASK MASK: 4-5 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@664 -- # local node_num=0 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@665 -- # local boot_disk_present=false 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@666 -- # notice 'NUMA NODE: 0' 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0' 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0' 00:06:48.636 INFO: NUMA NODE: 0 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@667 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize) 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@668 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind") 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@669 -- # [[ snapshot == snapshot ]] 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@669 -- # cmd+=(-snapshot) 00:06:48.636 00:14:19 
vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@670 -- # [[ -n '' ]] 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait") 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # cmd+=(-numa "node,memdev=mem") 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # cmd+=(-pidfile "$qemu_pid_file") 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-serial "file:$vm_dir/serial.log") 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-D "$vm_dir/qemu.log") 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios") 00:06:48.636 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-net nic) 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # [[ -z '' ]] 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-drive "file=$os,if=none,id=os_disk") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # (( 1 == 0 )) 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # (( 1 == 0 )) 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # for disk in "${disks[@]}" 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@694 -- # IFS=, 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@694 -- # read -r disk disk_type _ 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@695 -- # [[ -z '' ]] 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@695 -- # disk_type=vfio_user 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@697 -- # case $disk_type in 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@751 -- # notice 'using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl' 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl' 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl' 00:06:48.637 INFO: using socket /root/vhost_test/vms/0/domain/muser0/0/cntrl 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@752 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl") 00:06:48.637 00:14:19 
vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@753 -- # [[ 0 == '' ]] 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@773 -- # [[ -n '' ]] 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@778 -- # (( 0 )) 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@779 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh' 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh' 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh' 00:06:48.637 INFO: Saving to /root/vhost_test/vms/0/run.sh 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # cat 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # printf '%s\n' taskset -a -c 4-5 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/0/muser/domain/muser0/0/cntrl 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@817 -- # chmod +x /root/vhost_test/vms/0/run.sh 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@820 -- # echo 10000 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@821 -- # echo 10001 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@822 -- # echo 10002 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # rm -f /root/vhost_test/vms/0/migration_port 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@825 -- # [[ -z '' ]] 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10004 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 100 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@830 -- # [[ -z '' ]] 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # [[ -z '' ]] 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 0' 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no) 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- 
nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=1 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@511 -- # xtrace_disable 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:06:48.637 WARN: removing existing VM in '/root/vhost_test/vms/1' 00:06:48.637 INFO: Creating new VM in /root/vhost_test/vms/1 00:06:48.637 INFO: No '--os-mode' parameter provided - using 'snapshot' 00:06:48.637 INFO: TASK MASK: 6-7 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@664 -- # local node_num=0 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@665 -- # local boot_disk_present=false 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@666 -- # notice 'NUMA NODE: 0' 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0' 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0' 00:06:48.637 INFO: NUMA NODE: 0 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@667 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize) 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@668 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@669 -- # [[ snapshot == snapshot ]] 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@669 -- # cmd+=(-snapshot) 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@670 -- # [[ -n '' ]] 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # cmd+=(-numa "node,memdev=mem") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # cmd+=(-pidfile "$qemu_pid_file") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-serial "file:$vm_dir/serial.log") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-D "$vm_dir/qemu.log") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-net nic) 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # [[ -z '' ]] 00:06:48.637 
00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-drive "file=$os,if=none,id=os_disk") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # (( 1 == 0 )) 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # (( 1 == 0 )) 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # for disk in "${disks[@]}" 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@694 -- # IFS=, 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@694 -- # read -r disk disk_type _ 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@695 -- # [[ -z '' ]] 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@695 -- # disk_type=vfio_user 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@697 -- # case $disk_type in 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@751 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl' 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl' 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl' 00:06:48.637 INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@752 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl") 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@753 -- # [[ 1 == '' ]] 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@773 -- # [[ -n '' ]] 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@778 -- # (( 0 )) 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@779 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh' 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh' 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:06:48.637 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh' 00:06:48.637 INFO: Saving to /root/vhost_test/vms/1/run.sh 00:06:48.637 00:14:19 
vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # cat 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@817 -- # chmod +x /root/vhost_test/vms/1/run.sh 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@820 -- # echo 10100 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@821 -- # echo 10101 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@822 -- # echo 10102 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # rm -f /root/vhost_test/vms/1/migration_port 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@825 -- # [[ -z '' ]] 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10104 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 101 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@830 -- # [[ -z '' ]] 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # [[ -z '' ]] 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 1' 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@47 -- # for i in $(seq 0 $vm_no) 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@48 -- # vm_setup --disk-type=vfio_user --force=2 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --memory=768 --disks=2 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@511 -- # xtrace_disable 00:06:48.638 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:06:48.638 WARN: removing existing VM in '/root/vhost_test/vms/2' 00:06:48.638 INFO: Creating new VM in /root/vhost_test/vms/2 00:06:48.638 INFO: No '--os-mode' parameter provided - using 'snapshot' 00:06:48.638 INFO: TASK MASK: 8-9 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@664 -- # local node_num=0 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@665 -- # local boot_disk_present=false 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@666 -- # notice 'NUMA NODE: 0' 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0' 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:06:48.897 00:14:19 
vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0' 00:06:48.897 INFO: NUMA NODE: 0 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@667 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize) 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@668 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind") 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@669 -- # [[ snapshot == snapshot ]] 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@669 -- # cmd+=(-snapshot) 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@670 -- # [[ -n '' ]] 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@671 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait") 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@672 -- # cmd+=(-numa "node,memdev=mem") 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@673 -- # cmd+=(-pidfile "$qemu_pid_file") 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@674 -- # cmd+=(-serial "file:$vm_dir/serial.log") 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@675 -- # cmd+=(-D "$vm_dir/qemu.log") 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@676 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios") 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@677 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765") 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@678 -- # cmd+=(-net nic) 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@679 -- # [[ -z '' ]] 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@680 -- # cmd+=(-drive "file=$os,if=none,id=os_disk") 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@681 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0") 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@684 -- # (( 1 == 0 )) 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@686 -- # (( 1 == 0 )) 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@691 -- # for disk in "${disks[@]}" 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@694 -- # IFS=, 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@694 -- # read -r disk disk_type _ 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@695 -- # [[ -z '' ]] 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@695 -- # disk_type=vfio_user 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@697 -- # case $disk_type in 00:06:48.897 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@751 -- # notice 'using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl' 00:06:48.898 00:14:19 
vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl' 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl' 00:06:48.898 INFO: using socket /root/vhost_test/vms/2/domain/muser2/2/cntrl 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@752 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl") 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@753 -- # [[ 2 == '' ]] 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@773 -- # [[ -n '' ]] 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@778 -- # (( 0 )) 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@779 -- # notice 'Saving to /root/vhost_test/vms/2/run.sh' 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/2/run.sh' 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/2/run.sh' 00:06:48.898 INFO: Saving to /root/vhost_test/vms/2/run.sh 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # cat 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@780 -- # printf '%s\n' taskset -a -c 8-9 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 768 --enable-kvm -cpu host -smp 2 -vga std -vnc :102 -daemonize -object memory-backend-file,id=mem,size=768M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10202,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/2/qemu.pid -serial file:/root/vhost_test/vms/2/serial.log -D /root/vhost_test/vms/2/qemu.log -chardev file,path=/root/vhost_test/vms/2/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10200-:22,hostfwd=tcp::10201-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/2/muser/domain/muser2/2/cntrl 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@817 -- # chmod +x /root/vhost_test/vms/2/run.sh 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@820 -- # echo 10200 00:06:48.898 
00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@821 -- # echo 10201 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@822 -- # echo 10202 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@824 -- # rm -f /root/vhost_test/vms/2/migration_port 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@825 -- # [[ -z '' ]] 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@827 -- # echo 10204 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@828 -- # echo 102 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@830 -- # [[ -z '' ]] 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@831 -- # [[ -z '' ]] 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@49 -- # used_vms+=' 2' 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@52 -- # vm_run 0 1 2 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@835 -- # local OPTIND optchar vm 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@836 -- # local run_all=false 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@837 -- # local vms_to_run= 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@839 -- # getopts a-: optchar 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@849 -- # false 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@852 -- # shift 0 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@853 -- # for vm in "$@" 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@854 -- # vm_num_is_valid 0 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@855 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]] 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@859 -- # vms_to_run+=' 0' 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@853 -- # for vm in "$@" 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@854 -- # vm_num_is_valid 0 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@855 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]] 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@859 -- # vms_to_run+=' 1' 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@853 -- # for vm in "$@" 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@854 -- # vm_num_is_valid 0 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@855 -- # [[ ! 
-x /root/vhost_test/vms/2/run.sh ]] 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@859 -- # vms_to_run+=' 2' 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@863 -- # for vm in $vms_to_run 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@864 -- # vm_is_running 0 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@362 -- # vm_num_is_valid 0 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/0 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]] 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@366 -- # return 1 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@869 -- # notice 'running /root/vhost_test/vms/0/run.sh' 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh' 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh' 00:06:48.898 INFO: running /root/vhost_test/vms/0/run.sh 00:06:48.898 00:14:19 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # /root/vhost_test/vms/0/run.sh 00:06:48.898 Running VM in /root/vhost_test/vms/0 00:06:49.157 Waiting for QEMU pid file 00:06:49.416 [2024-10-09 00:14:19.812629] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: enabling controller 00:06:50.353 === qemu.log === 00:06:50.353 === qemu.log === 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@863 -- # for vm in $vms_to_run 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@864 -- # vm_is_running 1 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@366 -- # return 1 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@869 -- # notice 'running /root/vhost_test/vms/1/run.sh' 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh' 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh' 00:06:50.353 INFO: running /root/vhost_test/vms/1/run.sh 00:06:50.353 00:14:20 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # /root/vhost_test/vms/1/run.sh 00:06:50.353 Running VM in /root/vhost_test/vms/1 00:06:50.353 Waiting for QEMU pid file 00:06:50.611 [2024-10-09 00:14:21.104683] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller 00:06:51.546 === qemu.log === 00:06:51.546 === qemu.log === 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@863 -- # for vm in $vms_to_run 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@864 -- # vm_is_running 2 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@362 -- # vm_num_is_valid 2 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/2 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/2/qemu.pid ]] 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@366 -- # return 1 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@869 -- # notice 'running /root/vhost_test/vms/2/run.sh' 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/2/run.sh' 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/2/run.sh' 00:06:51.546 INFO: running /root/vhost_test/vms/2/run.sh 00:06:51.546 00:14:21 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@870 -- # /root/vhost_test/vms/2/run.sh 00:06:51.546 Running VM in /root/vhost_test/vms/2 00:06:51.804 Waiting for QEMU pid file 00:06:51.804 [2024-10-09 00:14:22.401453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: enabling controller 00:06:52.738 === qemu.log === 00:06:52.738 === qemu.log === 00:06:52.738 00:14:23 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@53 -- # vm_wait_for_boot 60 0 1 2 00:06:52.738 00:14:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@906 -- # assert_number 60 00:06:52.738 00:14:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@274 -- # [[ 60 =~ [0-9]+ ]] 00:06:52.738 00:14:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@274 -- # return 0 00:06:52.738 00:14:23 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@908 -- # xtrace_disable 00:06:52.738 00:14:23 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:06:52.738 INFO: Waiting for VMs to boot 00:06:52.738 INFO: waiting for VM0 (/root/vhost_test/vms/0) 00:07:00.852 [2024-10-09 00:14:31.000595] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller 00:07:00.852 [2024-10-09 00:14:31.020724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller 00:07:00.852 [2024-10-09 00:14:31.024736] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: enabling controller 00:07:01.420 [2024-10-09 00:14:31.765356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller 00:07:01.420 [2024-10-09 00:14:31.774414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller 00:07:01.420 [2024-10-09 00:14:31.778451] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller 00:07:02.354 [2024-10-09 00:14:32.981471] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller 00:07:02.613 [2024-10-09 00:14:32.990503] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller 00:07:02.613 [2024-10-09 00:14:32.994536] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: enabling controller 
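The long printf captured above is where vhost/common.sh freezes the assembled cmd array into the per-VM run.sh, and the enable_ctrlr notices that follow are the vfio-user handshake as each QEMU attaches to the socket SPDK exported. Two details are easy to lose in the noise: every per-VM port is derived from the VM number, and the NVMe controller arrives as a plain -device vfio-user-pci entry. A condensed sketch, with the 10000 + 100*n scheme inferred from the values in this trace rather than quoted from the script:

# per-VM port layout as observed for VM 2 (inferred scheme: 10000 + 100*vm_num + offset)
VM_DIR=/root/vhost_test/vms
vm_num=2
ssh_socket=$((10000 + 100 * vm_num))    # 10200, forwarded to guest port 22
fio_socket=$((ssh_socket + 1))          # 10201, forwarded to guest port 8765
monitor_port=$((ssh_socket + 2))        # 10202, QEMU telnet monitor
# a fourth socket (10204, offset +4) is also written; its role is not visible in this trace
vnc_socket=$((100 + vm_num))            # -vnc :102

# the vfio-user attachment itself, exactly as it lands in run.sh
cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$vm_num/$vm_num/cntrl")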
00:07:14.824 00:07:14.824 INFO: VM0 ready 00:07:14.824 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:07:14.824 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:07:14.824 INFO: waiting for VM1 (/root/vhost_test/vms/1) 00:07:15.083 00:07:15.083 INFO: VM1 ready 00:07:15.083 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:07:15.344 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:07:15.911 INFO: waiting for VM2 (/root/vhost_test/vms/2) 00:07:16.479 00:07:16.479 INFO: VM2 ready 00:07:16.479 Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts. 00:07:16.745 Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts. 00:07:17.685 INFO: all VMs ready 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@966 -- # return 0 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@55 -- # timing_exit launch_vms 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@57 -- # timing_enter run_vm_cmd 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@59 -- # fio_disks= 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_0_qemu_mask 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-0-4-5 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 0 'hostname VM-0-4-5' 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=0 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:07:17.685 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'hostname VM-0-4-5' 00:07:17.685 Warning: Permanently added 
'[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 0 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@970 -- # local OPTIND optchar 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@971 -- # local readonly= 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@972 -- # local fio_bin= 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@973 -- # getopts :-: optchar 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@974 -- # case "$optchar" in 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@976 -- # case "$OPTARG" in 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local fio_bin=/usr/src/fio-static/fio 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@973 -- # getopts :-: optchar 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@986 -- # shift 1 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@987 -- # for vm_num in "$@" 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@988 -- # notice 'Starting fio server on VM0' 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM0' 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM0' 00:07:17.945 INFO: Starting fio server on VM0 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@989 -- # [[ /usr/src/fio-static/fio != '' ]] 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@990 -- # vm_exec 0 'cat > /root/fio; chmod +x /root/fio' 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=0 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:07:17.945 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:07:17.945 00:14:48 
vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/fio; chmod +x /root/fio' 00:07:17.945 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:07:18.210 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@991 -- # vm_exec 0 /root/fio --eta=never --server --daemonize=/root/fio.pid 00:07:18.210 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:07:18.210 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:18.210 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:18.210 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=0 00:07:18.210 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:18.210 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:07:18.210 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:07:18.210 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:18.210 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:18.210 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:07:18.210 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:07:18.210 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid 00:07:18.210 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
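Every guest interaction in this trace, the hostname calls, the file pushes and the fio runs, funnels through the same vm_exec helper: it reads the VM's ssh_socket file and shells into 127.0.0.1 on that forwarded port as root via sshpass. The ssh options below are copied from the expanded command in the log; the wrapper itself is a simplified reconstruction (the real vhost/common.sh also validates the VM number first, which is what the repeated vm_num_is_valid calls are):

#!/usr/bin/env bash
VM_DIR=/root/vhost_test/vms

vm_ssh_socket() { cat "$VM_DIR/$1/ssh_socket"; }   # e.g. 10000 for VM 0

vm_exec() {
    local vm_num=$1; shift
    # stdin passes straight through ssh, which is how binaries and job
    # files get copied into the guests later in this log
    sshpass -p root ssh -o UserKnownHostsFile=/dev/null \
        -o StrictHostKeyChecking=no -o User=root \
        -p "$(vm_ssh_socket "$vm_num")" 127.0.0.1 "$@"
}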
00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 0 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1038 -- # vm_exec 0 'grep -l SPDK /sys/class/nvme/*/model' 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1038 -- # awk -F/ '{print $5"n1"}' 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=0 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:07:18.472 00:14:48 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model' 00:07:18.472 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1038 -- # SCSI_DISK=nvme0n1 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1039 -- # [[ -z nvme0n1 ]] 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=0:/dev/nvme0n1' 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_1_qemu_mask 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-1-6-7 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 1 'hostname VM-1-6-7' 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=1 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:07:18.472 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7' 00:07:18.732 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
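vm_check_nvme_location, traced just above for VM0, is how the test figures out which guest block device is the vfio-user controller: inside the VM it greps /sys/class/nvme/*/model for the SPDK model string, and awk rewrites the matching sysfs path into a namespace name. For /sys/class/nvme/nvme0/model, field 5 of the /-split path is nvme0, so appending "n1" gives nvme0n1, which is then recorded as --vm=0:/dev/nvme0n1 for fio. The pipeline standalone, run inside a guest:

# find the SPDK-backed NVMe controller and derive its first namespace
SCSI_DISK=$(grep -l SPDK /sys/class/nvme/*/model | awk -F/ '{print $5"n1"}')
# /sys/class/nvme/nvme0/model -> $5 = "nvme0" -> nvme0n1
[[ -z $SCSI_DISK ]] && { echo "no SPDK NVMe controller visible" >&2; exit 1; }
echo "/dev/$SCSI_DISK"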
00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@970 -- # local OPTIND optchar 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@971 -- # local readonly= 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@972 -- # local fio_bin= 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@973 -- # getopts :-: optchar 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@974 -- # case "$optchar" in 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@976 -- # case "$OPTARG" in 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local fio_bin=/usr/src/fio-static/fio 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@973 -- # getopts :-: optchar 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@986 -- # shift 1 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@987 -- # for vm_num in "$@" 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@988 -- # notice 'Starting fio server on VM1' 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1' 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1' 00:07:18.732 INFO: Starting fio server on VM1 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@989 -- # [[ /usr/src/fio-static/fio != '' ]] 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@990 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio' 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=1 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:07:18.732 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root 
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio' 00:07:18.990 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:07:18.990 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@991 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid 00:07:18.990 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:07:18.990 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.990 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:18.990 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=1 00:07:18.990 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:18.990 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:07:18.990 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:07:18.990 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.990 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:18.990 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:07:18.990 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:07:18.990 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid 00:07:19.249 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
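Starting the fio servers is the same two-step for every VM: stream the host's static fio binary into the guest over ssh stdin, then launch it daemonized in server mode so the host-side fio can drive it remotely. A sketch using the vm_exec reconstruction above, with paths taken from the trace (the stdin redirect is inferred from the --fio-bin argument, not visible verbatim in the log):

# push the host fio binary into the guest over the ssh stdin channel
vm_exec 1 'cat > /root/fio; chmod +x /root/fio' < /usr/src/fio-static/fio
# daemonize it in server mode; it listens on fio's default port 8765,
# which QEMU's -net user config forwards from host port 10101 for VM 1
vm_exec 1 '/root/fio --eta=never --server --daemonize=/root/fio.pid'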
00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 1 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1038 -- # vm_exec 1 'grep -l SPDK /sys/class/nvme/*/model' 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1038 -- # awk -F/ '{print $5"n1"}' 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=1 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:07:19.249 00:14:49 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model' 00:07:19.249 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1038 -- # SCSI_DISK=nvme0n1 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1039 -- # [[ -z nvme0n1 ]] 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=1:/dev/nvme0n1' 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@61 -- # for vm_num in $used_vms 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@62 -- # qemu_mask_param=VM_2_qemu_mask 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@64 -- # host_name=VM-2-8-9 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@65 -- # vm_exec 2 'hostname VM-2-8-9' 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 2 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=2 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 2 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 2 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/2 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/2/ssh_socket 00:07:19.508 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'hostname VM-2-8-9' 00:07:19.508 Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts. 
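The hostnames are not arbitrary: VM-0-4-5, VM-1-6-7 and VM-2-8-9 encode the VM number plus the host CPUs its QEMU is pinned to (VM 2's run.sh starts with taskset -a -c 8-9). Only the inputs and outputs are visible in this log, so the derivation below is an assumed reconstruction, not the script's actual code:

# assumed: the name is built from the VM number and its CPU mask
vm_num=2
cpus="8-9"                             # matches 'taskset -a -c 8-9' in VM 2's run.sh
host_name="VM-${vm_num}-${cpus/,/-}"   # "8,9" or "8-9" both yield VM-2-8-9
vm_exec "$vm_num" "hostname $host_name"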
00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@66 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 2 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@970 -- # local OPTIND optchar 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@971 -- # local readonly= 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@972 -- # local fio_bin= 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@973 -- # getopts :-: optchar 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@974 -- # case "$optchar" in 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@976 -- # case "$OPTARG" in 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@977 -- # local fio_bin=/usr/src/fio-static/fio 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@973 -- # getopts :-: optchar 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@986 -- # shift 1 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@987 -- # for vm_num in "$@" 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@988 -- # notice 'Starting fio server on VM2' 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM2' 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM2' 00:07:19.768 INFO: Starting fio server on VM2 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@989 -- # [[ /usr/src/fio-static/fio != '' ]] 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@990 -- # vm_exec 2 'cat > /root/fio; chmod +x /root/fio' 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 2 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=2 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 2 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 2 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/2 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/2/ssh_socket 00:07:19.768 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root 
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'cat > /root/fio; chmod +x /root/fio' 00:07:19.768 Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts. 00:07:20.026 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@991 -- # vm_exec 2 /root/fio --eta=never --server --daemonize=/root/fio.pid 00:07:20.026 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 2 00:07:20.026 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.026 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:20.026 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=2 00:07:20.026 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:20.026 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 2 00:07:20.026 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 2 00:07:20.027 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.027 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:20.027 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/2 00:07:20.027 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/2/ssh_socket 00:07:20.027 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid 00:07:20.027 Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts. 
00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@67 -- # vm_check_nvme_location 2 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1038 -- # vm_exec 2 'grep -l SPDK /sys/class/nvme/*/model' 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1038 -- # awk -F/ '{print $5"n1"}' 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 2 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=2 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 2 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 2 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/2 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/2/ssh_socket 00:07:20.296 00:14:50 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'grep -l SPDK /sys/class/nvme/*/model' 00:07:20.296 Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts. 
00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1038 -- # SCSI_DISK=nvme0n1 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1039 -- # [[ -z nvme0n1 ]] 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # printf :/dev/%s nvme0n1 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@69 -- # fio_disks+=' --vm=2:/dev/nvme0n1' 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@72 -- # job_file=default_integrity.job 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@73 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=0:/dev/nvme0n1 --vm=1:/dev/nvme0n1 --vm=2:/dev/nvme0n1 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1046 -- # local arg 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1047 -- # local job_file= 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1048 -- # local fio_bin= 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1049 -- # vms=() 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1049 -- # local vms 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1050 -- # local out= 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1051 -- # local vm 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1052 -- # local run_server_mode=true 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1053 -- # local run_plugin_mode=false 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1054 -- # local fio_start_cmd 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1055 -- # local fio_output_format=normal 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1056 -- # local fio_gtod_reduce=false 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1057 -- # local wait_for_fio=true 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1059 -- # for arg in "$@" 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1060 -- # case "$arg" in 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1062 -- # local fio_bin=/usr/src/fio-static/fio 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1059 -- # for arg in "$@" 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1060 -- # case "$arg" in 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1061 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1059 -- # for arg in "$@" 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1060 -- # case "$arg" in 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1065 -- # local out=/root/vhost_test/fio_results 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1066 -- # mkdir -p /root/vhost_test/fio_results 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1059 -- # for arg in "$@" 00:07:20.555 00:14:51 
vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1060 -- # case "$arg" in 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1063 -- # vms+=("${arg#*=}") 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1059 -- # for arg in "$@" 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1060 -- # case "$arg" in 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1063 -- # vms+=("${arg#*=}") 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1059 -- # for arg in "$@" 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1060 -- # case "$arg" in 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1063 -- # vms+=("${arg#*=}") 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1085 -- # [[ -n /usr/src/fio-static/fio ]] 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1085 -- # [[ ! -r /usr/src/fio-static/fio ]] 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1090 -- # [[ -z /usr/src/fio-static/fio ]] 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1094 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]] 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1099 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never ' 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1101 -- # local job_fname 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1102 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1102 -- # job_fname=default_integrity.job 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1103 -- # log_fname=default_integrity.log 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1104 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal ' 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1107 -- # for vm in "${vms[@]}" 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1108 -- # local vm_num=0 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1109 -- # local vmdisks=/dev/nvme0n1 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1111 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=0)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1112 -- # vm_exec 0 'cat > /root/default_integrity.job' 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=0 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:07:20.555 00:14:51 
vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:07:20.555 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'cat > /root/default_integrity.job' 00:07:20.555 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # false 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # vm_exec 0 cat /root/default_integrity.job 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=0 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:07:20.814 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 cat /root/default_integrity.job 00:07:20.814 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
00:07:21.072 [global] 00:07:21.072 blocksize_range=4k-512k 00:07:21.072 iodepth=512 00:07:21.072 iodepth_batch=128 00:07:21.072 iodepth_low=256 00:07:21.072 ioengine=libaio 00:07:21.072 size=1G 00:07:21.072 io_size=4G 00:07:21.072 filename=/dev/nvme0n1 00:07:21.072 group_reporting 00:07:21.072 thread 00:07:21.072 numjobs=1 00:07:21.072 direct=1 00:07:21.072 rw=randwrite 00:07:21.072 do_verify=1 00:07:21.072 verify=md5 00:07:21.072 verify_backlog=1024 00:07:21.072 fsync_on_close=1 00:07:21.072 verify_state_save=0 00:07:21.072 [nvme-host] 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1120 -- # true 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # vm_fio_socket 0 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 0 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/0 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/0/fio_socket 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # fio_start_cmd+='--client=127.0.0.1,10001 --remote-config /root/default_integrity.job ' 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1124 -- # true 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1107 -- # for vm in "${vms[@]}" 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1108 -- # local vm_num=1 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1109 -- # local vmdisks=/dev/nvme0n1 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1111 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1112 -- # vm_exec 1 'cat > /root/default_integrity.job' 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=1 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:07:21.072 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 
User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job' 00:07:21.072 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # false 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # vm_exec 1 cat /root/default_integrity.job 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=1 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:07:21.331 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job 00:07:21.331 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
00:07:21.590 [global] 00:07:21.590 blocksize_range=4k-512k 00:07:21.590 iodepth=512 00:07:21.590 iodepth_batch=128 00:07:21.590 iodepth_low=256 00:07:21.590 ioengine=libaio 00:07:21.590 size=1G 00:07:21.590 io_size=4G 00:07:21.590 filename=/dev/nvme0n1 00:07:21.590 group_reporting 00:07:21.590 thread 00:07:21.590 numjobs=1 00:07:21.590 direct=1 00:07:21.590 rw=randwrite 00:07:21.590 do_verify=1 00:07:21.590 verify=md5 00:07:21.590 verify_backlog=1024 00:07:21.590 fsync_on_close=1 00:07:21.590 verify_state_save=0 00:07:21.590 [nvme-host] 00:07:21.590 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1120 -- # true 00:07:21.590 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # vm_fio_socket 1 00:07:21.590 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1 00:07:21.590 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.590 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/fio_socket 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job ' 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1124 -- # true 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1107 -- # for vm in "${vms[@]}" 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1108 -- # local vm_num=2 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1109 -- # local vmdisks=/dev/nvme0n1 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1111 -- # sed 's@filename=@filename=/dev/nvme0n1@;s@description=\(.*\)@description=\1 (VM=2)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1112 -- # vm_exec 2 'cat > /root/default_integrity.job' 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 2 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=2 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 2 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 2 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/2 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/2/ssh_socket 00:07:21.591 00:14:51 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o 
User=root -p 10200 127.0.0.1 'cat > /root/default_integrity.job' 00:07:21.591 Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts. 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1114 -- # false 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1118 -- # vm_exec 2 cat /root/default_integrity.job 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 2 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=2 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 2 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 2 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/2 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/2/ssh_socket 00:07:21.591 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 cat /root/default_integrity.job 00:07:21.850 Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts. 
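Note: the job file, dumped once per VM in this trace, is fio's standard integrity workload: random writes in 4k-512k blocks at queue depth 512, with every written block read back and md5-verified (verify_backlog=1024 interleaves verification after each 1024 blocks instead of deferring it all to the end of the run). For orientation only, a hypothetical one-shot command-line equivalent; fio accepts any job-file key as a --key flag, and the harness actually drives the job file remotely, as the next entries show:

  # Hypothetical flag-for-key translation of the [nvme-host] job above
  # (numjobs=1 is fio's default and is omitted).
  fio --name=nvme-host --thread --direct=1 --ioengine=libaio \
      --filename=/dev/nvme0n1 --rw=randwrite --blocksize_range=4k-512k \
      --iodepth=512 --iodepth_batch=128 --iodepth_low=256 \
      --size=1G --io_size=4G --do_verify=1 --verify=md5 \
      --verify_backlog=1024 --verify_state_save=0 --fsync_on_close=1 \
      --group_reporting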
00:07:21.850 [global] 00:07:21.850 blocksize_range=4k-512k 00:07:21.850 iodepth=512 00:07:21.850 iodepth_batch=128 00:07:21.850 iodepth_low=256 00:07:21.850 ioengine=libaio 00:07:21.850 size=1G 00:07:21.850 io_size=4G 00:07:21.850 filename=/dev/nvme0n1 00:07:21.850 group_reporting 00:07:21.850 thread 00:07:21.850 numjobs=1 00:07:21.850 direct=1 00:07:21.850 rw=randwrite 00:07:21.850 do_verify=1 00:07:21.850 verify=md5 00:07:21.850 verify_backlog=1024 00:07:21.850 fsync_on_close=1 00:07:21.850 verify_state_save=0 00:07:21.850 [nvme-host] 00:07:21.850 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1120 -- # true 00:07:21.850 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # vm_fio_socket 2 00:07:21.850 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@319 -- # vm_num_is_valid 2 00:07:21.850 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.850 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:21.850 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/2 00:07:21.850 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/2/fio_socket 00:07:21.850 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1121 -- # fio_start_cmd+='--client=127.0.0.1,10201 --remote-config /root/default_integrity.job ' 00:07:21.850 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1124 -- # true 00:07:21.850 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1140 -- # true 00:07:21.850 00:14:52 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1154 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10001 --remote-config /root/default_integrity.job --client=127.0.0.1,10101 --remote-config /root/default_integrity.job --client=127.0.0.1,10201 --remote-config /root/default_integrity.job 00:07:34.064 00:15:03 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1155 -- # sleep 1 00:07:34.064 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1157 -- # [[ normal == \j\s\o\n ]] 00:07:34.064 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1165 -- # [[ ! 
-n '' ]] 00:07:34.064 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@1166 -- # cat /root/vhost_test/fio_results/default_integrity.log 00:07:34.064 hostname=VM-2-8-9, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1 00:07:34.064 hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1 00:07:34.064 hostname=VM-0-4-5, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1 00:07:34.064 nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512 00:07:34.064 nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512 00:07:34.064 nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512 00:07:34.064 Starting 1 thread 00:07:34.064 Starting 1 thread 00:07:34.064 Starting 1 thread 00:07:34.064 00:07:34.064 nvme-host: (groupid=0, jobs=1): err= 0: pid=950: Wed Oct 9 00:15:03 2024 00:07:34.064 read: IOPS=1176, BW=229MiB/s (240MB/s)(2072MiB/9042msec) 00:07:34.064 slat (usec): min=18, max=20394, avg=9374.78, stdev=5961.32 00:07:34.064 clat (usec): min=742, max=43946, avg=18448.61, stdev=10787.43 00:07:34.064 lat (usec): min=4155, max=44088, avg=27823.39, stdev=9979.40 00:07:34.064 clat percentiles (usec): 00:07:34.064 | 1.00th=[ 2573], 5.00th=[ 4424], 10.00th=[ 5604], 20.00th=[ 8029], 00:07:34.064 | 30.00th=[ 9896], 40.00th=[11994], 50.00th=[18220], 60.00th=[21890], 00:07:34.064 | 70.00th=[24511], 80.00th=[27657], 90.00th=[36439], 95.00th=[38011], 00:07:34.064 | 99.00th=[39584], 99.50th=[40109], 99.90th=[43779], 99.95th=[43779], 00:07:34.064 | 99.99th=[43779] 00:07:34.065 write: IOPS=2477, BW=483MiB/s (506MB/s)(2072MiB/4293msec); 0 zone resets 00:07:34.065 slat (usec): min=221, max=65119, avg=22584.44, stdev=14217.44 00:07:34.065 clat (usec): min=288, max=162938, avg=53235.57, stdev=39857.43 00:07:34.065 lat (msec): min=3, max=170, avg=75.82, stdev=44.43 00:07:34.065 clat percentiles (msec): 00:07:34.065 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 11], 00:07:34.065 | 30.00th=[ 17], 40.00th=[ 34], 50.00th=[ 50], 60.00th=[ 57], 00:07:34.065 | 70.00th=[ 80], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 120], 00:07:34.065 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 163], 99.95th=[ 163], 00:07:34.065 | 99.99th=[ 163] 00:07:34.065 bw ( KiB/s): min=157144, max=314288, per=45.93%, avg=226950.89, stdev=80309.29, samples=18 00:07:34.065 iops : min= 788, max= 1576, avg=1138.00, stdev=402.66, samples=18 00:07:34.065 lat (usec) : 500=0.20%, 750=0.47% 00:07:34.065 lat (msec) : 4=1.92%, 10=21.43%, 20=19.31%, 50=32.52%, 100=14.44% 00:07:34.065 lat (msec) : 250=9.72% 00:07:34.065 cpu : usr=84.16%, sys=1.88%, ctx=1223, majf=0, minf=16 00:07:34.065 IO depths : 1=0.0%, 2=0.6%, 4=1.2%, 8=1.8%, 16=3.6%, 32=7.8%, >=64=84.8% 00:07:34.065 submit : 0=0.0%, 4=1.8%, 8=1.8%, 16=3.2%, 32=6.4%, 64=11.8%, >=64=75.0% 00:07:34.065 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:07:34.065 issued rwts: total=10638,10638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:34.065 latency : target=0, window=0, percentile=100.00%, depth=512 00:07:34.065 00:07:34.065 Run status group 0 (all jobs): 00:07:34.065 READ: bw=229MiB/s (240MB/s), 229MiB/s-229MiB/s (240MB/s-240MB/s), io=2072MiB (2172MB), run=9042-9042msec 00:07:34.065 WRITE: bw=483MiB/s (506MB/s), 483MiB/s-483MiB/s (506MB/s-506MB/s), io=2072MiB (2172MB), run=4293-4293msec 00:07:34.065 00:07:34.065 Disk stats (read/write): 00:07:34.065 
nvme0n1: ios=78/0, merge=0/0, ticks=9/0, in_queue=9, util=25.36% 00:07:34.065 00:07:34.065 nvme-host: (groupid=0, jobs=1): err= 0: pid=951: Wed Oct 9 00:15:03 2024 00:07:34.065 read: IOPS=1295, BW=217MiB/s (228MB/s)(2048MiB/9421msec) 00:07:34.065 slat (usec): min=37, max=27327, avg=9027.60, stdev=6199.43 00:07:34.065 clat (msec): min=4, max=315, avg=139.77, stdev=64.84 00:07:34.065 lat (msec): min=9, max=331, avg=148.80, stdev=65.27 00:07:34.065 clat percentiles (msec): 00:07:34.065 | 1.00th=[ 9], 5.00th=[ 28], 10.00th=[ 61], 20.00th=[ 84], 00:07:34.065 | 30.00th=[ 105], 40.00th=[ 123], 50.00th=[ 138], 60.00th=[ 155], 00:07:34.065 | 70.00th=[ 171], 80.00th=[ 194], 90.00th=[ 228], 95.00th=[ 255], 00:07:34.065 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 313], 99.95th=[ 313], 00:07:34.065 | 99.99th=[ 317] 00:07:34.065 write: IOPS=1378, BW=231MiB/s (243MB/s)(2048MiB/8853msec); 0 zone resets 00:07:34.065 slat (usec): min=227, max=85762, avg=23251.59, stdev=15017.69 00:07:34.065 clat (msec): min=8, max=277, avg=119.40, stdev=58.46 00:07:34.065 lat (msec): min=9, max=308, avg=142.65, stdev=62.04 00:07:34.065 clat percentiles (msec): 00:07:34.065 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 44], 20.00th=[ 68], 00:07:34.065 | 30.00th=[ 84], 40.00th=[ 99], 50.00th=[ 113], 60.00th=[ 128], 00:07:34.065 | 70.00th=[ 148], 80.00th=[ 169], 90.00th=[ 207], 95.00th=[ 224], 00:07:34.065 | 99.00th=[ 259], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 279], 00:07:34.065 | 99.99th=[ 279] 00:07:34.065 bw ( KiB/s): min= 6936, max=472048, per=98.18%, avg=232577.18, stdev=118388.40, samples=17 00:07:34.065 iops : min= 38, max= 2048, avg=1320.71, stdev=596.71, samples=17 00:07:34.065 lat (msec) : 10=1.51%, 20=1.61%, 50=6.43%, 100=24.83%, 250=61.67% 00:07:34.065 lat (msec) : 500=3.94% 00:07:34.065 cpu : usr=83.16%, sys=1.87%, ctx=1110, majf=0, minf=34 00:07:34.065 IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1% 00:07:34.065 submit : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6% 00:07:34.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:34.065 issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:34.065 latency : target=0, window=0, percentile=100.00%, depth=512 00:07:34.065 00:07:34.065 Run status group 0 (all jobs): 00:07:34.065 READ: bw=217MiB/s (228MB/s), 217MiB/s-217MiB/s (228MB/s-228MB/s), io=2048MiB (2147MB), run=9421-9421msec 00:07:34.065 WRITE: bw=231MiB/s (243MB/s), 231MiB/s-231MiB/s (243MB/s-243MB/s), io=2048MiB (2147MB), run=8853-8853msec 00:07:34.065 00:07:34.065 Disk stats (read/write): 00:07:34.065 nvme0n1: ios=5/0, merge=0/0, ticks=0/0, in_queue=0, util=23.50% 00:07:34.065 00:07:34.065 nvme-host: (groupid=0, jobs=1): err= 0: pid=954: Wed Oct 9 00:15:03 2024 00:07:34.065 read: IOPS=1147, BW=223MiB/s (234MB/s)(2072MiB/9272msec) 00:07:34.065 slat (usec): min=22, max=23010, avg=9334.59, stdev=6346.89 00:07:34.065 clat (usec): min=1081, max=51432, avg=20168.21, stdev=11442.41 00:07:34.065 lat (usec): min=2017, max=52272, avg=29502.80, stdev=11150.11 00:07:34.065 clat percentiles (usec): 00:07:34.065 | 1.00th=[ 1516], 5.00th=[ 6128], 10.00th=[ 8455], 20.00th=[ 9765], 00:07:34.065 | 30.00th=[11207], 40.00th=[13173], 50.00th=[19006], 60.00th=[21365], 00:07:34.065 | 70.00th=[26608], 80.00th=[31851], 90.00th=[35390], 95.00th=[41681], 00:07:34.065 | 99.00th=[48497], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:07:34.065 | 99.99th=[51643] 00:07:34.065 write: IOPS=2407, BW=469MiB/s (492MB/s)(2072MiB/4419msec); 0 zone resets 
00:07:34.065 slat (usec): min=226, max=71683, avg=23582.29, stdev=15002.99 00:07:34.065 clat (msec): min=2, max=165, avg=54.19, stdev=41.03 00:07:34.065 lat (msec): min=2, max=179, avg=77.77, stdev=46.28 00:07:34.065 clat percentiles (msec): 00:07:34.065 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 12], 00:07:34.065 | 30.00th=[ 19], 40.00th=[ 33], 50.00th=[ 49], 60.00th=[ 57], 00:07:34.065 | 70.00th=[ 81], 80.00th=[ 100], 90.00th=[ 109], 95.00th=[ 128], 00:07:34.065 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 165], 99.95th=[ 165], 00:07:34.065 | 99.99th=[ 165] 00:07:34.065 bw ( KiB/s): min=157144, max=314288, per=47.27%, avg=226950.89, stdev=80309.29, samples=18 00:07:34.065 iops : min= 788, max= 1576, avg=1138.00, stdev=402.66, samples=18 00:07:34.065 lat (msec) : 2=1.11%, 4=1.56%, 10=16.43%, 20=23.62%, 50=33.26% 00:07:34.065 lat (msec) : 100=14.30%, 250=9.72% 00:07:34.065 cpu : usr=82.92%, sys=1.91%, ctx=974, majf=0, minf=16 00:07:34.065 IO depths : 1=0.0%, 2=0.6%, 4=1.2%, 8=1.8%, 16=3.6%, 32=7.8%, >=64=84.8% 00:07:34.065 submit : 0=0.0%, 4=1.8%, 8=1.8%, 16=3.2%, 32=6.4%, 64=11.8%, >=64=75.0% 00:07:34.065 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:07:34.065 issued rwts: total=10638,10638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:34.065 latency : target=0, window=0, percentile=100.00%, depth=512 00:07:34.065 00:07:34.065 Run status group 0 (all jobs): 00:07:34.065 READ: bw=223MiB/s (234MB/s), 223MiB/s-223MiB/s (234MB/s-234MB/s), io=2072MiB (2172MB), run=9272-9272msec 00:07:34.065 WRITE: bw=469MiB/s (492MB/s), 469MiB/s-469MiB/s (492MB/s-492MB/s), io=2072MiB (2172MB), run=4419-4419msec 00:07:34.065 00:07:34.065 Disk stats (read/write): 00:07:34.065 nvme0n1: ios=80/0, merge=0/0, ticks=17/0, in_queue=17, util=27.93% 00:07:34.065 All clients: (groupid=0, jobs=3): err= 0: pid=0: Wed Oct 9 00:15:03 2024 00:07:34.065 read: IOPS=3554, BW=657Mi (689M)(6191MiB/9421msec) 00:07:34.065 slat (usec): min=18, max=27327, avg=9235.43, stdev=6174.43 00:07:34.065 clat (usec): min=742, max=315657, avg=63227.49, stdev=70524.02 00:07:34.065 lat (msec): min=2, max=331, avg=72.46, stdev=70.49 00:07:34.065 write: IOPS=3782, BW=699Mi (733M)(6191MiB/8853msec); 0 zone resets 00:07:34.065 slat (usec): min=221, max=85762, avg=23144.70, stdev=14768.54 00:07:34.065 clat (usec): min=288, max=277041, avg=77660.65, stdev=57315.94 00:07:34.065 lat (msec): min=2, max=308, avg=100.81, stdev=60.96 00:07:34.065 bw ( KiB/s): min=321224, max=1100624, per=56.68%, avg=686478.95, stdev=92355.79, samples=53 00:07:34.065 iops : min= 1614, max= 5200, avg=3596.71, stdev=464.29, samples=53 00:07:34.065 lat (usec) : 500=0.06%, 750=0.15% 00:07:34.065 lat (msec) : 2=0.35%, 4=1.11%, 10=12.58%, 20=14.23%, 50=23.24% 00:07:34.065 lat (msec) : 100=18.18%, 250=28.66%, 500=1.44% 00:07:34.065 cpu : usr=83.41%, sys=1.88%, ctx=3307, majf=0, minf=66 00:07:34.065 IO depths : 1=0.0%, 2=0.4%, 4=0.8%, 8=1.1%, 16=2.3%, 32=5.2%, >=64=90.0% 00:07:34.065 submit : 0=0.0%, 4=1.2%, 8=1.6%, 16=2.1%, 32=4.1%, 64=14.4%, >=64=76.6% 00:07:34.065 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:07:34.065 issued rwts: total=33484,33484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@75 -- # timing_exit run_vm_cmd 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:07:34.065 00:15:04 
vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@77 -- # vm_shutdown_all 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@480 -- # local timeo=90 vms vm 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@482 -- # vms=($(vm_list_all)) 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@482 -- # vm_list_all 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@459 -- # vms=() 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@459 -- # local vms 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@460 -- # vms=("$VM_DIR"/+([0-9])) 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@461 -- # (( 3 > 0 )) 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@462 -- # basename --multiple /root/vhost_test/vms/0 /root/vhost_test/vms/1 /root/vhost_test/vms/2 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@484 -- # for vm in "${vms[@]}" 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@485 -- # vm_shutdown 0 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@410 -- # vm_num_is_valid 0 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@411 -- # local vm_dir=/root/vhost_test/vms/0 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@412 -- # [[ ! -d /root/vhost_test/vms/0 ]] 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_is_running 0 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@362 -- # vm_num_is_valid 0 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.065 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/0 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/0/qemu.pid ]] 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # local vm_pid 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/0/qemu.pid 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # vm_pid=2024426 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # /bin/kill -0 2024426 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 0 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/0' 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/0' 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/0' 00:07:34.066 INFO: Shutting down virtual machine /root/vhost_test/vms/0 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@425 -- # set +e 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@426 -- # vm_exec 0 'nohup sh -c '\''shutdown -h -P now'\''' 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=0 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:07:34.066 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\''' 00:07:34.066 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:07:34.066 Connection to 127.0.0.1 closed by remote host. 
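Note: two details of the shutdown just traced are easy to miss. The power-off is requested from inside the guest (nohup sh -c 'shutdown -h -P now' over SSH), and the harness brackets the call with set +e, because the guest typically drops the SSH session while powering off, which is exactly the "Connection closed by remote host" above and would otherwise abort the script under set -e. A hedged reconstruction of the pattern, inferred from the vhost/common.sh line numbers (425-428) in this trace:

  # Reconstruction, not the verbatim source: tolerate ssh exiting
  # nonzero when the guest drops the session mid-command.
  set +e
  vm_exec 0 'nohup sh -c "shutdown -h -P now"' || true
  set -e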
00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@426 -- # true 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@427 -- # notice 'VM0 is shutting down - wait a while to complete' 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM0 is shutting down - wait a while to complete' 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM0 is shutting down - wait a while to complete' 00:07:34.331 INFO: VM0 is shutting down - wait a while to complete 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@428 -- # set -e 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@484 -- # for vm in "${vms[@]}" 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@485 -- # vm_shutdown 1 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@410 -- # vm_num_is_valid 1 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@411 -- # local vm_dir=/root/vhost_test/vms/1 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@412 -- # [[ ! -d /root/vhost_test/vms/1 ]] 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_is_running 1 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # local vm_pid 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # vm_pid=2024674 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # /bin/kill -0 2024674 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 0 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1' 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1' 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1' 00:07:34.331 INFO: Shutting down virtual machine /root/vhost_test/vms/1 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@425 -- # set +e 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@426 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\''' 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=1 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\''' 00:07:34.331 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
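Note: every SSH and fio connection in this trace resolves its target the same way: the VM number is validated, then the forwarded host port is read from a per-VM socket file. For this run the mapping is fixed per VM (SSH on 10000/10100/10200, fio server on 10001/10101/10201):

  # Per-VM port files as read throughout this log (values from this run):
  cat /root/vhost_test/vms/1/ssh_socket   # 10100 -> guest sshd via 127.0.0.1
  cat /root/vhost_test/vms/1/fio_socket   # 10101 -> guest 'fio --server'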
00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@427 -- # notice 'VM1 is shutting down - wait a while to complete' 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete' 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete' 00:07:34.331 INFO: VM1 is shutting down - wait a while to complete 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@428 -- # set -e 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@484 -- # for vm in "${vms[@]}" 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@485 -- # vm_shutdown 2 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@410 -- # vm_num_is_valid 2 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@411 -- # local vm_dir=/root/vhost_test/vms/2 00:07:34.331 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@412 -- # [[ ! -d /root/vhost_test/vms/2 ]] 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@417 -- # vm_is_running 2 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@362 -- # vm_num_is_valid 2 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/2 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/2/qemu.pid ]] 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # local vm_pid 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/2/qemu.pid 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # vm_pid=2024931 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # /bin/kill -0 2024931 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 0 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@424 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/2' 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/2' 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/2' 00:07:34.591 INFO: Shutting down virtual machine /root/vhost_test/vms/2 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@425 -- # set +e 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@426 -- # vm_exec 2 'nohup sh -c '\''shutdown -h -P now'\''' 00:07:34.591 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@329 -- # vm_num_is_valid 2 00:07:34.592 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.592 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.592 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@331 -- # local vm_num=2 00:07:34.592 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@332 -- # shift 00:07:34.592 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # vm_ssh_socket 2 00:07:34.592 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@312 -- # vm_num_is_valid 2 00:07:34.592 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.592 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.592 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/2 00:07:34.592 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/2/ssh_socket 00:07:34.592 00:15:04 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10200 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\''' 00:07:34.592 Warning: Permanently added '[127.0.0.1]:10200' (ED25519) to the list of known hosts. 
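Note: with shutdown requested on all three guests, the trace below is vm_shutdown_all's wait loop: up to 90 one-second polls, each pass probing the saved QEMU PID with kill -0 and dropping a VM from the set once its process is gone or its qemu.pid file has been removed. The "disabling controller" notices interleaved below are the vfio-user targets reacting as each guest detaches. A hedged sketch of the loop, assuming vms=(0 1 2):

  # Sketch of the wait loop traced below (the real loop sits around
  # vhost/common.sh lines 489-493 in this trace; simplified here).
  timeo=90
  while (( timeo-- > 0 && ${#vms[@]} > 0 )); do
      for vm in "${!vms[@]}"; do
          pidfile=/root/vhost_test/vms/${vms[vm]}/qemu.pid
          [[ -r $pidfile ]] && kill -0 "$(cat "$pidfile")" 2>/dev/null \
              || unset -v 'vms[vm]'   # VM gone: drop it from the set
      done
      (( ${#vms[@]} > 0 )) && sleep 1
  done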
00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@427 -- # notice 'VM2 is shutting down - wait a while to complete' 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'VM2 is shutting down - wait a while to complete' 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM2 is shutting down - wait a while to complete' 00:07:34.850 INFO: VM2 is shutting down - wait a while to complete 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@428 -- # set -e 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@488 -- # notice 'Waiting for VMs to shutdown...' 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...' 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...' 00:07:34.850 INFO: Waiting for VMs to shutdown... 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # (( timeo-- > 0 && 3 > 0 )) 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:07:34.850 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # vm_is_running 0 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@362 -- # vm_num_is_valid 0 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/0 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/0/qemu.pid ]] 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # local vm_pid 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/0/qemu.pid 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # vm_pid=2024426 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # /bin/kill -0 2024426 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 0 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # vm_is_running 1 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # local vm_pid 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # vm_pid=2024674 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # /bin/kill -0 2024674 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 0 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # vm_is_running 2 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@362 -- # vm_num_is_valid 2 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/2 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/2/qemu.pid ]] 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # local vm_pid 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/2/qemu.pid 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # vm_pid=2024931 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # /bin/kill -0 2024931 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 0 00:07:34.851 00:15:05 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@493 -- # sleep 1 00:07:34.851 [2024-10-09 00:15:05.472655] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/0/muser/domain/muser0/0: disabling controller 00:07:35.425 [2024-10-09 00:15:05.985840] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller 00:07:35.690 [2024-10-09 00:15:06.236533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/2/muser/domain/muser2/2: disabling controller 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # (( timeo-- > 0 && 3 > 0 )) 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # vm_is_running 0 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@362 -- # vm_num_is_valid 0 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/0 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]] 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@366 -- # return 1 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # unset -v 'vms[vm]' 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # vm_is_running 1 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@366 -- # return 1 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # unset -v 'vms[vm]' 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # vm_is_running 2 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@362 -- # vm_num_is_valid 2 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/2 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]] 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@369 -- # local vm_pid 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/2/qemu.pid 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@370 -- # vm_pid=2024931 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@372 -- # /bin/kill -0 2024931 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@373 -- # return 0 00:07:35.690 00:15:06 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@493 -- # sleep 1 00:07:37.068 00:15:07 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:07:37.068 00:15:07 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:07:37.068 00:15:07 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # vm_is_running 2 00:07:37.068 00:15:07 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@362 -- # vm_num_is_valid 2 00:07:37.068 00:15:07 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.068 00:15:07 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:37.068 00:15:07 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/2 00:07:37.068 00:15:07 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/2/qemu.pid ]] 00:07:37.068 00:15:07 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@366 -- # return 1 00:07:37.068 00:15:07 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@491 -- # unset -v 'vms[vm]' 00:07:37.068 00:15:07 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@493 -- # sleep 1 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@489 -- # (( timeo-- > 0 && 0 > 0 )) 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@496 -- # (( 0 == 0 )) 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@497 -- # notice 'All VMs successfully shut down' 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down' 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down' 00:07:38.008 INFO: All VMs successfully shut down 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@498 -- # return 0 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@79 -- # timing_enter clean_vfio_user 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # seq 0 2 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no) 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/0/muser 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode0 -t vfiouser -a /root/vhost_test/vms/0/muser/domain/muser0/0 -s 0 00:07:38.008 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode0 00:07:38.267 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no )) 00:07:38.267 00:15:08 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@88 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_delete Malloc0 00:07:38.835 00:15:09 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no) 00:07:38.835 00:15:09 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/1/muser 00:07:38.835 00:15:09 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock 
nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode1 -t vfiouser -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0 00:07:39.093 00:15:09 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode1 00:07:39.093 00:15:09 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no )) 00:07:39.093 00:15:09 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@88 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_malloc_delete Malloc1 00:07:39.756 00:15:10 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@81 -- # for i in $(seq 0 $vm_no) 00:07:39.756 00:15:10 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@82 -- # vm_muser_dir=/root/vhost_test/vms/2/muser 00:07:39.756 00:15:10 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@83 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode2 -t vfiouser -a /root/vhost_test/vms/2/muser/domain/muser2/2 -s 0 00:07:39.756 00:15:10 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@84 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode2 00:07:40.322 00:15:10 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@85 -- # (( i == vm_no )) 00:07:40.322 00:15:10 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@86 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@92 -- # vhost_kill 0 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@202 -- # local rc=0 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@203 -- # local vhost_name=0 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@205 -- # [[ -z 0 ]] 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@210 -- # local vhost_dir 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # get_vhost_dir 0 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@214 -- # [[ ! 
-r /root/vhost_test/vhost/0/vhost.pid ]] 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@219 -- # timing_enter vhost_kill 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@220 -- # local vhost_pid 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@221 -- # vhost_pid=2023150 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@222 -- # notice 'killing vhost (PID 2023150) app' 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 2023150) app' 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 2023150) app' 00:07:41.698 INFO: killing vhost (PID 2023150) app 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@224 -- # kill -INT 2023150 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit' 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit' 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit' 00:07:41.698 INFO: sent SIGINT to vhost app - waiting 60 seconds to exit 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i = 0 )) 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i < 60 )) 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@227 -- # kill -0 2023150 00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@228 -- # echo . 00:07:41.698 . 
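Note: just above, the harness read the vhost app's PID (2023150) from vhost.pid, sent it SIGINT, and began polling with kill -0, printing a dot per second for up to 60 seconds; in the next entries the second probe already reports "No such process", the loop breaks, and the vhost directory is removed. A hedged sketch of the whole teardown:

  # Sketch of the teardown traced around this point (vhost/common.sh
  # lines 221-229 and 252 in this trace; not the verbatim source).
  vhost_pid=$(cat /root/vhost_test/vhost/0/vhost.pid)
  kill -INT "$vhost_pid"
  for ((i = 0; i < 60; i++)); do
      kill -0 "$vhost_pid" 2>/dev/null || break   # gone: stop waiting
      echo -n .
      sleep 1
  done
  rm -rf /root/vhost_test/vhost/0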
00:07:41.698 00:15:12 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@229 -- # sleep 1 00:07:42.635 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i++ )) 00:07:42.635 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@226 -- # (( i < 60 )) 00:07:42.635 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@227 -- # kill -0 2023150 00:07:42.635 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (2023150) - No such process 00:07:42.635 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@231 -- # break 00:07:42.635 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@234 -- # kill -0 2023150 00:07:42.635 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (2023150) - No such process 00:07:42.635 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@239 -- # kill -0 2023150 00:07:42.635 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (2023150) - No such process 00:07:42.635 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@250 -- # timing_exit vhost_kill 00:07:42.635 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.635 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@252 -- # rm -rf /root/vhost_test/vhost/0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@254 -- # return 0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@93 -- # timing_exit clean_vfio_user 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@94 -- # vhosttestfini 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@54 -- # '[' '' == iso ']' 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/vfio_user_fio.sh@1 -- # clean_vfio_user 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@6 -- # vm_kill_all 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@469 -- # local vm 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@470 -- # vm_list_all 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@459 -- # vms=() 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@459 -- # local vms 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@460 -- # vms=("$VM_DIR"/+([0-9])) 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@461 -- # (( 3 > 0 )) 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@462 -- # basename --multiple /root/vhost_test/vms/0 /root/vhost_test/vms/1 /root/vhost_test/vms/2 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@470 -- # for vm in $(vm_list_all) 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@471 -- # vm_kill 0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # vm_num_is_valid 0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 
-- # return 0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@436 -- # local vm_dir=/root/vhost_test/vms/0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@438 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]] 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@439 -- # return 0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@470 -- # for vm in $(vm_list_all) 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@471 -- # vm_kill 1 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # vm_num_is_valid 1 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@436 -- # local vm_dir=/root/vhost_test/vms/1 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@438 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@439 -- # return 0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@470 -- # for vm in $(vm_list_all) 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@471 -- # vm_kill 2 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@435 -- # vm_num_is_valid 2 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@302 -- # return 0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@436 -- # local vm_dir=/root/vhost_test/vms/2 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@438 -- # [[ ! -r /root/vhost_test/vms/2/qemu.pid ]] 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@439 -- # return 0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@474 -- # rm -rf /root/vhost_test/vms 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- nvme/common.sh@7 -- # vhost_kill 0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@202 -- # local rc=0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@203 -- # local vhost_name=0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@205 -- # [[ -z 0 ]] 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@210 -- # local vhost_dir 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # get_vhost_dir 0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@105 -- # local vhost_name=0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@214 -- # [[ ! 
-r /root/vhost_test/vhost/0/vhost.pid ]] 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@215 -- # warning 'no vhost pid file found' 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@90 -- # message WARN 'no vhost pid file found' 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@60 -- # local verbose_out 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@61 -- # false 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@62 -- # verbose_out= 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@69 -- # local msg_type=WARN 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@70 -- # shift 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@71 -- # echo -e 'WARN: no vhost pid file found' 00:07:42.896 WARN: no vhost pid file found 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- vhost/common.sh@216 -- # return 0 00:07:42.896 00:07:42.896 real 1m1.303s 00:07:42.896 user 4m7.890s 00:07:42.896 sys 0m3.357s 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:07:42.896 ************************************ 00:07:42.896 END TEST vfio_user_nvme_fio 00:07:42.896 ************************************ 00:07:42.896 00:15:13 vfio_user_qemu -- vfio_user/vfio_user.sh@16 -- # run_test vfio_user_nvme_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh 00:07:42.896 00:15:13 vfio_user_qemu -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.896 00:15:13 vfio_user_qemu -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.896 00:15:13 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x 00:07:42.896 ************************************ 00:07:42.896 START TEST vfio_user_nvme_restart_vm 00:07:42.896 ************************************ 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh 00:07:42.896 * Looking for test storage... 
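For context, the vhost_kill path traced above reduces to a small poll-and-clean pattern: check the pid file, wait for the process to exit, remove the target directory. A condensed sketch (not the harness's exact code), using the 60-iteration budget and paths visible in the trace:

    # sketch of the vhost_kill pattern seen above; the real common.sh does more bookkeeping
    vhost_dir=/root/vhost_test/vhost/0
    pid_file=$vhost_dir/vhost.pid

    if [[ ! -r $pid_file ]]; then
        echo 'WARN: no vhost pid file found' >&2    # the WARN printed above
    else
        pid=$(<"$pid_file")
        for ((i = 0; i < 60; i++)); do              # poll up to ~60 seconds
            kill -0 "$pid" 2>/dev/null || break     # kill -0 only tests existence
            sleep 1
        done
    fi
    rm -rf "$vhost_dir"                             # drop sockets and pid file

kill -0 sends no signal; it only checks that the PID still exists, which is why an already-exited target shows up as the "No such process" errors logged earlier in the trace.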
00:07:42.896 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1681 -- # lcov --version 00:07:42.896 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:43.156 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:43.156 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@344 -- # case "$op" in 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@345 -- # : 1 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@365 -- # decimal 1 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@353 -- # local d=1 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@355 -- # echo 1 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@366 -- # decimal 2 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@353 -- # local d=2 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@355 -- # echo 2 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scripts/common.sh@368 -- # return 0 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:43.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.157 --rc genhtml_branch_coverage=1 00:07:43.157 --rc genhtml_function_coverage=1 00:07:43.157 --rc genhtml_legend=1 00:07:43.157 --rc geninfo_all_blocks=1 00:07:43.157 --rc geninfo_unexecuted_blocks=1 00:07:43.157 00:07:43.157 ' 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:43.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.157 --rc genhtml_branch_coverage=1 00:07:43.157 --rc genhtml_function_coverage=1 00:07:43.157 --rc genhtml_legend=1 00:07:43.157 --rc geninfo_all_blocks=1 00:07:43.157 --rc geninfo_unexecuted_blocks=1 00:07:43.157 00:07:43.157 ' 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:43.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.157 --rc genhtml_branch_coverage=1 00:07:43.157 --rc genhtml_function_coverage=1 00:07:43.157 --rc genhtml_legend=1 00:07:43.157 --rc geninfo_all_blocks=1 00:07:43.157 --rc geninfo_unexecuted_blocks=1 00:07:43.157 00:07:43.157 ' 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:43.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.157 --rc genhtml_branch_coverage=1 00:07:43.157 --rc genhtml_function_coverage=1 00:07:43.157 --rc genhtml_legend=1 00:07:43.157 --rc geninfo_all_blocks=1 00:07:43.157 --rc geninfo_unexecuted_blocks=1 00:07:43.157 00:07:43.157 ' 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@6 -- 
# : 128 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@7 -- # : 512 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@6 -- # : false 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@9 -- # : qemu-img 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.. 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/vfio_user_restart_vm.sh 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]' 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- 
common/autotest.config@8 -- # VM_1_qemu_numa_node=0 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh 00:07:43.157 00:15:13 
vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2 00:07:43.157 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]] 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/nvme/common.sh 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]' 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@13 -- # get_nvme_bdfs 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1496 -- # local bdfs 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:43.158 00:15:13 
vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@14 -- # get_vhost_dir 0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@14 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock' 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@16 -- # trap clean_vfio_user EXIT 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@18 -- # vhosttestinit 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']' 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]] 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@20 -- # vfio_user_run 0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@11 -- # local vhost_name=0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@12 -- # local vfio_user_dir nvmf_pid_file rpc_py 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@14 -- # get_vhost_dir 0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@14 -- # vfio_user_dir=/root/vhost_test/vhost/0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@15 -- # nvmf_pid_file=/root/vhost_test/vhost/0/vhost.pid 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@16 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock' 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@18 -- # mkdir -p /root/vhost_test/vhost/0 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@20 -- # timing_enter vfio_user_start 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@22 -- # nvmfpid=2034424 00:07:43.158 00:15:13 
vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@23 -- # echo 2034424 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/nvmf_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@25 -- # echo 'Process pid: 2034424' 00:07:43.158 Process pid: 2034424 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@26 -- # echo 'waiting for app to run...' 00:07:43.158 waiting for app to run... 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@27 -- # waitforlisten 2034424 /root/vhost_test/vhost/0/rpc.sock 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@831 -- # '[' -z 2034424 ']' 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@835 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...' 00:07:43.158 Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock... 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.158 00:15:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:07:43.418 [2024-10-09 00:15:13.797000] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
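The vfio_user_start sequence traced above amounts to: launch nvmf_tgt against a private RPC socket, record its pid, wait for the socket, then create the VFIOUSER transport. A minimal sketch under those assumptions (waitforlisten is approximated by a socket poll; nvmf_tgt and rpc.py are assumed on PATH; the flags are the ones shown in the log):

    # sketch of nvme/common.sh's vfio_user_run, per the trace
    vfio_user_dir=/root/vhost_test/vhost/0
    rpc_sock=$vfio_user_dir/rpc.sock

    mkdir -p "$vfio_user_dir"
    nvmf_tgt -r "$rpc_sock" -m 0xf -s 512 &         # 4 reactors, 512 MB hugepage memory
    echo $! > "$vfio_user_dir/vhost.pid"

    until [[ -S $rpc_sock ]]; do sleep 0.1; done    # stand-in for waitforlisten

    rpc.py -s "$rpc_sock" nvmf_create_transport -t VFIOUSER

With the transport up, the trace then attaches the physical controller (bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0), creates the subsystem, adds the namespace, and exposes it as a VFIOUSER listener under the VM's muser directory.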
00:07:43.418 [2024-10-09 00:15:13.797096] [ DPDK EAL parameters: nvmf --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2034424 ] 00:07:43.418 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.676 [2024-10-09 00:15:14.100151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.676 [2024-10-09 00:15:14.292993] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.676 [2024-10-09 00:15:14.293076] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.676 [2024-10-09 00:15:14.293146] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.676 [2024-10-09 00:15:14.293155] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.242 00:15:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.242 00:15:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@864 -- # return 0 00:07:44.242 00:15:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_transport -t VFIOUSER 00:07:44.242 00:15:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@30 -- # timing_exit vfio_user_start 00:07:44.242 00:15:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:44.242 00:15:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:07:44.242 00:15:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@22 -- # vm_muser_dir=/root/vhost_test/vms/1/muser 00:07:44.242 00:15:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@23 -- # rm -rf /root/vhost_test/vms/1/muser 00:07:44.242 00:15:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@24 -- # mkdir -p /root/vhost_test/vms/1/muser/domain/muser1/1 00:07:44.242 00:15:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@26 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:5e:00.0 00:07:47.522 Nvme0n1 00:07:47.522 00:15:17 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -s SPDK001 -a 00:07:47.522 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@28 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Nvme0n1 00:07:47.779 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@29 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@31 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- 
vhost/common.sh@511 -- # xtrace_disable 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:07:48.038 WARN: removing existing VM in '/root/vhost_test/vms/1' 00:07:48.038 INFO: Creating new VM in /root/vhost_test/vms/1 00:07:48.038 INFO: No '--os-mode' parameter provided - using 'snapshot' 00:07:48.038 INFO: TASK MASK: 6-7 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@664 -- # local node_num=0 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@665 -- # local boot_disk_present=false 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@666 -- # notice 'NUMA NODE: 0' 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0' 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0' 00:07:48.038 INFO: NUMA NODE: 0 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@667 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize) 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@668 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind") 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@669 -- # [[ snapshot == snapshot ]] 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@669 -- # cmd+=(-snapshot) 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@670 -- # [[ -n '' ]] 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@671 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait") 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@672 -- # cmd+=(-numa "node,memdev=mem") 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@673 -- # cmd+=(-pidfile "$qemu_pid_file") 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@674 -- # cmd+=(-serial "file:$vm_dir/serial.log") 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@675 -- # cmd+=(-D "$vm_dir/qemu.log") 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios") 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@677 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765") 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@678 -- # cmd+=(-net nic) 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@679 -- # [[ -z '' ]] 00:07:48.038 00:15:18 
vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@680 -- # cmd+=(-drive "file=$os,if=none,id=os_disk") 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@681 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0") 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@684 -- # (( 1 == 0 )) 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@686 -- # (( 1 == 0 )) 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@691 -- # for disk in "${disks[@]}" 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@694 -- # IFS=, 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@694 -- # read -r disk disk_type _ 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@695 -- # [[ -z '' ]] 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@695 -- # disk_type=vfio_user 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@697 -- # case $disk_type in 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@751 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl' 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl' 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl' 00:07:48.038 INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@752 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl") 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@753 -- # [[ 1 == '' ]] 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@773 -- # [[ -n '' ]] 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@778 -- # (( 0 )) 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@779 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh' 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh' 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:07:48.038 00:15:18 
vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh' 00:07:48.038 INFO: Saving to /root/vhost_test/vms/1/run.sh 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # cat 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@817 -- # chmod +x /root/vhost_test/vms/1/run.sh 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@820 -- # echo 10100 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@821 -- # echo 10101 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@822 -- # echo 10102 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@824 -- # rm -f /root/vhost_test/vms/1/migration_port 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@825 -- # [[ -z '' ]] 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@827 -- # echo 10104 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@828 -- # echo 101 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@830 -- # [[ -z '' ]] 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@831 -- # [[ -z '' ]] 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@32 -- # vm_run 1 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@835 -- # local OPTIND optchar vm 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@836 -- # local run_all=false 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@837 -- # local vms_to_run= 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@839 -- # getopts a-: optchar 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@849 -- # false 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@852 -- # shift 0 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@853 -- # for vm in "$@" 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@854 -- # vm_num_is_valid 1 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:07:48.038 00:15:18 
vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@855 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]] 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@859 -- # vms_to_run+=' 1' 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@863 -- # for vm in $vms_to_run 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@864 -- # vm_is_running 1 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:07:48.038 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:07:48.039 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:07:48.039 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@366 -- # return 1 00:07:48.039 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@869 -- # notice 'running /root/vhost_test/vms/1/run.sh' 00:07:48.039 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh' 00:07:48.039 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:07:48.039 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:07:48.039 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:07:48.039 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:07:48.039 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:07:48.039 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh' 00:07:48.039 INFO: running /root/vhost_test/vms/1/run.sh 00:07:48.039 00:15:18 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@870 -- # /root/vhost_test/vms/1/run.sh 00:07:48.297 Running VM in /root/vhost_test/vms/1 00:07:48.555 Waiting for QEMU pid file 00:07:48.555 [2024-10-09 00:15:19.171559] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller 00:07:49.488 === qemu.log === 00:07:49.488 === qemu.log === 00:07:49.488 00:15:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@33 -- # vm_wait_for_boot 60 1 00:07:49.488 00:15:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@906 -- # assert_number 60 00:07:49.488 00:15:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@274 -- # [[ 60 =~ [0-9]+ ]] 00:07:49.488 00:15:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@274 -- # return 0 00:07:49.488 00:15:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@908 -- # xtrace_disable 00:07:49.488 00:15:19 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:07:49.488 INFO: Waiting for VMs to boot 00:07:49.488 INFO: waiting for VM1 (/root/vhost_test/vms/1) 00:07:59.452 [2024-10-09 00:15:29.835055] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller 00:07:59.452 [2024-10-09 00:15:29.844097] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller 00:07:59.452 [2024-10-09 00:15:29.848126] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller 00:08:11.667 00:08:11.667 INFO: VM1 ready 00:08:11.667 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:08:11.667 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:08:11.667 INFO: all VMs ready 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@966 -- # return 0 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@35 -- # vm_exec 1 lsblk 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@332 -- # shift 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 lsblk 00:08:11.667 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
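The lsblk output that follows was captured inside the guest: vm_exec resolves the VM's ssh_socket file to the host port QEMU forwards to guest port 22 (10100 for this VM) and runs the command over sshpass. A minimal sketch of that path, with values taken from the trace:

    # sketch of vm_exec's transport, per the trace
    vm_num=1
    ssh_port=$(</root/vhost_test/vms/$vm_num/ssh_socket)    # 10100 here

    sshpass -p root ssh \
        -o UserKnownHostsFile=/dev/null \
        -o StrictHostKeyChecking=no \
        -o User=root -p "$ssh_port" 127.0.0.1 lsblk

Disabling host-key checking is what produces the repeated "Permanently added '[127.0.0.1]:10100'" warnings around each guest command.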
00:08:11.667 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS 00:08:11.667 sda 8:0 0 5G 0 disk 00:08:11.667 ├─sda1 8:1 0 1M 0 part 00:08:11.667 ├─sda2 8:2 0 1000M 0 part /boot 00:08:11.667 ├─sda3 8:3 0 100M 0 part /boot/efi 00:08:11.667 ├─sda4 8:4 0 4M 0 part 00:08:11.667 └─sda5 8:5 0 3.9G 0 part /home 00:08:11.667 / 00:08:11.667 zram0 252:0 0 946M 0 disk [SWAP] 00:08:11.667 nvme0n1 259:1 0 931.5G 0 disk 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@37 -- # vm_shutdown_all 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@480 -- # local timeo=90 vms vm 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@482 -- # vms=($(vm_list_all)) 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@482 -- # vm_list_all 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@459 -- # vms=() 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@459 -- # local vms 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@460 -- # vms=("$VM_DIR"/+([0-9])) 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@461 -- # (( 1 > 0 )) 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@462 -- # basename --multiple /root/vhost_test/vms/1 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@484 -- # for vm in "${vms[@]}" 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@485 -- # vm_shutdown 1 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@410 -- # vm_num_is_valid 1 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@411 -- # local vm_dir=/root/vhost_test/vms/1 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@412 -- # [[ ! -d /root/vhost_test/vms/1 ]] 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@417 -- # vm_is_running 1 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # vm_pid=2035374 00:08:11.667 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2035374 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 0 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@424 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1' 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1' 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1' 00:08:11.668 INFO: Shutting down virtual machine /root/vhost_test/vms/1 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@425 -- # set +e 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@426 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\''' 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@332 -- # shift 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\''' 00:08:11.668 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
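What follows is the shutdown wait: after the in-guest 'shutdown -h -P now', vm_shutdown_all polls each VM's qemu.pid with kill -0 once per second, up to the 90-iteration budget (timeo=90) seen in the trace. Condensed to a sketch:

    # sketch of the wait loop below; vm_is_running treats a missing pidfile as "stopped"
    pid_file=/root/vhost_test/vms/1/qemu.pid
    for ((timeo = 90; timeo > 0; timeo--)); do
        [[ -r $pid_file ]] || break                        # run.sh's pidfile is gone
        /bin/kill -0 "$(<"$pid_file")" 2>/dev/null || break
        sleep 1
    done
    echo 'INFO: All VMs successfully shut down'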
00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@427 -- # notice 'VM1 is shutting down - wait a while to complete' 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete' 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete' 00:08:11.668 INFO: VM1 is shutting down - wait a while to complete 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@428 -- # set -e 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@488 -- # notice 'Waiting for VMs to shutdown...' 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...' 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...' 00:08:11.668 INFO: Waiting for VMs to shutdown... 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # vm_pid=2035374 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2035374 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 0 00:08:11.668 00:15:41 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:08:12.234 [2024-10-09 00:15:42.705231] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller 00:08:12.234 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:08:12.234 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:08:12.234 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:08:12.234 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:08:12.234 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.234 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:12.234 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:08:12.234 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:08:12.234 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:08:12.235 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:08:12.235 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # vm_pid=2035374 00:08:12.235 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2035374 00:08:12.235 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 0 00:08:12.235 00:15:42 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:08:13.188 00:15:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:08:13.188 00:15:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:08:13.188 00:15:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:08:13.188 00:15:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:08:13.188 00:15:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.188 00:15:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:13.189 00:15:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:08:13.189 00:15:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:08:13.189 00:15:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@366 -- # return 1 00:08:13.189 00:15:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # unset -v 'vms[vm]' 00:08:13.189 00:15:43 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 0 > 0 )) 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( 0 == 0 )) 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # notice 'All VMs successfully shut down' 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down' 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down' 00:08:14.567 INFO: All VMs successfully shut down 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # return 0 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@40 -- # vm_setup --disk-type=vfio_user --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@511 -- # xtrace_disable 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:08:14.567 WARN: removing existing VM in '/root/vhost_test/vms/1' 00:08:14.567 INFO: Creating new VM in /root/vhost_test/vms/1 00:08:14.567 INFO: No '--os-mode' parameter provided - using 'snapshot' 00:08:14.567 INFO: TASK MASK: 6-7 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@664 -- # local node_num=0 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@665 -- # local boot_disk_present=false 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@666 -- # notice 'NUMA NODE: 0' 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0' 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0' 00:08:14.567 INFO: NUMA NODE: 0 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@667 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu 
host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize) 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@668 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind") 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@669 -- # [[ snapshot == snapshot ]] 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@669 -- # cmd+=(-snapshot) 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@670 -- # [[ -n '' ]] 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@671 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait") 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@672 -- # cmd+=(-numa "node,memdev=mem") 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@673 -- # cmd+=(-pidfile "$qemu_pid_file") 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@674 -- # cmd+=(-serial "file:$vm_dir/serial.log") 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@675 -- # cmd+=(-D "$vm_dir/qemu.log") 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@676 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios") 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@677 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765") 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@678 -- # cmd+=(-net nic) 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@679 -- # [[ -z '' ]] 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@680 -- # cmd+=(-drive "file=$os,if=none,id=os_disk") 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@681 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0") 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@684 -- # (( 1 == 0 )) 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@686 -- # (( 1 == 0 )) 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@691 -- # for disk in "${disks[@]}" 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@694 -- # IFS=, 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@694 -- # read -r disk disk_type _ 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@695 -- # [[ -z '' ]] 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@695 -- # disk_type=vfio_user 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@697 -- # case $disk_type in 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@751 -- # notice 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl' 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl' 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:14.567 00:15:44 
vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl' 00:08:14.567 INFO: using socket /root/vhost_test/vms/1/domain/muser1/1/cntrl 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@752 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/$vm_num/muser/domain/muser$disk/$disk/cntrl") 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@753 -- # [[ 1 == '' ]] 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@773 -- # [[ -n '' ]] 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@778 -- # (( 0 )) 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@779 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh' 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh' 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh' 00:08:14.567 INFO: Saving to /root/vhost_test/vms/1/run.sh 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # cat 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@780 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/1/muser/domain/muser1/1/cntrl 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@817 -- # chmod +x /root/vhost_test/vms/1/run.sh 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@820 -- # echo 10100 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@821 -- # echo 10101 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@822 -- # echo 10102 00:08:14.567 
00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@824 -- # rm -f /root/vhost_test/vms/1/migration_port 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@825 -- # [[ -z '' ]] 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@827 -- # echo 10104 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@828 -- # echo 101 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@830 -- # [[ -z '' ]] 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@831 -- # [[ -z '' ]] 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@41 -- # vm_run 1 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@835 -- # local OPTIND optchar vm 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@836 -- # local run_all=false 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@837 -- # local vms_to_run= 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@839 -- # getopts a-: optchar 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@849 -- # false 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@852 -- # shift 0 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@853 -- # for vm in "$@" 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@854 -- # vm_num_is_valid 1 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.567 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@855 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]] 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@859 -- # vms_to_run+=' 1' 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@863 -- # for vm in $vms_to_run 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@864 -- # vm_is_running 1 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@366 -- # return 1 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@869 -- # notice 'running /root/vhost_test/vms/1/run.sh' 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh' 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh' 00:08:14.568 INFO: running /root/vhost_test/vms/1/run.sh 00:08:14.568 00:15:44 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@870 -- # /root/vhost_test/vms/1/run.sh 00:08:14.568 Running VM in /root/vhost_test/vms/1 00:08:14.826 Waiting for QEMU pid file 00:08:14.826 [2024-10-09 00:15:45.430041] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller 00:08:15.762 === qemu.log === 00:08:15.762 === qemu.log === 00:08:15.762 00:15:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@42 -- # vm_wait_for_boot 60 1 00:08:15.762 00:15:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@906 -- # assert_number 60 00:08:15.762 00:15:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@274 -- # [[ 60 =~ [0-9]+ ]] 00:08:15.762 00:15:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@274 -- # return 0 00:08:15.762 00:15:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@908 -- # xtrace_disable 00:08:15.762 00:15:46 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:08:15.762 INFO: Waiting for VMs to boot 00:08:15.762 INFO: waiting for VM1 (/root/vhost_test/vms/1) 00:08:25.734 [2024-10-09 00:15:56.153985] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller 00:08:25.734 [2024-10-09 00:15:56.171098] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: disabling controller 00:08:25.734 [2024-10-09 00:15:56.175123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /root/vhost_test/vms/1/muser/domain/muser1/1: enabling controller 00:08:37.937 00:08:37.937 INFO: VM1 ready 00:08:37.937 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:08:37.937 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
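vm_wait_for_boot 60 1 gives VM1 up to 60 seconds to come back; readiness is effectively an SSH poll against the forwarded port (10100 for this VM), which is why the known-hosts warnings above appear as soon as the guest answers. A minimal sketch of that shape, assuming the same sshpass/ssh invocation the harness uses (not the actual common.sh implementation):

    # Poll the guest's forwarded SSH port until it answers or the
    # boot timeout runs out.
    wait_for_ssh() {
        local port=$1 timeout=${2:-60}
        while (( timeout-- > 0 )); do
            sshpass -p root ssh -p "$port" -o StrictHostKeyChecking=no \
                -o UserKnownHostsFile=/dev/null -o ConnectTimeout=1 \
                root@127.0.0.1 true 2>/dev/null && return 0
            sleep 1
        done
        return 1
    }
    wait_for_ssh 10100 60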
00:08:37.937 INFO: all VMs ready 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@966 -- # return 0 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@44 -- # vm_exec 1 lsblk 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@332 -- # shift 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 lsblk 00:08:37.937 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:08:37.937 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS 00:08:37.937 sda 8:0 0 5G 0 disk 00:08:37.937 ├─sda1 8:1 0 1M 0 part 00:08:37.937 ├─sda2 8:2 0 1000M 0 part /boot 00:08:37.937 ├─sda3 8:3 0 100M 0 part /boot/efi 00:08:37.937 ├─sda4 8:4 0 4M 0 part 00:08:37.937 └─sda5 8:5 0 3.9G 0 part /home 00:08:37.937 / 00:08:37.937 zram0 252:0 0 946M 0 disk [SWAP] 00:08:37.937 nvme0n1 259:1 0 931.5G 0 disk 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_ns nqn.2019-07.io.spdk:cnode1 1 00:08:37.937 00:16:07 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode1 -t vfiouser -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@53 -- # vm_exec 1 'echo 1 > /sys/class/nvme/nvme0/device/remove' 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@332 -- # shift 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 
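The hot-remove that follows happens in two halves: vfio_user_restart_vm.sh first tears the storage down on the target side over the RPC socket, then tells the guest kernel to drop the now-dead controller. Condensed from the trace above and below (same arguments; rpc/sock are shorthand for the paths shown in the log):

    rpc=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
    sock=/root/vhost_test/vhost/0/rpc.sock

    # Target side: remove namespace 1, then the vfiouser listener.
    $rpc -s $sock nvmf_subsystem_remove_ns nqn.2019-07.io.spdk:cnode1 1
    $rpc -s $sock nvmf_subsystem_remove_listener nqn.2019-07.io.spdk:cnode1 \
        -t vfiouser -a /root/vhost_test/vms/1/muser/domain/muser1/1 -s 0

    # Guest side: detach the orphaned NVMe device through sysfs.
    echo 1 > /sys/class/nvme/nvme0/device/remove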
00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'echo 1 > /sys/class/nvme/nvme0/device/remove' 00:08:37.937 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@55 -- # vm_shutdown_all 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@480 -- # local timeo=90 vms vm 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@482 -- # vms=($(vm_list_all)) 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@482 -- # vm_list_all 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@459 -- # vms=() 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@459 -- # local vms 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@460 -- # vms=("$VM_DIR"/+([0-9])) 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@461 -- # (( 1 > 0 )) 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@462 -- # basename --multiple /root/vhost_test/vms/1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@484 -- # for vm in "${vms[@]}" 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@485 -- # vm_shutdown 1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@410 -- # vm_num_is_valid 1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@411 -- # local vm_dir=/root/vhost_test/vms/1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@412 -- # [[ ! -d /root/vhost_test/vms/1 ]] 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@417 -- # vm_is_running 1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # vm_pid=2039714 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2039714 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 0 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@424 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1' 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1' 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1' 00:08:37.937 INFO: Shutting down virtual machine /root/vhost_test/vms/1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@425 -- # set +e 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@426 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\''' 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@332 -- # shift 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:08:37.937 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:08:37.938 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\''' 00:08:37.938 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
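With the disk gone, vm_shutdown_all asks the guest to power off ('shutdown -h -P now' over SSH) and then falls into the wait loop traced below. The liveness check is nothing more than pidfile plus kill -0; a minimal sketch of the pattern, using the same paths and the 90-iteration bound (timeo=90) from common.sh:

    # A VM counts as running while its pidfile is readable and the
    # recorded pid still answers signal 0.
    vm_alive() {
        local pidfile=$1 pid
        [[ -r $pidfile ]] || return 1
        pid=$(<"$pidfile")
        /bin/kill -0 "$pid" 2>/dev/null
    }

    for ((i = 0; i < 90; i++)); do
        vm_alive /root/vhost_test/vms/1/qemu.pid || break
        sleep 1
    done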
00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@427 -- # notice 'VM1 is shutting down - wait a while to complete' 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete' 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete' 00:08:38.196 INFO: VM1 is shutting down - wait a while to complete 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@428 -- # set -e 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@488 -- # notice 'Waiting for VMs to shutdown...' 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...' 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...' 00:08:38.196 INFO: Waiting for VMs to shutdown... 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # vm_pid=2039714 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2039714 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 0 00:08:38.196 00:16:08 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:08:39.129 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:08:39.129 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:08:39.129 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:08:39.129 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:08:39.129 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.129 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:39.129 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:08:39.129 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:08:39.129 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:08:39.129 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:08:39.130 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@370 -- # vm_pid=2039714 00:08:39.130 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2039714 00:08:39.130 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@373 -- # return 0 00:08:39.130 00:16:09 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:08:40.503 00:16:10 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:08:40.503 00:16:10 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:08:40.503 00:16:10 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:08:40.503 00:16:10 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:08:40.503 00:16:10 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.503 00:16:10 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:40.503 00:16:10 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:08:40.503 00:16:10 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:08:40.503 00:16:10 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@366 -- # return 1 00:08:40.503 00:16:10 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@491 -- # unset -v 'vms[vm]' 00:08:40.503 00:16:10 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:08:41.437 00:16:11 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 0 > 0 )) 00:08:41.437 00:16:11 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@496 -- # (( 0 == 0 )) 00:08:41.437 00:16:11 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@497 -- # notice 'All VMs successfully shut down' 00:08:41.437 00:16:11 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down' 00:08:41.437 00:16:11 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:41.438 00:16:11 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:41.438 00:16:11 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:41.438 00:16:11 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:41.438 00:16:11 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:41.438 00:16:11 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down' 00:08:41.438 INFO: All VMs successfully shut down 00:08:41.438 00:16:11 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@498 -- # return 0 00:08:41.438 00:16:11 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@57 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@58 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode1 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@60 -- # vhosttestfini 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']' 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/vfio_user_restart_vm.sh@1 -- # clean_vfio_user 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@6 -- # vm_kill_all 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@469 -- # local vm 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@470 -- # vm_list_all 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@459 -- # vms=() 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@459 -- # local vms 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@460 -- # vms=("$VM_DIR"/+([0-9])) 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@461 -- # (( 1 > 0 )) 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@462 -- # basename --multiple /root/vhost_test/vms/1 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@470 -- # for vm in $(vm_list_all) 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@471 -- # vm_kill 1 
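Target-side cleanup mirrors setup in reverse: the NVMe bdev controller is detached, then the subsystem that exposed it is deleted. Both calls appear verbatim in the trace; collected here for reference (rpc/sock shorthand as before):

    rpc=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py
    sock=/root/vhost_test/vhost/0/rpc.sock

    # Detach the NVMe controller backing the bdev, then drop the subsystem.
    $rpc -s $sock bdev_nvme_detach_controller Nvme0
    $rpc -s $sock nvmf_delete_subsystem nqn.2019-07.io.spdk:cnode1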
00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@435 -- # vm_num_is_valid 1 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@436 -- # local vm_dir=/root/vhost_test/vms/1 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@438 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@439 -- # return 0 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@474 -- # rm -rf /root/vhost_test/vms 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- nvme/common.sh@7 -- # vhost_kill 0 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@202 -- # local rc=0 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]] 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@210 -- # local vhost_dir 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@214 -- # [[ ! 
-r /root/vhost_test/vhost/0/vhost.pid ]] 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@220 -- # local vhost_pid 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@221 -- # vhost_pid=2034424 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 2034424) app' 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 2034424) app' 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 2034424) app' 00:08:42.941 INFO: killing vhost (PID 2034424) app 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@224 -- # kill -INT 2034424 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit' 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit' 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@61 -- # false 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@70 -- # shift 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit' 00:08:42.941 INFO: sent SIGINT to vhost app - waiting 60 seconds to exit 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i = 0 )) 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i < 60 )) 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@227 -- # kill -0 2034424 00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@228 -- # echo . 00:08:42.941 . 
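vhost_kill's shape, as traced here: read the pid from vhost.pid, send SIGINT, then poll kill -0 for up to 60 iterations, printing a dot per second while the app drains (in this run it exits after a single dot). A compressed sketch of just the path the trace exercises:

    pid=$(</root/vhost_test/vhost/0/vhost.pid)
    kill -INT "$pid"
    for ((i = 0; i < 60; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # pid gone -> app exited
        echo -n .
        sleep 1
    done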
00:08:42.941 00:16:13 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@229 -- # sleep 1 00:08:43.880 00:16:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i++ )) 00:08:43.880 00:16:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@226 -- # (( i < 60 )) 00:08:43.880 00:16:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@227 -- # kill -0 2034424 00:08:43.880 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (2034424) - No such process 00:08:43.880 00:16:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@231 -- # break 00:08:43.880 00:16:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@234 -- # kill -0 2034424 00:08:43.880 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (2034424) - No such process 00:08:43.880 00:16:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@239 -- # kill -0 2034424 00:08:43.880 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (2034424) - No such process 00:08:43.880 00:16:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@250 -- # timing_exit vhost_kill 00:08:43.880 00:16:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:43.880 00:16:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:08:43.880 00:16:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@252 -- # rm -rf /root/vhost_test/vhost/0 00:08:43.880 00:16:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- vhost/common.sh@254 -- # return 0 00:08:43.880 00:08:43.880 real 1m1.084s 00:08:43.880 user 3m58.782s 00:08:43.880 sys 0m2.073s 00:08:43.880 00:16:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.880 00:16:14 vfio_user_qemu.vfio_user_nvme_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:08:43.880 ************************************ 00:08:43.880 END TEST vfio_user_nvme_restart_vm 00:08:43.880 ************************************ 00:08:44.140 00:16:14 vfio_user_qemu -- vfio_user/vfio_user.sh@17 -- # run_test vfio_user_virtio_blk_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_blk 00:08:44.140 00:16:14 vfio_user_qemu -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:44.140 00:16:14 vfio_user_qemu -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.140 00:16:14 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x 00:08:44.140 ************************************ 00:08:44.140 START TEST vfio_user_virtio_blk_restart_vm 00:08:44.140 ************************************ 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_blk 00:08:44.140 * Looking for test storage... 
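The real/user/sys totals and the starred START/END banners come from the harness's run_test wrapper, which times each test script and brackets its output. Roughly (an illustrative reduction, not the exact autotest_common.sh helper):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                 # prints real/user/sys on completion
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }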
00:08:44.140 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1681 -- # lcov --version 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@344 -- # case "$op" in 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@345 -- # : 1 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@365 -- # decimal 1 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@353 -- # local d=1 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@355 -- # echo 1 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@366 -- # decimal 2 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@353 -- # local d=2 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@355 -- # echo 2 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scripts/common.sh@368 -- # return 0 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:44.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.140 --rc genhtml_branch_coverage=1 00:08:44.140 --rc genhtml_function_coverage=1 00:08:44.140 --rc genhtml_legend=1 00:08:44.140 --rc geninfo_all_blocks=1 00:08:44.140 --rc geninfo_unexecuted_blocks=1 00:08:44.140 00:08:44.140 ' 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:44.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.140 --rc genhtml_branch_coverage=1 00:08:44.140 --rc genhtml_function_coverage=1 00:08:44.140 --rc genhtml_legend=1 00:08:44.140 --rc geninfo_all_blocks=1 00:08:44.140 --rc geninfo_unexecuted_blocks=1 00:08:44.140 00:08:44.140 ' 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:44.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.140 --rc genhtml_branch_coverage=1 00:08:44.140 --rc genhtml_function_coverage=1 00:08:44.140 --rc genhtml_legend=1 00:08:44.140 --rc geninfo_all_blocks=1 00:08:44.140 --rc geninfo_unexecuted_blocks=1 00:08:44.140 00:08:44.140 ' 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:44.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.140 --rc genhtml_branch_coverage=1 00:08:44.140 --rc genhtml_function_coverage=1 00:08:44.140 --rc genhtml_legend=1 00:08:44.140 --rc geninfo_all_blocks=1 00:08:44.140 --rc geninfo_unexecuted_blocks=1 00:08:44.140 00:08:44.140 ' 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@10 -- # source 
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@6 -- # : 128 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@7 -- # : 512 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@6 -- # : false 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@9 -- # : qemu-img 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.. 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh 00:08:44.140 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]' 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@4 -- # 
VM_0_qemu_mask=1-2 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:08:44.141 00:16:14 
vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2 00:08:44.141 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]] 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]' 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@14 -- # bdfs=($(get_nvme_bdfs)) 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@14 -- # get_nvme_bdfs 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1496 -- # bdfs=() 
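The get_nvme_bdfs helper that starts unrolling here reduces to one pipeline: gen_nvme.sh prints a bdev_nvme_attach_controller JSON config for every NVMe device it finds, and jq extracts the PCI addresses. A minimal stand-alone sketch, assuming this run's workspace layout:

# Enumerate NVMe PCI addresses (BDFs) the way autotest_common.sh does below;
# on this node the result is a single device, 0000:5e:00.0.
rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || exit 1   # the trace performs the same empty-list check
printf '%s\n' "${bdfs[@]}"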
00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1496 -- # local bdfs 00:08:44.400 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@15 -- # get_vhost_dir 0 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@15 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock' 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@17 -- # virtio_type=virtio_blk 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_blk != virtio_blk ]] 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@31 -- # vhosttestinit 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']' 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]] 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@46 -- # [[ ! 
-f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@33 -- # vfu_tgt_run 0 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@6 -- # local vhost_name=0 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@9 -- # get_vhost_dir 0 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock' 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@17 -- # vfupid=2044801 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@18 -- # echo 2044801 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@20 -- # echo 'Process pid: 2044801' 00:08:44.401 Process pid: 2044801 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@21 -- # echo 'waiting for app to run...' 00:08:44.401 waiting for app to run... 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@22 -- # waitforlisten 2044801 /root/vhost_test/vhost/0/rpc.sock 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@831 -- # '[' -z 2044801 ']' 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@835 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...' 00:08:44.401 Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock... 
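The vfu_tgt_run step now launches spdk_tgt against a private RPC socket (-m 0xf pins reactors to cores 0-3 and -s 512 caps hugepage memory at 512 MB, matching the EAL parameters logged below) and waits for that socket to answer. A minimal sketch of the same sequence; the polling loop is an illustration, not the exact waitforlisten internals:

# Start the vfio-user target and wait until its RPC socket responds.
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt \
    -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512 &
echo "Process pid: $!"
until /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py \
    -s /root/vhost_test/vhost/0/rpc.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5   # rpc_get_methods is a standard SPDK RPC; any cheap call works here
done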
00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:44.401 00:16:14 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:44.401 [2024-10-09 00:16:14.974843] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization...
[2024-10-09 00:16:14.974935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044801 ]
00:08:44.401 EAL: No free 2048 kB hugepages reported on node 1
00:08:44.660 [2024-10-09 00:16:15.270348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:44.918 [2024-10-09 00:16:15.475980] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:08:44.918 [2024-10-09 00:16:15.476054] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:08:44.918 [2024-10-09 00:16:15.476117] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:44.918 [2024-10-09 00:16:15.476129] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:08:45.872 00:16:16 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:45.872 00:16:16 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@864 -- # return 0
00:08:45.872 00:16:16 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start
00:08:45.872 00:16:16 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:45.872 00:16:16 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:45.872 00:16:16 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@35 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt
00:08:45.872 00:16:16 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@36 -- # rm -rf /root/vhost_test/vms/vfu_tgt
00:08:45.872 00:16:16 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@37 -- # mkdir -p /root/vhost_test/vms/vfu_tgt
00:08:45.872 00:16:16 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@39 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:5e:00.0
00:08:49.156 Nvme0n1
00:08:49.156 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@42 -- # disk_no=1
00:08:49.156 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@43 -- # vm_num=1
00:08:49.156 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt
00:08:49.156 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@46 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]]
00:08:49.156 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_blk_endpoint virtio.1 --bdev-name Nvme0n1 --num-queues=2 --qsize=512 --packed-ring
00:08:49.156 00:16:19
vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@53 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1 00:08:49.156 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@511 -- # xtrace_disable 00:08:49.156 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:08:49.156 INFO: Creating new VM in /root/vhost_test/vms/1 00:08:49.156 INFO: No '--os-mode' parameter provided - using 'snapshot' 00:08:49.156 INFO: TASK MASK: 6-7 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@664 -- # local node_num=0 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@665 -- # local boot_disk_present=false 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@666 -- # notice 'NUMA NODE: 0' 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0' 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0' 00:08:49.415 INFO: NUMA NODE: 0 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@667 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize) 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@668 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind") 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@669 -- # [[ snapshot == snapshot ]] 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@669 -- # cmd+=(-snapshot) 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@670 -- # [[ -n '' ]] 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@671 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait") 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@672 -- # cmd+=(-numa "node,memdev=mem") 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@673 -- # cmd+=(-pidfile "$qemu_pid_file") 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@674 -- # cmd+=(-serial "file:$vm_dir/serial.log") 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@675 -- # cmd+=(-D "$vm_dir/qemu.log") 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios") 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@677 -- # 
cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765") 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@678 -- # cmd+=(-net nic) 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@679 -- # [[ -z '' ]] 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@680 -- # cmd+=(-drive "file=$os,if=none,id=os_disk") 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@681 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0") 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@684 -- # (( 1 == 0 )) 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@686 -- # (( 1 == 0 )) 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@691 -- # for disk in "${disks[@]}" 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@694 -- # IFS=, 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@694 -- # read -r disk disk_type _ 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@695 -- # [[ -z '' ]] 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@695 -- # disk_type=vfio_user_virtio 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@697 -- # case $disk_type in 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@759 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:08:49.415 INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@760 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk") 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@761 -- # [[ 1 == '' ]] 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@773 -- # [[ -n '' ]] 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@778 -- # (( 0 )) 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@779 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh' 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh' 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:49.415 00:16:19 
vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh' 00:08:49.415 INFO: Saving to /root/vhost_test/vms/1/run.sh 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # cat 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@817 -- # chmod +x /root/vhost_test/vms/1/run.sh 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@820 -- # echo 10100 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@821 -- # echo 10101 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@822 -- # echo 10102 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@824 -- # rm -f /root/vhost_test/vms/1/migration_port 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@825 -- # [[ -z '' ]] 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@827 -- # echo 10104 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@828 -- # echo 101 00:08:49.415 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@830 -- # [[ -z '' ]] 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@831 -- # [[ -z '' ]] 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@54 -- # vm_run 1 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@835 -- # local OPTIND optchar vm 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@836 -- # local run_all=false 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@837 -- # local vms_to_run= 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@839 -- # getopts a-: optchar 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@849 -- # false 00:08:49.416 00:16:19 
vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@852 -- # shift 0 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@853 -- # for vm in "$@" 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@854 -- # vm_num_is_valid 1 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@855 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]] 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@859 -- # vms_to_run+=' 1' 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@863 -- # for vm in $vms_to_run 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@864 -- # vm_is_running 1 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@366 -- # return 1 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@869 -- # notice 'running /root/vhost_test/vms/1/run.sh' 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh' 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh' 00:08:49.416 INFO: running /root/vhost_test/vms/1/run.sh 00:08:49.416 00:16:19 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@870 -- # /root/vhost_test/vms/1/run.sh 00:08:49.416 Running VM in /root/vhost_test/vms/1 00:08:49.674 [2024-10-09 00:16:20.167063] tgt_endpoint.c: 165:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully 00:08:49.675 Waiting for QEMU pid file 00:08:51.048 === qemu.log === 00:08:51.048 === qemu.log === 00:08:51.048 00:16:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@55 -- # vm_wait_for_boot 60 1 00:08:51.048 00:16:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@906 -- # assert_number 60 00:08:51.048 00:16:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@274 -- # [[ 60 =~ 
[0-9]+ ]]
00:08:51.048 00:16:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@274 -- # return 0
00:08:51.048 00:16:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@908 -- # xtrace_disable
00:08:51.048 00:16:21 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x
00:08:51.049 INFO: Waiting for VMs to boot
00:08:51.049 INFO: waiting for VM1 (/root/vhost_test/vms/1)
00:09:12.961
00:09:12.961 INFO: VM1 ready
00:09:12.961 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:12.961 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
00:09:13.525 INFO: all VMs ready
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@966 -- # return 0
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@58 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@59 -- # fio_disks=
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@60 -- # qemu_mask_param=VM_1_qemu_mask
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@62 -- # host_name=VM-1-6-7
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@63 -- # vm_exec 1 'hostname VM-1-6-7'
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@331 -- # local vm_num=1
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@332 -- # shift
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket
00:09:13.525 00:16:43 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7'
00:09:13.525 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts.
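Every vm_exec in this harness is plain SSH against the hostfwd port QEMU opened at VM creation (host 10100 to guest 22, password root, both recorded in run.sh above). Spelled out, the hostname call that just completed is:

# vm_exec 1 'hostname VM-1-6-7', expanded; ssh_socket holds the forwarded port.
sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
    -o User=root -p "$(cat /root/vhost_test/vms/1/ssh_socket)" 127.0.0.1 'hostname VM-1-6-7'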
00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@64 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@970 -- # local OPTIND optchar 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@971 -- # local readonly= 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@972 -- # local fio_bin= 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@973 -- # getopts :-: optchar 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@974 -- # case "$optchar" in 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@976 -- # case "$OPTARG" in 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@977 -- # local fio_bin=/usr/src/fio-static/fio 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@973 -- # getopts :-: optchar 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@986 -- # shift 1 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@987 -- # for vm_num in "$@" 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@988 -- # notice 'Starting fio server on VM1' 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1' 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1' 00:09:13.525 INFO: Starting fio server on VM1 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@989 -- # [[ /usr/src/fio-static/fio != '' ]] 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@990 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio' 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@332 -- # shift 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.525 00:16:44 
vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:09:13.525 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio' 00:09:13.784 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@991 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@332 -- # shift 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid 00:09:14.043 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
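vm_start_fio_server has now copied the static fio binary into the guest and started it in server mode; the workload itself never runs on the host, which only drives the guest over the second forwarded port (host 10101 to guest 8765). The guest-side invocation is just:

# Run fio as a daemonized network server inside the VM; the host attaches
# later with --client=127.0.0.1,10101 --remote-config <job file>.
/root/fio --eta=never --server --daemonize=/root/fio.pid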
00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@66 -- # disks_before_restart= 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@67 -- # get_disks virtio_blk 1 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_blk == \v\i\r\t\i\o\_\s\c\s\i ]] 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@26 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]] 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@27 -- # vm_check_blk_location 1 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1028 -- # local 'script=shopt -s nullglob; cd /sys/block; echo vd*' 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1029 -- # echo 'shopt -s nullglob; cd /sys/block; echo vd*' 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1029 -- # vm_exec 1 bash -s 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@332 -- # shift 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:09:14.043 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s 00:09:14.302 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
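Disk discovery for virtio-blk is a nullglob expansion of /sys/block inside the guest; the endpoint created earlier surfaces as vda, captured as SCSI_DISK just below. Stand-alone, vm_check_blk_location amounts to:

# List virtio-blk block devices inside the VM; prints "vda" on this run.
sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
    -o User=root -p 10100 127.0.0.1 'shopt -s nullglob; cd /sys/block; echo vd*'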
00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1029 -- # SCSI_DISK=vda 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1031 -- # [[ -z vda ]] 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@68 -- # disks_before_restart=vda 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@70 -- # printf :/dev/%s vda 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@70 -- # fio_disks=' --vm=1:/dev/vda' 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@71 -- # job_file=default_integrity.job 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@74 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=1:/dev/vda 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1046 -- # local arg 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1047 -- # local job_file= 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1048 -- # local fio_bin= 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1049 -- # vms=() 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1049 -- # local vms 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1050 -- # local out= 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1051 -- # local vm 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1052 -- # local run_server_mode=true 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1053 -- # local run_plugin_mode=false 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1054 -- # local fio_start_cmd 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1055 -- # local fio_output_format=normal 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1056 -- # local fio_gtod_reduce=false 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1057 -- # local wait_for_fio=true 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1059 -- # for arg in "$@" 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1060 -- # case "$arg" in 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1062 -- # local fio_bin=/usr/src/fio-static/fio 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1059 -- # for arg in "$@" 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1060 -- # case "$arg" in 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1061 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1059 -- # for arg in "$@" 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm 
-- vhost/common.sh@1060 -- # case "$arg" in 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1065 -- # local out=/root/vhost_test/fio_results 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1066 -- # mkdir -p /root/vhost_test/fio_results 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1059 -- # for arg in "$@" 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1060 -- # case "$arg" in 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1063 -- # vms+=("${arg#*=}") 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1085 -- # [[ -n /usr/src/fio-static/fio ]] 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1085 -- # [[ ! -r /usr/src/fio-static/fio ]] 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1090 -- # [[ -z /usr/src/fio-static/fio ]] 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1094 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]] 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1099 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never ' 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1101 -- # local job_fname 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1102 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1102 -- # job_fname=default_integrity.job 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1103 -- # log_fname=default_integrity.log 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1104 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal ' 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1107 -- # for vm in "${vms[@]}" 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1108 -- # local vm_num=1 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1109 -- # local vmdisks=/dev/vda 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1111 -- # sed 's@filename=@filename=/dev/vda@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1112 -- # vm_exec 1 'cat > /root/default_integrity.job' 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@332 -- # shift 00:09:14.302 00:16:44 
vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:09:14.302 00:16:44 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job' 00:09:14.571 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1114 -- # false 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1118 -- # vm_exec 1 cat /root/default_integrity.job 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@332 -- # shift 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:09:14.571 00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job 00:09:14.571 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
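The job file the guest just received is the generic default_integrity.job with its filename= line pointed at the discovered disk, rewritten in flight by the sed expression in the trace; the guest echoes the result back below. Equivalently:

# Template the fio job for VM 1 (bind to /dev/vda, tag the description) and push it.
sed 's@filename=@filename=/dev/vda@;s@description=\(.*\)@description=\1 (VM=1)@' \
    /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job |
    sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
        -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job'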
00:09:14.832 [global]
00:09:14.832 blocksize_range=4k-512k
00:09:14.832 iodepth=512
00:09:14.832 iodepth_batch=128
00:09:14.832 iodepth_low=256
00:09:14.832 ioengine=libaio
00:09:14.832 size=1G
00:09:14.832 io_size=4G
00:09:14.832 filename=/dev/vda
00:09:14.832 group_reporting
00:09:14.832 thread
00:09:14.832 numjobs=1
00:09:14.832 direct=1
00:09:14.832 rw=randwrite
00:09:14.832 do_verify=1
00:09:14.832 verify=md5
00:09:14.832 verify_backlog=1024
00:09:14.832 fsync_on_close=1
00:09:14.832 verify_state_save=0
00:09:14.832 [nvme-host]
00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1120 -- # true
00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1121 -- # vm_fio_socket 1
00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1
00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0
00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1
00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/fio_socket
00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1121 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job '
00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1124 -- # true
00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1140 -- # true
00:16:45 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1154 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_integrity.job
00:09:24.800 00:16:54 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1155 -- # sleep 1
00:09:24.800 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1157 -- # [[ normal == \j\s\o\n ]]
00:09:24.800 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1165 -- # [[ ! -n '' ]]
00:09:24.800 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1166 -- # cat /root/vhost_test/fio_results/default_integrity.log
00:09:24.800 hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1
00:09:24.800 nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512
00:09:24.800 Starting 1 thread
00:09:24.800
00:09:24.800 nvme-host: (groupid=0, jobs=1): err= 0: pid=946: Wed Oct 9 00:16:54 2024
00:09:24.800 read: IOPS=1593, BW=267MiB/s (280MB/s)(2048MiB/7661msec)
00:09:24.800 slat (usec): min=28, max=20158, avg=1905.42, stdev=3552.82
00:09:24.800 clat (msec): min=7, max=286, avg=109.39, stdev=58.59
00:09:24.800 lat (msec): min=7, max=287, avg=111.29, stdev=58.01
00:09:24.800 clat percentiles (msec):
00:09:24.800 | 1.00th=[ 11], 5.00th=[ 18], 10.00th=[ 41], 20.00th=[ 64],
00:09:24.800 | 30.00th=[ 73], 40.00th=[ 87], 50.00th=[ 102], 60.00th=[ 116],
00:09:24.800 | 70.00th=[ 136], 80.00th=[ 159], 90.00th=[ 194], 95.00th=[ 222],
00:09:24.800 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 284],
00:09:24.800 | 99.99th=[ 288]
00:09:24.800 write: IOPS=1701, BW=285MiB/s (299MB/s)(2048MiB/7174msec); 0 zone resets
00:09:24.800 slat (usec): min=196, max=59621, avg=17862.37, stdev=12212.17
00:09:24.800 clat (msec): min=6, max=252, avg=100.95, stdev=52.78
00:09:24.800 lat (msec): min=6, max=280, avg=118.81, stdev=55.11
00:09:24.800 clat percentiles (msec):
00:09:24.800 | 1.00th=[ 10], 5.00th=[ 19], 10.00th=[ 31], 20.00th=[ 57],
00:09:24.800 | 30.00th=[ 71], 40.00th=[ 82], 50.00th=[ 95], 60.00th=[ 109],
00:09:24.800 | 70.00th=[ 126], 80.00th=[ 146], 90.00th=[ 174], 95.00th=[ 199],
00:09:24.800 | 99.00th=[ 232], 99.50th=[ 253], 99.90th=[ 253], 99.95th=[ 253],
00:09:24.800 | 99.99th=[ 253]
00:09:24.800 bw ( KiB/s): min=18096, max=472048, per=95.67%, avg=279658.07, stdev=121410.68, samples=15
00:09:24.800 iops : min= 92, max= 2052, avg=1628.00, stdev=592.87, samples=15
00:09:24.800 lat (msec) : 10=0.66%, 20=4.66%, 50=9.58%, 100=36.75%, 250=46.99%
00:09:24.800 lat (msec) : 500=1.36%
00:09:24.800 cpu : usr=94.88%, sys=1.48%, ctx=514, majf=0, minf=34
00:09:24.800 IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1%
00:09:24.800 submit : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6%
00:09:24.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:09:24.800 issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0
00:09:24.800 latency : target=0, window=0, percentile=100.00%, depth=512
00:09:24.800
00:09:24.800 Run status group 0 (all jobs):
00:09:24.800 READ: bw=267MiB/s (280MB/s), 267MiB/s-267MiB/s (280MB/s-280MB/s), io=2048MiB (2147MB), run=7661-7661msec
00:09:24.800 WRITE: bw=285MiB/s (299MB/s), 285MiB/s-285MiB/s (299MB/s-299MB/s), io=2048MiB (2147MB), run=7174-7174msec
00:09:24.800
00:09:24.800 Disk stats (read/write):
00:09:24.800 vda: ios=11938/12141, merge=51/72, ticks=133377/101885, in_queue=235263, util=34.17%
00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@77 -- # notice 'Shutting down virtual machine...'
00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...'
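Before the shutdown trace resumes, note that the results above are internally consistent: 2048 MiB read in 7661 ms is the reported 267 MiB/s, 2048 MiB written in 7174 ms is 285 MiB/s, the md5 verify pass completed with err=0, and vda sat at 34.17% utilization. A quick sanity check of the arithmetic:

# Recompute the fio bandwidth figures from the run times in the log above.
awk 'BEGIN { printf "read %.0f MiB/s, write %.0f MiB/s\n", 2048/7.661, 2048/7.174 }'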
00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...' 00:09:25.058 INFO: Shutting down virtual machine... 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@78 -- # vm_shutdown_all 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@480 -- # local timeo=90 vms vm 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@482 -- # vms=($(vm_list_all)) 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@482 -- # vm_list_all 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@459 -- # vms=() 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@459 -- # local vms 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@460 -- # vms=("$VM_DIR"/+([0-9])) 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@461 -- # (( 1 > 0 )) 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@462 -- # basename --multiple /root/vhost_test/vms/1 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@484 -- # for vm in "${vms[@]}" 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@485 -- # vm_shutdown 1 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@410 -- # vm_num_is_valid 1 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@411 -- # local vm_dir=/root/vhost_test/vms/1 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@412 -- # [[ ! -d /root/vhost_test/vms/1 ]] 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@417 -- # vm_is_running 1 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # vm_pid=2045750 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2045750 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 0 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@424 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1' 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1' 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:25.058 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1' 00:09:25.059 INFO: Shutting down virtual machine /root/vhost_test/vms/1 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@425 -- # set +e 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@426 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\''' 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@332 -- # shift 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:09:25.059 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\''' 00:09:25.059 Warning: Permanently added '[127.0.0.1]:10100' 
(ED25519) to the list of known hosts. 00:09:25.317 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@427 -- # notice 'VM1 is shutting down - wait a while to complete' 00:09:25.317 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete' 00:09:25.317 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:25.317 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:25.317 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:25.317 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:25.317 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete' 00:09:25.318 INFO: VM1 is shutting down - wait a while to complete 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@428 -- # set -e 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@488 -- # notice 'Waiting for VMs to shutdown...' 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...' 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...' 00:09:25.318 INFO: Waiting for VMs to shutdown... 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # vm_pid=2045750 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2045750 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 0 00:09:25.318 00:16:55 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # vm_pid=2045750 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2045750 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 0 00:09:26.251 00:16:56 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:09:27.185 00:16:57 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:09:27.185 00:16:57 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:09:27.185 00:16:57 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:09:27.185 00:16:57 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:09:27.186 00:16:57 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.186 00:16:57 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:27.186 00:16:57 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:09:27.186 00:16:57 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:09:27.186 00:16:57 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@366 -- # return 1 00:09:27.186 00:16:57 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # unset -v 'vms[vm]' 00:09:27.186 00:16:57 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 0 > 0 )) 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( 0 == 0 )) 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # notice 'All VMs successfully shut down' 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down' 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down' 00:09:28.559 INFO: All VMs successfully shut down 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # return 0 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@81 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@511 -- # xtrace_disable 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:09:28.559 WARN: removing existing VM in '/root/vhost_test/vms/1' 00:09:28.559 INFO: Creating new VM in /root/vhost_test/vms/1 00:09:28.559 INFO: No '--os-mode' parameter provided - using 'snapshot' 00:09:28.559 INFO: TASK MASK: 6-7 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@664 -- # local node_num=0 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@665 -- # local boot_disk_present=false 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@666 -- # notice 'NUMA NODE: 0' 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0' 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0' 
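Everything from vm_shutdown through "All VMs successfully shut down" above is one polling loop: vm_is_running treats a VM as alive while its qemu.pid file is readable and the recorded PID still answers kill -0, and the caller retries once per second against a 90-iteration budget (the timeo variable in the trace). Stripped of the xtrace noise, it reduces to roughly this sketch (not the verbatim common.sh source):

    vms=(1)            # VM numbers still expected to shut down
    timeo=90           # seconds to wait, matching the trace above
    while (( timeo-- > 0 && ${#vms[@]} > 0 )); do
        for vm in "${!vms[@]}"; do
            pidfile=/root/vhost_test/vms/${vms[vm]}/qemu.pid
            # alive: pidfile readable and the PID responds to a signal-0 probe
            [[ -r $pidfile ]] && kill -0 "$(<"$pidfile")" 2>/dev/null && continue
            unset -v 'vms[vm]'   # gone: drop it from the wait set
        done
        sleep 1
    done
    (( ${#vms[@]} == 0 )) && echo 'INFO: All VMs successfully shut down'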
00:09:28.559 INFO: NUMA NODE: 0 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@667 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize) 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@668 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind") 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@669 -- # [[ snapshot == snapshot ]] 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@669 -- # cmd+=(-snapshot) 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@670 -- # [[ -n '' ]] 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@671 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait") 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@672 -- # cmd+=(-numa "node,memdev=mem") 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@673 -- # cmd+=(-pidfile "$qemu_pid_file") 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@674 -- # cmd+=(-serial "file:$vm_dir/serial.log") 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@675 -- # cmd+=(-D "$vm_dir/qemu.log") 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@676 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios") 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@677 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765") 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@678 -- # cmd+=(-net nic) 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@679 -- # [[ -z '' ]] 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@680 -- # cmd+=(-drive "file=$os,if=none,id=os_disk") 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@681 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0") 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@684 -- # (( 1 == 0 )) 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@686 -- # (( 1 == 0 )) 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@691 -- # for disk in "${disks[@]}" 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@694 -- # IFS=, 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@694 -- # read -r disk disk_type _ 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@695 -- # [[ -z '' ]] 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@695 -- # disk_type=vfio_user_virtio 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@697 -- # case $disk_type in 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@759 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- 
vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:09:28.559 INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@760 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk") 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@761 -- # [[ 1 == '' ]] 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@773 -- # [[ -n '' ]] 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@778 -- # (( 0 )) 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@779 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh' 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh' 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:28.559 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh' 00:09:28.560 INFO: Saving to /root/vhost_test/vms/1/run.sh 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # cat 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@780 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1 00:09:28.560 00:16:58 
vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@817 -- # chmod +x /root/vhost_test/vms/1/run.sh 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@820 -- # echo 10100 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@821 -- # echo 10101 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@822 -- # echo 10102 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@824 -- # rm -f /root/vhost_test/vms/1/migration_port 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@825 -- # [[ -z '' ]] 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@827 -- # echo 10104 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@828 -- # echo 101 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@830 -- # [[ -z '' ]] 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@831 -- # [[ -z '' ]] 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@82 -- # vm_run 1 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@835 -- # local OPTIND optchar vm 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@836 -- # local run_all=false 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@837 -- # local vms_to_run= 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@839 -- # getopts a-: optchar 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@849 -- # false 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@852 -- # shift 0 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@853 -- # for vm in "$@" 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@854 -- # vm_num_is_valid 1 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@855 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]] 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@859 -- # vms_to_run+=' 1' 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@863 -- # for vm in $vms_to_run 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@864 -- # vm_is_running 1 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@366 -- # return 1 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@869 -- # notice 'running /root/vhost_test/vms/1/run.sh' 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh' 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh' 00:09:28.560 INFO: running /root/vhost_test/vms/1/run.sh 00:09:28.560 00:16:58 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@870 -- # /root/vhost_test/vms/1/run.sh 00:09:28.560 Running VM in /root/vhost_test/vms/1 00:09:28.560 [2024-10-09 00:16:59.092279] tgt_endpoint.c: 165:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully 00:09:28.560 Waiting for QEMU pid file 00:09:29.950 === qemu.log === 00:09:29.950 === qemu.log === 00:09:29.950 00:17:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@83 -- # vm_wait_for_boot 60 1 00:09:29.950 00:17:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@906 -- # assert_number 60 00:09:29.950 00:17:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@274 -- # [[ 60 =~ [0-9]+ ]] 00:09:29.950 00:17:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@274 -- # return 0 00:09:29.950 00:17:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@908 -- # xtrace_disable 00:09:29.950 00:17:00 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:09:29.950 INFO: Waiting for VMs to boot 00:09:29.950 INFO: waiting for VM1 (/root/vhost_test/vms/1) 00:09:51.875 00:09:51.875 INFO: VM1 ready 00:09:51.875 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:09:51.875 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
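The generated run.sh printed above carries a lot of bookkeeping, but the vfio-user core is compact: guest RAM must be a file-backed, shared memory object (share=on) so the SPDK process serving the socket can map it, and the virtio device arrives as vfio-user-pci pointed at a UNIX socket. The forwarded ports follow the 10000 + 100 * vm_num pattern echoed above (10100 SSH, 10101 fio server, 10102 monitor for VM 1). Trimmed to the essentials for VM 1 (serial, VNC, and logging options omitted):

    # trimmed from the run.sh generated above
    /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 \
        -m 1024 --enable-kvm -cpu host -smp 2 -daemonize \
        -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind \
        -numa node,memdev=mem \
        -snapshot \
        -pidfile /root/vhost_test/vms/1/qemu.pid \
        -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic \
        -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk \
        -device ide-hd,drive=os_disk,bootindex=0 \
        -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1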
00:09:52.135 INFO: all VMs ready 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@966 -- # return 0 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@86 -- # disks_after_restart= 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@87 -- # get_disks virtio_blk 1 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_blk == \v\i\r\t\i\o\_\s\c\s\i ]] 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@26 -- # [[ virtio_blk == \v\i\r\t\i\o\_\b\l\k ]] 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@27 -- # vm_check_blk_location 1 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1028 -- # local 'script=shopt -s nullglob; cd /sys/block; echo vd*' 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1029 -- # echo 'shopt -s nullglob; cd /sys/block; echo vd*' 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1029 -- # vm_exec 1 bash -s 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@332 -- # shift 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:09:52.135 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s 00:09:52.135 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1029 -- # SCSI_DISK=vda 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@1031 -- # [[ -z vda ]] 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@88 -- # disks_after_restart=vda 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@90 -- # [[ vda != \v\d\a ]] 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@96 -- # notice 'Shutting down virtual machine...' 
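The post-restart disk check above is a one-liner pushed into the guest: vm_check_blk_location runs a nullglob listing of /sys/block over the forwarded SSH port and expects the same virtio-blk node (vda) that was present before the restart. Standalone, the probe is essentially:

    # enumerate virtio-blk devices inside the guest, as traced above
    sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
        -p 10100 root@127.0.0.1 \
        'shopt -s nullglob; cd /sys/block; echo vd*'
    # prints e.g. "vda"; with nullglob, no attached disk yields empty output
    # rather than a literal "vd*", which is what the [[ -z ... ]] check relies on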
00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...' 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...' 00:09:52.393 INFO: Shutting down virtual machine... 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@97 -- # vm_shutdown_all 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@480 -- # local timeo=90 vms vm 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@482 -- # vms=($(vm_list_all)) 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@482 -- # vm_list_all 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@459 -- # vms=() 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@459 -- # local vms 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@460 -- # vms=("$VM_DIR"/+([0-9])) 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@461 -- # (( 1 > 0 )) 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@462 -- # basename --multiple /root/vhost_test/vms/1 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@484 -- # for vm in "${vms[@]}" 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@485 -- # vm_shutdown 1 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@410 -- # vm_num_is_valid 1 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@411 -- # local vm_dir=/root/vhost_test/vms/1 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@412 -- # [[ ! -d /root/vhost_test/vms/1 ]] 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@417 -- # vm_is_running 1 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:09:52.393 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # vm_pid=2052420 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2052420 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 0 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@424 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1' 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1' 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1' 00:09:52.394 INFO: Shutting down virtual machine /root/vhost_test/vms/1 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@425 -- # set +e 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@426 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\''' 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@332 -- # shift 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:09:52.394 00:17:22 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\''' 00:09:52.394 Warning: Permanently added '[127.0.0.1]:10100' 
(ED25519) to the list of known hosts. 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@427 -- # notice 'VM1 is shutting down - wait a while to complete' 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete' 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete' 00:09:52.652 INFO: VM1 is shutting down - wait a while to complete 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@428 -- # set -e 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@488 -- # notice 'Waiting for VMs to shutdown...' 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...' 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...' 00:09:52.652 INFO: Waiting for VMs to shutdown... 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@370 -- # vm_pid=2052420 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2052420 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@373 -- # return 0 00:09:52.652 00:17:23 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:09:53.588 00:17:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:09:53.588 00:17:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:09:53.588 00:17:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:09:53.588 00:17:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:09:53.588 00:17:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.588 00:17:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@302 -- # return 0 00:09:53.588 00:17:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:09:53.588 00:17:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:09:53.588 00:17:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@366 -- # return 1 00:09:53.588 00:17:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@491 -- # unset -v 'vms[vm]' 00:09:53.588 00:17:24 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:09:54.988 00:17:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 0 > 0 )) 00:09:54.988 00:17:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@496 -- # (( 0 == 0 )) 00:09:54.988 00:17:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@497 -- # notice 'All VMs successfully shut down' 00:09:54.988 00:17:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down' 00:09:54.988 00:17:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:54.988 00:17:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:54.988 00:17:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:54.988 00:17:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:54.988 00:17:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:54.988 00:17:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down' 00:09:54.988 INFO: All VMs successfully shut down 00:09:54.988 00:17:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@498 -- # return 0 00:09:54.988 00:17:25 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@99 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s 
/root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0 00:09:54.988 [2024-10-09 00:17:25.387929] vfu_virtio_blk.c: 384:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1) received event(SPDK_BDEV_EVENT_REMOVE) 00:09:56.436 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@101 -- # vhost_kill 0 00:09:56.436 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@202 -- # local rc=0 00:09:56.436 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0 00:09:56.436 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]] 00:09:56.436 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@210 -- # local vhost_dir 00:09:56.436 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0 00:09:56.436 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0 00:09:56.436 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]] 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@220 -- # local vhost_pid 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@221 -- # vhost_pid=2044801 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 2044801) app' 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 2044801) app' 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 2044801) app' 00:09:56.437 INFO: killing vhost (PID 2044801) app 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@224 -- # kill -INT 2044801 00:09:56.437 00:17:26 
vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit' 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit' 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@61 -- # false 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@70 -- # shift 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit' 00:09:56.437 INFO: sent SIGINT to vhost app - waiting 60 seconds to exit 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i = 0 )) 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 )) 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 2044801 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@228 -- # echo . 00:09:56.437 . 00:09:56.437 00:17:26 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@229 -- # sleep 1 00:09:57.384 00:17:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i++ )) 00:09:57.384 00:17:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 )) 00:09:57.384 00:17:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 2044801 00:09:57.384 00:17:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@228 -- # echo . 00:09:57.384 . 00:09:57.384 00:17:27 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@229 -- # sleep 1 00:09:58.320 00:17:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i++ )) 00:09:58.320 00:17:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 )) 00:09:58.320 00:17:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 2044801 00:09:58.320 00:17:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@228 -- # echo . 00:09:58.320 . 
00:09:58.320 00:17:28 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@229 -- # sleep 1 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i++ )) 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@226 -- # (( i < 60 )) 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@227 -- # kill -0 2044801 00:09:59.271 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (2044801) - No such process 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@231 -- # break 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@234 -- # kill -0 2044801 00:09:59.271 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (2044801) - No such process 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@239 -- # kill -0 2044801 00:09:59.271 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (2044801) - No such process 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@250 -- # timing_exit vhost_kill 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@252 -- # rm -rf /root/vhost_test/vhost/0 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@254 -- # return 0 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- virtio/fio_restart_vm.sh@103 -- # vhosttestfini 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']' 00:09:59.271 00:09:59.271 real 1m15.279s 00:09:59.271 user 4m56.881s 00:09:59.271 sys 0m2.185s 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_blk_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:09:59.271 ************************************ 00:09:59.271 END TEST vfio_user_virtio_blk_restart_vm 00:09:59.271 ************************************ 00:09:59.271 00:17:29 vfio_user_qemu -- vfio_user/vfio_user.sh@18 -- # run_test vfio_user_virtio_scsi_restart_vm /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_scsi 00:09:59.271 00:17:29 vfio_user_qemu -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:59.271 00:17:29 vfio_user_qemu -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.271 00:17:29 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x 00:09:59.271 ************************************ 00:09:59.271 START TEST vfio_user_virtio_scsi_restart_vm 00:09:59.271 ************************************ 00:09:59.271 00:17:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh virtio_scsi 00:09:59.532 * Looking for test storage... 
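vhost_kill, traced just before the test summary, shuts the target down cooperatively: read the PID from vhost.pid, send SIGINT, then probe with kill -0 once per second for up to 60 seconds, printing a dot per retry; the three "No such process" lines above are the probes observing that PID 2044801 exited after about three seconds. As a sketch of the same pattern:

    # SIGINT, then poll for exit with a 60 s budget, as in common.sh above
    vhost_pid=$(</root/vhost_test/vhost/0/vhost.pid)
    kill -INT "$vhost_pid"
    for ((i = 0; i < 60; i++)); do
        kill -0 "$vhost_pid" 2>/dev/null || break   # probe fails once the app is gone
        echo -n .
        sleep 1
    done
    rm -rf /root/vhost_test/vhost/0                 # clean the run directory afterwards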
00:09:59.532 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio 00:09:59.532 00:17:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:59.532 00:17:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1681 -- # lcov --version 00:09:59.532 00:17:29 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@344 -- # case "$op" in 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@345 -- # : 1 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@365 -- # decimal 1 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@353 -- # local d=1 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@355 -- # echo 1 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@366 -- # decimal 2 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@353 -- # local d=2 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@355 -- # echo 2 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scripts/common.sh@368 -- # return 0 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:59.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.532 --rc genhtml_branch_coverage=1 00:09:59.532 --rc genhtml_function_coverage=1 00:09:59.532 --rc genhtml_legend=1 00:09:59.532 --rc geninfo_all_blocks=1 00:09:59.532 --rc geninfo_unexecuted_blocks=1 00:09:59.532 00:09:59.532 ' 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:59.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.532 --rc genhtml_branch_coverage=1 00:09:59.532 --rc genhtml_function_coverage=1 00:09:59.532 --rc genhtml_legend=1 00:09:59.532 --rc geninfo_all_blocks=1 00:09:59.532 --rc geninfo_unexecuted_blocks=1 00:09:59.532 00:09:59.532 ' 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:59.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.532 --rc genhtml_branch_coverage=1 00:09:59.532 --rc genhtml_function_coverage=1 00:09:59.532 --rc genhtml_legend=1 00:09:59.532 --rc geninfo_all_blocks=1 00:09:59.532 --rc geninfo_unexecuted_blocks=1 00:09:59.532 00:09:59.532 ' 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:59.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.532 --rc genhtml_branch_coverage=1 00:09:59.532 --rc genhtml_function_coverage=1 00:09:59.532 --rc genhtml_legend=1 00:09:59.532 --rc geninfo_all_blocks=1 00:09:59.532 --rc geninfo_unexecuted_blocks=1 00:09:59.532 00:09:59.532 ' 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@10 -- # source 
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@6 -- # : 128 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@7 -- # : 512 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@6 -- # : false 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@7 -- # : /root/vhost_test 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@9 -- # : qemu-img 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.. 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@14 -- # VM_PASSWORD=root 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_restart_vm.sh 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]' 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@2 -- # vhost_0_main_core=0 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- 
common/autotest.config@4 -- # VM_0_qemu_mask=1-2 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8 00:09:59.532 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 
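The autotest.config block traced above pins each test VM to a fixed two-core slice, with VMs 0-5 on NUMA node 0 and VMs 6-11 on node 1. A minimal sketch of how such a layout could be generated, assuming the same two-cores-per-VM stride seen in the trace (the loop below is illustrative, not part of the SPDK scripts):

#!/usr/bin/env bash
# VM n gets cores (2n+1)-(2n+2); the first six VMs land on NUMA node 0,
# the remaining six on node 1, mirroring the VM_*_qemu_mask values above.
for n in $(seq 0 11); do
    printf 'VM_%d_qemu_mask=%d-%d\n' "$n" $((2 * n + 1)) $((2 * n + 2))
    printf 'VM_%d_qemu_numa_node=%d\n' "$n" $((n / 6))
done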
00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@244 -- # check_cgroup 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@10 -- # echo 2 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]] 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]' 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@14 -- # bdfs=($(get_nvme_bdfs)) 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@14 -- # get_nvme_bdfs 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm 
-- common/autotest_common.sh@1496 -- # bdfs=() 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1496 -- # local bdfs 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/gen_nvme.sh 00:09:59.533 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@15 -- # get_vhost_dir 0 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@15 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock' 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@17 -- # virtio_type=virtio_scsi 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_scsi != virtio_blk ]] 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@18 -- # [[ virtio_scsi != virtio_scsi ]] 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@31 -- # vhosttestinit 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@37 -- # '[' '' == iso ']' 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]] 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@41 -- # [[ ! -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@46 -- # [[ ! 
-f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@33 -- # vfu_tgt_run 0 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@6 -- # local vhost_name=0 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@9 -- # get_vhost_dir 0 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock' 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@17 -- # vfupid=2057703 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@18 -- # echo 2057703 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@20 -- # echo 'Process pid: 2057703' 00:09:59.793 Process pid: 2057703 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@21 -- # echo 'waiting for app to run...' 00:09:59.793 waiting for app to run... 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@22 -- # waitforlisten 2057703 /root/vhost_test/vhost/0/rpc.sock 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@831 -- # '[' -z 2057703 ']' 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@835 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...' 
00:09:59.793 Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock... 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.793 00:17:30 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:09:59.793 [2024-10-09 00:17:30.314705] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:09:59.793 [2024-10-09 00:17:30.314800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2057703 ] 00:09:59.793 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.052 [2024-10-09 00:17:30.618656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.310 [2024-10-09 00:17:30.820836] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.310 [2024-10-09 00:17:30.820911] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.310 [2024-10-09 00:17:30.820969] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.311 [2024-10-09 00:17:30.820999] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.247 00:17:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.247 00:17:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@864 -- # return 0 00:10:01.247 00:17:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start 00:10:01.247 00:17:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:01.247 00:17:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:10:01.247 00:17:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@35 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt 00:10:01.247 00:17:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@36 -- # rm -rf /root/vhost_test/vms/vfu_tgt 00:10:01.247 00:17:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@37 -- # mkdir -p /root/vhost_test/vms/vfu_tgt 00:10:01.247 00:17:31 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@39 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:5e:00.0 00:10:04.542 Nvme0n1 00:10:04.542 00:17:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@42 -- # disk_no=1 00:10:04.542 00:17:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@43 -- # vm_num=1 00:10:04.542 00:17:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt 00:10:04.542 00:17:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@46 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\b\l\k ]] 00:10:04.542 00:17:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@48 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]] 00:10:04.542 00:17:34 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- 
virtio/fio_restart_vm.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_scsi_endpoint virtio.1 --num-io-queues=2 --qsize=512 --packed-ring 00:10:04.542 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@50 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_scsi_add_target virtio.1 --scsi-target-num=0 --bdev-name Nvme0n1 00:10:04.801 [2024-10-09 00:17:35.230969] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: virtio.1: added SCSI target 0 using bdev 'Nvme0n1' 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@53 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@511 -- # xtrace_disable 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:10:04.801 WARN: removing existing VM in '/root/vhost_test/vms/1' 00:10:04.801 INFO: Creating new VM in /root/vhost_test/vms/1 00:10:04.801 INFO: No '--os-mode' parameter provided - using 'snapshot' 00:10:04.801 INFO: TASK MASK: 6-7 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@664 -- # local node_num=0 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@665 -- # local boot_disk_present=false 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@666 -- # notice 'NUMA NODE: 0' 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0' 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0' 00:10:04.801 INFO: NUMA NODE: 0 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@667 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize) 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@668 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind") 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@669 -- # [[ snapshot == snapshot ]] 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@669 -- # cmd+=(-snapshot) 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@670 -- # [[ -n '' ]] 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@671 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait") 00:10:04.801 00:17:35 
vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@672 -- # cmd+=(-numa "node,memdev=mem") 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@673 -- # cmd+=(-pidfile "$qemu_pid_file") 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@674 -- # cmd+=(-serial "file:$vm_dir/serial.log") 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@675 -- # cmd+=(-D "$vm_dir/qemu.log") 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios") 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@677 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765") 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@678 -- # cmd+=(-net nic) 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@679 -- # [[ -z '' ]] 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@680 -- # cmd+=(-drive "file=$os,if=none,id=os_disk") 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@681 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0") 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@684 -- # (( 1 == 0 )) 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@686 -- # (( 1 == 0 )) 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@691 -- # for disk in "${disks[@]}" 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@694 -- # IFS=, 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@694 -- # read -r disk disk_type _ 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@695 -- # [[ -z '' ]] 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@695 -- # disk_type=vfio_user_virtio 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@697 -- # case $disk_type in 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@759 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:10:04.801 INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1 00:10:04.801 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@760 -- # cmd+=(-device 
"vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk") 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@761 -- # [[ 1 == '' ]] 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@773 -- # [[ -n '' ]] 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@778 -- # (( 0 )) 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@779 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh' 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh' 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh' 00:10:04.802 INFO: Saving to /root/vhost_test/vms/1/run.sh 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # cat 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@817 -- # chmod +x /root/vhost_test/vms/1/run.sh 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@820 -- # echo 10100 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@821 -- # echo 10101 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@822 -- # echo 10102 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@824 -- # rm -f /root/vhost_test/vms/1/migration_port 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@825 -- # [[ -z '' ]] 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@827 -- # echo 10104 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@828 -- # echo 101 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@830 -- # 
[[ -z '' ]] 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@831 -- # [[ -z '' ]] 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@54 -- # vm_run 1 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@835 -- # local OPTIND optchar vm 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@836 -- # local run_all=false 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@837 -- # local vms_to_run= 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@839 -- # getopts a-: optchar 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@849 -- # false 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@852 -- # shift 0 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@853 -- # for vm in "$@" 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@854 -- # vm_num_is_valid 1 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@855 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]] 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@859 -- # vms_to_run+=' 1' 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@863 -- # for vm in $vms_to_run 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@864 -- # vm_is_running 1 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@366 -- # return 1 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@869 -- # notice 'running /root/vhost_test/vms/1/run.sh' 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh' 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh' 00:10:04.802 INFO: running /root/vhost_test/vms/1/run.sh 00:10:04.802 00:17:35 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@870 -- # /root/vhost_test/vms/1/run.sh 00:10:04.802 Running VM in /root/vhost_test/vms/1 00:10:05.060 [2024-10-09 00:17:35.608518] tgt_endpoint.c: 165:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully 00:10:05.060 Waiting for QEMU pid file 00:10:06.443 === qemu.log === 00:10:06.443 === qemu.log === 00:10:06.443 00:17:36 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@55 -- # vm_wait_for_boot 60 1 00:10:06.443 00:17:36 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@906 -- # assert_number 60 00:10:06.443 00:17:36 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@274 -- # [[ 60 =~ [0-9]+ ]] 00:10:06.443 00:17:36 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@274 -- # return 0 00:10:06.443 00:17:36 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@908 -- # xtrace_disable 00:10:06.443 00:17:36 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:10:06.443 INFO: Waiting for VMs to boot 00:10:06.443 INFO: waiting for VM1 (/root/vhost_test/vms/1) 00:10:16.424 [2024-10-09 00:17:46.826505] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:28.630 00:10:28.630 INFO: VM1 ready 00:10:28.630 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:10:28.630 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
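The repeated known-hosts warnings above come from vm_wait_for_boot probing the guest's forwarded SSH port until it answers. A hedged sketch of that readiness loop, assuming the port and credentials visible in the trace (the exact retry logic inside vhost/common.sh may differ):

# Probe VM1 over its hostfwd'ed SSH port (10100) until a trivial command
# succeeds or the 60-second budget passed to vm_wait_for_boot runs out.
timeout=60
until sshpass -p root ssh -o UserKnownHostsFile=/dev/null \
        -o StrictHostKeyChecking=no -p 10100 root@127.0.0.1 true 2>/dev/null; do
    (( timeout-- > 0 )) || { echo 'VM1 failed to boot in time' >&2; exit 1; }
    sleep 1
done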
00:10:28.630 INFO: all VMs ready 00:10:28.889 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@966 -- # return 0 00:10:28.889 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@58 -- # fio_bin=--fio-bin=/usr/src/fio-static/fio 00:10:28.889 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@59 -- # fio_disks= 00:10:28.889 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@60 -- # qemu_mask_param=VM_1_qemu_mask 00:10:28.889 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@62 -- # host_name=VM-1-6-7 00:10:28.889 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@63 -- # vm_exec 1 'hostname VM-1-6-7' 00:10:28.889 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:10:28.889 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.889 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:28.889 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:10:28.889 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@332 -- # shift 00:10:28.889 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'hostname VM-1-6-7' 00:10:28.890 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@64 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@970 -- # local OPTIND optchar 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@971 -- # local readonly= 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@972 -- # local fio_bin= 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@973 -- # getopts :-: optchar 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@974 -- # case "$optchar" in 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@976 -- # case "$OPTARG" in 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@977 -- # local fio_bin=/usr/src/fio-static/fio 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@973 -- # getopts :-: optchar 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@986 -- # shift 1 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@987 -- # for vm_num in "$@" 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@988 -- # notice 'Starting fio server on VM1' 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1' 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1' 00:10:28.890 INFO: Starting fio server on VM1 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@989 -- # [[ /usr/src/fio-static/fio != '' ]] 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@990 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio' 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@332 -- # shift 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.890 00:17:59 
vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:10:28.890 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio' 00:10:29.148 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:10:29.148 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@991 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid 00:10:29.148 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:10:29.148 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.148 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:29.148 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:10:29.149 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@332 -- # shift 00:10:29.149 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:10:29.149 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:10:29.149 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.149 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:29.149 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:10:29.149 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:10:29.407 00:17:59 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid 00:10:29.407 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
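The two vm_exec calls above bring up fio inside the guest: the host's static fio binary is streamed over SSH into /root/fio, then started in server mode so the host can drive jobs against it remotely. Condensed to its essentials, reusing the vm_exec sketch from earlier (paths match the trace):

# Copy the static fio build into the guest, then launch it as a
# daemonized fio server that waits for remote job submissions.
vm_exec 1 'cat > /root/fio; chmod +x /root/fio' < /usr/src/fio-static/fio
vm_exec 1 '/root/fio --eta=never --server --daemonize=/root/fio.pid'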
00:10:29.407 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@66 -- # disks_before_restart= 00:10:29.407 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@67 -- # get_disks virtio_scsi 1 00:10:29.407 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]] 00:10:29.407 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@25 -- # vm_check_scsi_location 1 00:10:29.407 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1007 -- # local 'script=shopt -s nullglob; 00:10:29.407 for entry in /sys/block/sd*; do 00:10:29.407 disk_type="$(cat $entry/device/vendor)"; 00:10:29.407 if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then 00:10:29.407 fname=$(basename $entry); 00:10:29.407 echo -n " $fname"; 00:10:29.407 fi; 00:10:29.407 done' 00:10:29.407 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1009 -- # echo 'shopt -s nullglob; 00:10:29.407 for entry in /sys/block/sd*; do 00:10:29.407 disk_type="$(cat $entry/device/vendor)"; 00:10:29.407 if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then 00:10:29.407 fname=$(basename $entry); 00:10:29.407 echo -n " $fname"; 00:10:29.407 fi; 00:10:29.407 done' 00:10:29.407 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1009 -- # vm_exec 1 bash -s 00:10:29.407 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:10:29.407 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.407 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:29.407 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:10:29.407 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@332 -- # shift 00:10:29.408 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:10:29.408 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:10:29.408 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.408 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:29.408 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:10:29.408 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:10:29.408 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s 00:10:29.667 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
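The quoted script that vm_check_scsi_location pipes into the guest via 'bash -s' is easier to read as a standalone file; the logic is unchanged from the trace:

#!/usr/bin/env bash
# List SCSI disks whose vendor string marks them as test devices
# (INTEL, RAWSCSI, or LIO-ORG), printing names like " sdb".
shopt -s nullglob
for entry in /sys/block/sd*; do
    disk_type="$(cat $entry/device/vendor)"
    if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then
        fname=$(basename $entry)
        echo -n " $fname"
    fi
done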
00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1009 -- # SCSI_DISK=' sdb' 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1011 -- # [[ -z sdb ]] 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@68 -- # disks_before_restart=' sdb' 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@70 -- # printf :/dev/%s sdb 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@70 -- # fio_disks=' --vm=1:/dev/sdb' 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@71 -- # job_file=default_integrity.job 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@74 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job --out=/root/vhost_test/fio_results --vm=1:/dev/sdb 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1046 -- # local arg 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1047 -- # local job_file= 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1048 -- # local fio_bin= 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1049 -- # vms=() 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1049 -- # local vms 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1050 -- # local out= 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1051 -- # local vm 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1052 -- # local run_server_mode=true 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1053 -- # local run_plugin_mode=false 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1054 -- # local fio_start_cmd 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1055 -- # local fio_output_format=normal 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1056 -- # local fio_gtod_reduce=false 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1057 -- # local wait_for_fio=true 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1059 -- # for arg in "$@" 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1060 -- # case "$arg" in 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1062 -- # local fio_bin=/usr/src/fio-static/fio 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1059 -- # for arg in "$@" 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1060 -- # case "$arg" in 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1061 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1059 -- # for arg in "$@" 00:10:29.667 00:18:00 
vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1060 -- # case "$arg" in 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1065 -- # local out=/root/vhost_test/fio_results 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1066 -- # mkdir -p /root/vhost_test/fio_results 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1059 -- # for arg in "$@" 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1060 -- # case "$arg" in 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1063 -- # vms+=("${arg#*=}") 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1085 -- # [[ -n /usr/src/fio-static/fio ]] 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1085 -- # [[ ! -r /usr/src/fio-static/fio ]] 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1090 -- # [[ -z /usr/src/fio-static/fio ]] 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1094 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job ]] 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1099 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never ' 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1101 -- # local job_fname 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1102 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1102 -- # job_fname=default_integrity.job 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1103 -- # log_fname=default_integrity.log 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1104 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal ' 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1107 -- # for vm in "${vms[@]}" 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1108 -- # local vm_num=1 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1109 -- # local vmdisks=/dev/sdb 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1111 -- # sed 's@filename=@filename=/dev/sdb@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1112 -- # vm_exec 1 'cat > /root/default_integrity.job' 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:10:29.667 00:18:00 
vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@332 -- # shift 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:10:29.667 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_integrity.job' 00:10:29.926 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1114 -- # false 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1118 -- # vm_exec 1 cat /root/default_integrity.job 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@332 -- # shift 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:10:29.926 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_integrity.job 00:10:30.183 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
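run_fio customizes the shared job template per VM with the single sed pass traced above, pointing filename= at the discovered /dev/sdb and tagging the description with the VM number, before pushing it into the guest. As an equivalent one-liner, reusing the vm_exec sketch from earlier:

# Rewrite the template for VM1's disk and stream it into the guest.
sed 's@filename=@filename=/dev/sdb@;s@description=\(.*\)@description=\1 (VM=1)@' \
    /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_integrity.job \
    | vm_exec 1 'cat > /root/default_integrity.job'

The resulting job file, read back from the guest, follows.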
00:10:30.183 [global] 00:10:30.183 blocksize_range=4k-512k 00:10:30.183 iodepth=512 00:10:30.183 iodepth_batch=128 00:10:30.183 iodepth_low=256 00:10:30.183 ioengine=libaio 00:10:30.183 size=1G 00:10:30.183 io_size=4G 00:10:30.183 filename=/dev/sdb 00:10:30.183 group_reporting 00:10:30.183 thread 00:10:30.183 numjobs=1 00:10:30.183 direct=1 00:10:30.183 rw=randwrite 00:10:30.183 do_verify=1 00:10:30.183 verify=md5 00:10:30.183 verify_backlog=1024 00:10:30.183 fsync_on_close=1 00:10:30.183 verify_state_save=0 00:10:30.183 [nvme-host] 00:10:30.183 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1120 -- # true 00:10:30.183 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1121 -- # vm_fio_socket 1 00:10:30.183 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@319 -- # vm_num_is_valid 1 00:10:30.183 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.183 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:30.183 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1 00:10:30.183 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/fio_socket 00:10:30.183 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1121 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_integrity.job ' 00:10:30.183 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1124 -- # true 00:10:30.183 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1140 -- # true 00:10:30.183 00:18:00 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1154 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_integrity.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_integrity.job 00:10:31.558 [2024-10-09 00:18:01.826915] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:35.746 [2024-10-09 00:18:05.695796] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:35.746 [2024-10-09 00:18:05.935549] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:39.034 [2024-10-09 00:18:09.541982] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:39.291 [2024-10-09 00:18:09.795935] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:39.291 00:18:09 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1155 -- # sleep 1 00:10:40.226 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1157 -- # [[ normal == \j\s\o\n ]] 00:10:40.226 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1165 -- # [[ ! 
-n '' ]] 00:10:40.226 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1166 -- # cat /root/vhost_test/fio_results/default_integrity.log 00:10:40.226 hostname=VM-1-6-7, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1 00:10:40.226 nvme-host: (g=0): rw=randwrite, bs=(R) 4096B-512KiB, (W) 4096B-512KiB, (T) 4096B-512KiB, ioengine=libaio, iodepth=512 00:10:40.226 Starting 1 thread 00:10:40.226 00:10:40.226 nvme-host: (groupid=0, jobs=1): err= 0: pid=954: Wed Oct 9 00:18:09 2024 00:10:40.226 read: IOPS=1579, BW=265MiB/s (278MB/s)(2048MiB/7730msec) 00:10:40.226 slat (usec): min=36, max=22420, avg=2416.40, stdev=4187.31 00:10:40.226 clat (msec): min=7, max=282, avg=110.39, stdev=58.09 00:10:40.226 lat (msec): min=8, max=284, avg=112.80, stdev=57.65 00:10:40.226 clat percentiles (msec): 00:10:40.226 | 1.00th=[ 11], 5.00th=[ 21], 10.00th=[ 43], 20.00th=[ 65], 00:10:40.226 | 30.00th=[ 75], 40.00th=[ 89], 50.00th=[ 103], 60.00th=[ 116], 00:10:40.226 | 70.00th=[ 136], 80.00th=[ 161], 90.00th=[ 194], 95.00th=[ 222], 00:10:40.226 | 99.00th=[ 262], 99.50th=[ 271], 99.90th=[ 279], 99.95th=[ 284], 00:10:40.226 | 99.99th=[ 284] 00:10:40.226 write: IOPS=1689, BW=283MiB/s (297MB/s)(2048MiB/7227msec); 0 zone resets 00:10:40.226 slat (usec): min=233, max=62103, avg=18054.84, stdev=12282.89 00:10:40.226 clat (msec): min=6, max=249, avg=101.33, stdev=52.67 00:10:40.226 lat (msec): min=7, max=275, avg=119.38, stdev=55.13 00:10:40.226 clat percentiles (msec): 00:10:40.226 | 1.00th=[ 11], 5.00th=[ 20], 10.00th=[ 30], 20.00th=[ 56], 00:10:40.226 | 30.00th=[ 71], 40.00th=[ 81], 50.00th=[ 96], 60.00th=[ 109], 00:10:40.226 | 70.00th=[ 126], 80.00th=[ 144], 90.00th=[ 174], 95.00th=[ 199], 00:10:40.226 | 99.00th=[ 232], 99.50th=[ 249], 99.90th=[ 249], 99.95th=[ 249], 00:10:40.226 | 99.99th=[ 249] 00:10:40.226 bw ( KiB/s): min=20512, max=472048, per=96.36%, avg=279620.27, stdev=123367.37, samples=15 00:10:40.226 iops : min= 102, max= 2048, avg=1627.73, stdev=638.05, samples=15 00:10:40.226 lat (msec) : 10=0.90%, 20=4.25%, 50=9.36%, 100=36.71%, 250=47.78% 00:10:40.226 lat (msec) : 500=1.00% 00:10:40.226 cpu : usr=93.82%, sys=1.75%, ctx=873, majf=0, minf=34 00:10:40.226 IO depths : 1=0.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.5%, >=64=99.1% 00:10:40.226 submit : 0=0.0%, 4=0.0%, 8=1.2%, 16=0.0%, 32=0.0%, 64=19.2%, >=64=79.6% 00:10:40.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.226 issued rwts: total=12208,12208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.226 latency : target=0, window=0, percentile=100.00%, depth=512 00:10:40.226 00:10:40.226 Run status group 0 (all jobs): 00:10:40.226 READ: bw=265MiB/s (278MB/s), 265MiB/s-265MiB/s (278MB/s-278MB/s), io=2048MiB (2147MB), run=7730-7730msec 00:10:40.226 WRITE: bw=283MiB/s (297MB/s), 283MiB/s-283MiB/s (297MB/s-297MB/s), io=2048MiB (2147MB), run=7227-7227msec 00:10:40.226 00:10:40.226 Disk stats (read/write): 00:10:40.226 sdb: ios=11872/12173, merge=56/85, ticks=133896/103017, in_queue=236914, util=35.11% 00:10:40.226 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@77 -- # notice 'Shutting down virtual machine...' 00:10:40.226 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...' 
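[editor's note] fio runs in client/server mode here: the guest image starts an fio server on port 8765, QEMU forwards host port 10101 to it (hostfwd=tcp::10101-:8765 in the run.sh printed further down), and the host-side binary drives the job remotely. The launch reduces to the command in the trace at @1154:

    # Host-side fio drives the guest's fio server through the forwarded port.
    # All paths and the port are the ones printed in the trace above.
    /usr/src/fio-static/fio --eta=never \
        --output=/root/vhost_test/fio_results/default_integrity.log \
        --output-format=normal \
        --client=127.0.0.1,10101 --remote-config /root/default_integrity.job

Because the job sets do_verify=1 and verify=md5, the run both exercises and checks the vfio-user virtio-scsi path; the summary above is what @1166 cats back from default_integrity.log.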
00:10:40.226 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:40.226 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:40.226 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:40.226 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:40.226 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...' 00:10:40.227 INFO: Shutting down virtual machine... 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@78 -- # vm_shutdown_all 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@480 -- # local timeo=90 vms vm 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@482 -- # vms=($(vm_list_all)) 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@482 -- # vm_list_all 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@459 -- # vms=() 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@459 -- # local vms 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@460 -- # vms=("$VM_DIR"/+([0-9])) 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@461 -- # (( 1 > 0 )) 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@462 -- # basename --multiple /root/vhost_test/vms/1 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@484 -- # for vm in "${vms[@]}" 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@485 -- # vm_shutdown 1 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@410 -- # vm_num_is_valid 1 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@411 -- # local vm_dir=/root/vhost_test/vms/1 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@412 -- # [[ ! -d /root/vhost_test/vms/1 ]] 00:10:40.227 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@417 -- # vm_is_running 1 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # vm_pid=2058506 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2058506 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 0 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@424 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1' 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1' 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1' 00:10:40.486 INFO: Shutting down virtual machine /root/vhost_test/vms/1 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@425 -- # set +e 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@426 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\''' 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@332 -- # shift 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:10:40.486 00:18:10 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\''' 00:10:40.486 Warning: Permanently 
added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:10:40.745 Connection to 127.0.0.1 closed by remote host. 00:10:40.745 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@426 -- # true 00:10:40.745 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@427 -- # notice 'VM1 is shutting down - wait a while to complete' 00:10:40.745 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete' 00:10:40.745 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:40.745 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:40.745 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:40.745 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:40.745 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:40.745 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete' 00:10:40.745 INFO: VM1 is shutting down - wait a while to complete 00:10:40.745 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@428 -- # set -e 00:10:40.745 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@488 -- # notice 'Waiting for VMs to shutdown...' 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...' 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...' 00:10:40.746 INFO: Waiting for VMs to shutdown... 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@365 -- # [[ ! 
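[editor's note] vm_shutdown_all then polls each VM once per second: vm_is_running (@362-373) treats a VM as up while its qemu.pid file exists and the recorded pid still answers kill -0. A sketch of the wait loop the following trace iterations implement, with illustrative variable names:

    # Wait up to 90s (local timeo=90 in the trace) for VM 1's QEMU to exit.
    timeo=90
    pid_file=/root/vhost_test/vms/1/qemu.pid
    while (( timeo-- > 0 )); do
        [[ -r $pid_file ]] || break                             # pid file removed: VM is gone
        /bin/kill -0 "$(cat "$pid_file")" 2>/dev/null || break  # process exited
        sleep 1
    done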
-r /root/vhost_test/vms/1/qemu.pid ]] 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # vm_pid=2058506 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2058506 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 0 00:10:40.746 00:18:11 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # vm_pid=2058506 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2058506 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 0 00:10:41.682 00:18:12 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:10:43.063 00:18:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:10:43.063 00:18:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:10:43.063 00:18:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:10:43.063 00:18:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:10:43.063 00:18:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.063 00:18:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:43.063 00:18:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:10:43.063 00:18:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:10:43.063 00:18:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@366 -- # return 1 00:10:43.063 00:18:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # unset -v 'vms[vm]' 00:10:43.063 00:18:13 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:10:43.997 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 0 > 0 )) 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( 0 == 0 )) 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # notice 'All VMs successfully shut down' 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down' 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down' 00:10:43.998 INFO: All VMs successfully shut down 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # return 0 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@81 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@511 -- # xtrace_disable 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:10:43.998 WARN: removing existing VM in '/root/vhost_test/vms/1' 00:10:43.998 INFO: Creating new VM in /root/vhost_test/vms/1 00:10:43.998 INFO: No '--os-mode' parameter provided - using 'snapshot' 00:10:43.998 INFO: TASK MASK: 6-7 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@664 -- # local node_num=0 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@665 -- # local boot_disk_present=false 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@666 -- # notice 'NUMA NODE: 0' 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0' 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo 
-e 'INFO: NUMA NODE: 0' 00:10:43.998 INFO: NUMA NODE: 0 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@667 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize) 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@668 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind") 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@669 -- # [[ snapshot == snapshot ]] 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@669 -- # cmd+=(-snapshot) 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@670 -- # [[ -n '' ]] 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@671 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait") 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@672 -- # cmd+=(-numa "node,memdev=mem") 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@673 -- # cmd+=(-pidfile "$qemu_pid_file") 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@674 -- # cmd+=(-serial "file:$vm_dir/serial.log") 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@675 -- # cmd+=(-D "$vm_dir/qemu.log") 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@676 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios") 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@677 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765") 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@678 -- # cmd+=(-net nic) 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@679 -- # [[ -z '' ]] 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@680 -- # cmd+=(-drive "file=$os,if=none,id=os_disk") 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@681 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0") 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@684 -- # (( 1 == 0 )) 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@686 -- # (( 1 == 0 )) 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@691 -- # for disk in "${disks[@]}" 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@694 -- # IFS=, 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@694 -- # read -r disk disk_type _ 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@695 -- # [[ -z '' ]] 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@695 -- # disk_type=vfio_user_virtio 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@697 -- # case $disk_type in 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@759 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:10:43.998 00:18:14 
vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:10:43.998 INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@760 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk") 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@761 -- # [[ 1 == '' ]] 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@773 -- # [[ -n '' ]] 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@778 -- # (( 0 )) 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@779 -- # notice 'Saving to /root/vhost_test/vms/1/run.sh' 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh' 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh' 00:10:43.998 INFO: Saving to /root/vhost_test/vms/1/run.sh 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # cat 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@780 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device 
vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@817 -- # chmod +x /root/vhost_test/vms/1/run.sh 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@820 -- # echo 10100 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@821 -- # echo 10101 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@822 -- # echo 10102 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@824 -- # rm -f /root/vhost_test/vms/1/migration_port 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@825 -- # [[ -z '' ]] 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@827 -- # echo 10104 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@828 -- # echo 101 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@830 -- # [[ -z '' ]] 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@831 -- # [[ -z '' ]] 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@82 -- # vm_run 1 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@835 -- # local OPTIND optchar vm 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@836 -- # local run_all=false 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@837 -- # local vms_to_run= 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@839 -- # getopts a-: optchar 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@849 -- # false 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@852 -- # shift 0 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@853 -- # for vm in "$@" 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@854 -- # vm_num_is_valid 1 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@855 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]] 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@859 -- # vms_to_run+=' 1' 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@863 -- # for vm in $vms_to_run 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@864 -- # vm_is_running 1 00:10:43.998 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@366 -- # return 1 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@869 -- # notice 'running /root/vhost_test/vms/1/run.sh' 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh' 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh' 00:10:43.999 INFO: running /root/vhost_test/vms/1/run.sh 00:10:43.999 00:18:14 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@870 -- # /root/vhost_test/vms/1/run.sh 00:10:43.999 Running VM in /root/vhost_test/vms/1 00:10:44.257 [2024-10-09 00:18:14.638018] tgt_endpoint.c: 165:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully 00:10:44.257 Waiting for QEMU pid file 00:10:45.190 === qemu.log === 00:10:45.190 === qemu.log === 00:10:45.190 00:18:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@83 -- # vm_wait_for_boot 60 1 00:10:45.190 00:18:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@906 -- # assert_number 60 00:10:45.190 00:18:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@274 -- # [[ 60 =~ [0-9]+ ]] 00:10:45.190 00:18:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@274 -- # return 0 00:10:45.190 00:18:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@908 -- # xtrace_disable 00:10:45.190 00:18:15 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:10:45.190 INFO: Waiting for VMs to boot 00:10:45.190 INFO: waiting for VM1 (/root/vhost_test/vms/1) 00:10:55.237 [2024-10-09 00:18:25.780999] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:07.459 00:11:07.459 INFO: VM1 ready 00:11:07.459 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:11:07.459 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
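[editor's note] Stripped of its logging and display plumbing (monitor, serial, seabios chardev, VNC), the run.sh printed above launches QEMU as below; -snapshot keeps the shared qcow2 immutable across runs, and the share=on hugepage memory backend is what lets the out-of-process vfio-user target map guest RAM. Values are verbatim from the log:

    # Essentials of /root/vhost_test/vms/1/run.sh for the restarted VM.
    taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 \
        -m 1024 --enable-kvm -cpu host -smp 2 -daemonize -snapshot \
        -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind \
        -numa node,memdev=mem \
        -pidfile /root/vhost_test/vms/1/qemu.pid \
        -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic \
        -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk \
        -device ide-hd,drive=os_disk,bootindex=0 \
        -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1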
00:11:07.716 INFO: all VMs ready 00:11:07.716 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@966 -- # return 0 00:11:07.716 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@86 -- # disks_after_restart= 00:11:07.716 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@87 -- # get_disks virtio_scsi 1 00:11:07.716 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@24 -- # [[ virtio_scsi == \v\i\r\t\i\o\_\s\c\s\i ]] 00:11:07.716 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@25 -- # vm_check_scsi_location 1 00:11:07.716 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1007 -- # local 'script=shopt -s nullglob; 00:11:07.716 for entry in /sys/block/sd*; do 00:11:07.716 disk_type="$(cat $entry/device/vendor)"; 00:11:07.716 if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then 00:11:07.716 fname=$(basename $entry); 00:11:07.716 echo -n " $fname"; 00:11:07.716 fi; 00:11:07.716 done' 00:11:07.716 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1009 -- # echo 'shopt -s nullglob; 00:11:07.716 for entry in /sys/block/sd*; do 00:11:07.716 disk_type="$(cat $entry/device/vendor)"; 00:11:07.716 if [[ $disk_type == INTEL* ]] || [[ $disk_type == RAWSCSI* ]] || [[ $disk_type == LIO-ORG* ]]; then 00:11:07.716 fname=$(basename $entry); 00:11:07.716 echo -n " $fname"; 00:11:07.716 fi; 00:11:07.716 done' 00:11:07.716 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1009 -- # vm_exec 1 bash -s 00:11:07.716 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:11:07.716 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.717 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:11:07.717 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:11:07.717 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@332 -- # shift 00:11:07.717 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:11:07.717 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:11:07.717 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.717 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:11:07.717 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:11:07.717 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:11:07.717 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 bash -s 00:11:07.717 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
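[editor's note] vm_check_scsi_location ships the script shown above into the guest over ssh (vm_exec 1 bash -s). As a standalone, commented version of the same logic:

    #!/usr/bin/env bash
    # Enumerate SCSI disks whose vendor string marks them as test devices
    # (INTEL*, RAWSCSI*, LIO-ORG*), exactly as the in-guest script above does.
    shopt -s nullglob
    for entry in /sys/block/sd*; do
        disk_type="$(cat "$entry/device/vendor")"
        if [[ $disk_type == INTEL* || $disk_type == RAWSCSI* || $disk_type == LIO-ORG* ]]; then
            echo -n " $(basename "$entry")"
        fi
    done

Here it prints ' sdb', matching the disk list captured before the restart; fio_restart_vm.sh@90 just below would fail the test if the two lists differed.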
00:11:07.983 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1009 -- # SCSI_DISK=' sdb' 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@1011 -- # [[ -z sdb ]] 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@88 -- # disks_after_restart=' sdb' 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@90 -- # [[ sdb != \ \s\d\b ]] 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@96 -- # notice 'Shutting down virtual machine...' 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...' 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...' 00:11:07.984 INFO: Shutting down virtual machine... 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@97 -- # vm_shutdown_all 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@480 -- # local timeo=90 vms vm 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@482 -- # vms=($(vm_list_all)) 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@482 -- # vm_list_all 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@459 -- # vms=() 00:11:07.984 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@459 -- # local vms 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@460 -- # vms=("$VM_DIR"/+([0-9])) 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@461 -- # (( 1 > 0 )) 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@462 -- # basename --multiple /root/vhost_test/vms/1 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@484 -- # for vm in "${vms[@]}" 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@485 -- # vm_shutdown 1 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@410 -- # vm_num_is_valid 1 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@411 -- # local vm_dir=/root/vhost_test/vms/1 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@412 -- # [[ ! 
-d /root/vhost_test/vms/1 ]] 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@417 -- # vm_is_running 1 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # vm_pid=2065639 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2065639 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 0 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@424 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1' 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1' 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1' 00:11:07.985 INFO: Shutting down virtual machine /root/vhost_test/vms/1 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@425 -- # set +e 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@426 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\''' 00:11:07.985 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:11:07.986 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.986 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:11:07.986 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@331 -- # local vm_num=1 00:11:07.986 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@332 -- # shift 00:11:07.986 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:11:07.986 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:11:07.986 00:18:38 
vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.986 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:11:07.986 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:11:07.986 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:11:07.986 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\''' 00:11:07.986 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@427 -- # notice 'VM1 is shutting down - wait a while to complete' 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete' 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete' 00:11:08.244 INFO: VM1 is shutting down - wait a while to complete 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@428 -- # set -e 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@488 -- # notice 'Waiting for VMs to shutdown...' 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...' 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...' 00:11:08.244 INFO: Waiting for VMs to shutdown... 
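[editor's note] The graceful power-off is again a remote command; the harness wraps it in set +e (@425) because the guest can drop the ssh session before the command returns, as 'Connection to 127.0.0.1 closed by remote host' showed during the earlier shutdown. A sketch, where the error-tolerant '|| true' tail is the illustrative part:

    # Ask the guest to power itself off; tolerate the ssh session dying mid-command.
    sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
        -p 10100 root@127.0.0.1 "nohup sh -c 'shutdown -h -P now'" || true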
00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@369 -- # local vm_pid 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@370 -- # vm_pid=2065639 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@372 -- # /bin/kill -0 2065639 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@373 -- # return 0 00:11:08.244 00:18:38 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:11:09.177 00:18:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:11:09.177 00:18:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:11:09.177 00:18:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # vm_is_running 1 00:11:09.177 00:18:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:11:09.177 00:18:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.177 00:18:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@302 -- # return 0 00:11:09.177 00:18:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:11:09.177 00:18:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:11:09.177 00:18:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@366 -- # return 1 00:11:09.178 00:18:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@491 -- # unset -v 'vms[vm]' 00:11:09.178 00:18:39 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@493 -- # sleep 1 00:11:10.549 00:18:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@489 -- # (( timeo-- > 0 && 0 > 0 )) 00:11:10.549 00:18:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@496 -- # (( 0 == 0 )) 00:11:10.549 00:18:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@497 -- # notice 'All VMs successfully shut down' 00:11:10.549 00:18:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down' 00:11:10.549 00:18:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:11:10.549 00:18:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:11:10.549 00:18:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:11:10.549 00:18:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:11:10.549 00:18:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:11:10.549 00:18:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down' 00:11:10.549 INFO: All VMs successfully shut down 00:11:10.549 00:18:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@498 -- # return 0 00:11:10.549 00:18:40 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@99 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock bdev_nvme_detach_controller Nvme0 00:11:10.549 [2024-10-09 00:18:40.983951] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1) received event(SPDK_BDEV_EVENT_REMOVE) 00:11:11.935 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@101 -- # vhost_kill 0 00:11:11.935 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@202 -- # local rc=0 00:11:11.935 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@203 -- # local vhost_name=0 00:11:11.935 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@205 -- # [[ -z 0 ]] 00:11:11.935 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@210 -- # local vhost_dir 00:11:11.935 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@211 -- # get_vhost_dir 0 00:11:11.935 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@105 -- # local vhost_name=0 00:11:11.935 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:11:11.935 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:11:11.935 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0 00:11:11.935 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid 00:11:11.935 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@214 -- # [[ ! 
-r /root/vhost_test/vhost/0/vhost.pid ]] 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@219 -- # timing_enter vhost_kill 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@220 -- # local vhost_pid 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@221 -- # vhost_pid=2057703 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@222 -- # notice 'killing vhost (PID 2057703) app' 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 2057703) app' 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 2057703) app' 00:11:11.936 INFO: killing vhost (PID 2057703) app 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@224 -- # kill -INT 2057703 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit' 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit' 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@60 -- # local verbose_out 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@61 -- # false 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@62 -- # verbose_out= 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@69 -- # local msg_type=INFO 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@70 -- # shift 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit' 00:11:11.936 INFO: sent SIGINT to vhost app - waiting 60 seconds to exit 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i = 0 )) 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 )) 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 2057703 00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@228 -- # echo . 00:11:11.936 . 
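[editor's note] vhost_kill stops the SPDK app with SIGINT and then waits on the pid rather than escalating immediately; the lone dots interleaved in the log are the per-second progress marks from @228. The equivalent loop, with stderr suppression added for tidiness (the harness itself lets the 'No such process' messages through, as seen just below):

    # Stop the vhost app (pid from its pid file) and wait up to 60s for it to exit.
    vhost_pid=$(cat /root/vhost_test/vhost/0/vhost.pid)
    kill -INT "$vhost_pid"
    for ((i = 0; i < 60; i++)); do
        kill -0 "$vhost_pid" 2>/dev/null || break   # process gone: stop waiting
        echo .
        sleep 1
    done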
00:11:11.936 00:18:42 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@229 -- # sleep 1 00:11:12.869 00:18:43 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i++ )) 00:11:12.869 00:18:43 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 )) 00:11:12.869 00:18:43 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 2057703 00:11:12.869 00:18:43 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@228 -- # echo . 00:11:12.869 . 00:11:12.869 00:18:43 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@229 -- # sleep 1 00:11:13.807 00:18:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i++ )) 00:11:13.807 00:18:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 )) 00:11:13.807 00:18:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 2057703 00:11:13.807 00:18:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@228 -- # echo . 00:11:13.807 . 00:11:13.807 00:18:44 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@229 -- # sleep 1 00:11:14.741 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i++ )) 00:11:14.741 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@226 -- # (( i < 60 )) 00:11:14.741 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@227 -- # kill -0 2057703 00:11:14.741 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (2057703) - No such process 00:11:14.741 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@231 -- # break 00:11:14.741 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@234 -- # kill -0 2057703 00:11:14.741 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (2057703) - No such process 00:11:14.741 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@239 -- # kill -0 2057703 00:11:14.741 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (2057703) - No such process 00:11:14.741 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@250 -- # timing_exit vhost_kill 00:11:14.741 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:14.741 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@252 -- # rm -rf /root/vhost_test/vhost/0 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@254 -- # return 0 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- virtio/fio_restart_vm.sh@103 -- # vhosttestfini 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- vhost/common.sh@54 -- # '[' '' == iso ']' 00:11:14.999 00:11:14.999 real 1m15.498s 00:11:14.999 user 4m57.105s 00:11:14.999 sys 0m2.238s 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_scsi_restart_vm -- common/autotest_common.sh@10 -- # set +x 00:11:14.999 ************************************ 00:11:14.999 END TEST vfio_user_virtio_scsi_restart_vm 
00:11:14.999 ************************************ 00:11:14.999 00:18:45 vfio_user_qemu -- vfio_user/vfio_user.sh@19 -- # run_test vfio_user_virtio_bdevperf /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/initiator_bdevperf.sh 00:11:14.999 00:18:45 vfio_user_qemu -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:14.999 00:18:45 vfio_user_qemu -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.999 00:18:45 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x 00:11:14.999 ************************************ 00:11:14.999 START TEST vfio_user_virtio_bdevperf 00:11:14.999 ************************************ 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/initiator_bdevperf.sh 00:11:14.999 * Looking for test storage... 00:11:14.999 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@345 -- # : 1 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@353 -- # local d=1 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@355 -- # echo 1 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@353 -- # local d=2 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@355 -- # echo 2 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- scripts/common.sh@368 -- # return 0 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:14.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.999 --rc genhtml_branch_coverage=1 00:11:14.999 --rc genhtml_function_coverage=1 00:11:14.999 --rc genhtml_legend=1 00:11:14.999 --rc geninfo_all_blocks=1 00:11:14.999 --rc geninfo_unexecuted_blocks=1 00:11:14.999 00:11:14.999 ' 00:11:14.999 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:14.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.999 --rc genhtml_branch_coverage=1 00:11:15.000 --rc genhtml_function_coverage=1 00:11:15.000 --rc genhtml_legend=1 00:11:15.000 --rc geninfo_all_blocks=1 00:11:15.000 --rc geninfo_unexecuted_blocks=1 00:11:15.000 00:11:15.000 ' 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:15.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.000 --rc genhtml_branch_coverage=1 00:11:15.000 --rc genhtml_function_coverage=1 00:11:15.000 --rc genhtml_legend=1 00:11:15.000 --rc geninfo_all_blocks=1 00:11:15.000 --rc geninfo_unexecuted_blocks=1 00:11:15.000 00:11:15.000 ' 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:15.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.000 --rc genhtml_branch_coverage=1 00:11:15.000 --rc genhtml_function_coverage=1 00:11:15.000 --rc genhtml_legend=1 00:11:15.000 --rc geninfo_all_blocks=1 00:11:15.000 --rc geninfo_unexecuted_blocks=1 00:11:15.000 00:11:15.000 ' 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@9 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@11 -- 
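The xtrace above walks the element-wise version compare in scripts/common.sh: `lt 1.15 2` splits both versions on `.`, `-`, and `:`, compares component by component, and returns 0 because 1 < 2, so the pre-2.x lcov option names get exported. A simplified, self-contained condensation of that logic (illustrative, not the verbatim helper):

    lt() {
        local -a ver1 ver2
        local v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # e.g. 1 < 2 here
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal, so not strictly less-than
    }
    lt 1.15 2 && echo "lcov is pre-2.x: use the old --rc option names"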
# vfu_dir=/tmp/vfu_devices 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@12 -- # rm -rf /tmp/vfu_devices 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@13 -- # mkdir -p /tmp/vfu_devices 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@17 -- # spdk_tgt_pid=2070896 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@18 -- # waitforlisten 2070896 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0xf -L vfu_virtio 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2070896 ']' 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.000 00:18:45 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:15.258 [2024-10-09 00:18:45.717507] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:11:15.258 [2024-10-09 00:18:45.717606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2070896 ] 00:11:15.258 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.258 [2024-10-09 00:18:45.821593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.515 [2024-10-09 00:18:46.014354] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.515 [2024-10-09 00:18:46.014424] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.515 [2024-10-09 00:18:46.014486] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.515 [2024-10-09 00:18:46.014509] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.448 00:18:46 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:16.448 00:18:46 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:11:16.448 00:18:46 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 512 00:11:16.706 malloc0 00:11:16.706 00:18:47 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@21 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc1 64 512 00:11:16.964 malloc1 00:11:16.964 00:18:47 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@22 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create -b malloc2 64 512 00:11:17.222 malloc2 00:11:17.222 
00:18:47 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_tgt_set_base_path /tmp/vfu_devices 00:11:17.480 00:18:47 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@27 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_create_blk_endpoint vfu.blk --bdev-name malloc0 --cpumask=0x1 --num-queues=2 --qsize=256 --packed-ring 00:11:17.480 [2024-10-09 00:18:48.039598] vfu_virtio.c:1533:vfu_virtio_endpoint_setup: *DEBUG*: mmap file /tmp/vfu_devices/vfu.blk_bar4, devmem_fd 489 00:11:17.480 [2024-10-09 00:18:48.039635] vfu_virtio.c:1695:vfu_virtio_get_device_info: *DEBUG*: /tmp/vfu_devices/vfu.blk: get device information, fd 489 00:11:17.480 [2024-10-09 00:18:48.039747] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 0 00:11:17.480 [2024-10-09 00:18:48.039768] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 1 00:11:17.480 [2024-10-09 00:18:48.039775] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 2 00:11:17.480 [2024-10-09 00:18:48.039783] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.blk: get vendor capability, idx 3 00:11:17.480 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_create_scsi_endpoint vfu.scsi --cpumask 0x2 --num-io-queues=2 --qsize=256 --packed-ring 00:11:17.739 [2024-10-09 00:18:48.252497] vfu_virtio.c:1533:vfu_virtio_endpoint_setup: *DEBUG*: mmap file /tmp/vfu_devices/vfu.scsi_bar4, devmem_fd 593 00:11:17.739 [2024-10-09 00:18:48.252526] vfu_virtio.c:1695:vfu_virtio_get_device_info: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get device information, fd 593 00:11:17.739 [2024-10-09 00:18:48.252572] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 0 00:11:17.739 [2024-10-09 00:18:48.252584] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 1 00:11:17.739 [2024-10-09 00:18:48.252593] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 2 00:11:17.739 [2024-10-09 00:18:48.252605] vfu_virtio.c:1746:vfu_virtio_get_vendor_capability: *DEBUG*: /tmp/vfu_devices/vfu.scsi: get vendor capability, idx 3 00:11:17.739 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=0 --bdev-name malloc1 00:11:17.997 [2024-10-09 00:18:48.449438] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: vfu.scsi: added SCSI target 0 using bdev 'malloc1' 00:11:17.997 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=1 --bdev-name malloc2 00:11:18.255 [2024-10-09 00:18:48.678420] vfu_virtio_scsi.c: 886:vfu_virtio_scsi_add_target: *NOTICE*: vfu.scsi: added SCSI target 1 using bdev 'malloc2' 00:11:18.255 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@37 -- # 
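Condensed from the trace: the commands that build the vfio-user target side of this test. Every command below appears verbatim above; only the long workspace prefix is folded into variables, and the backgrounding of spdk_tgt is a sketch (the harness waits for the RPC socket via waitforlisten before issuing RPCs).

    spdk=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
    rpc=$spdk/scripts/rpc.py

    $spdk/build/bin/spdk_tgt -m 0xf -L vfu_virtio &
    # (wait for /var/tmp/spdk.sock to come up before the RPCs below)

    $rpc bdev_malloc_create -b malloc0 64 512
    $rpc bdev_malloc_create -b malloc1 64 512
    $rpc bdev_malloc_create -b malloc2 64 512
    $rpc vfu_tgt_set_base_path /tmp/vfu_devices
    $rpc vfu_virtio_create_blk_endpoint vfu.blk --bdev-name malloc0 \
        --cpumask=0x1 --num-queues=2 --qsize=256 --packed-ring
    $rpc vfu_virtio_create_scsi_endpoint vfu.scsi --cpumask 0x2 \
        --num-io-queues=2 --qsize=256 --packed-ring
    $rpc vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=0 --bdev-name malloc1
    $rpc vfu_virtio_scsi_add_target vfu.scsi --scsi-target-num=1 --bdev-name malloc2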
bdevperf=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/examples/bdevperf 00:11:18.255 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@38 -- # bdevperf_rpc_sock=/tmp/bdevperf.sock 00:11:18.255 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@41 -- # bdevperf_pid=2071365 00:11:18.255 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@42 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:11:18.255 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@40 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/examples/bdevperf -r /tmp/bdevperf.sock -g -s 2048 -q 256 -o 4096 -w randrw -M 50 -t 30 -m 0xf0 -L vfio_pci -L virtio_vfio_user 00:11:18.255 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@43 -- # waitforlisten 2071365 /tmp/bdevperf.sock 00:11:18.255 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2071365 ']' 00:11:18.255 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/bdevperf.sock 00:11:18.255 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:18.255 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/bdevperf.sock...' 00:11:18.255 Waiting for process to start up and listen on UNIX domain socket /tmp/bdevperf.sock... 00:11:18.255 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:18.255 00:18:48 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:18.255 [2024-10-09 00:18:48.787100] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
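The initiator side, again condensed from commands visible in this trace: bdevperf is started on its own RPC socket and pinned to the other four cores (`-m 0xf0`, versus `0xf` for the target), and once it listens, the two vfio-user endpoints are attached as virtio controllers (the blk attach appears further down in the trace):

    spdk=/var/jenkins/workspace/vfio-user-phy-autotest/spdk
    rpc=$spdk/scripts/rpc.py
    sock=/tmp/bdevperf.sock

    $spdk/build/examples/bdevperf -r $sock -g -s 2048 -q 256 -o 4096 \
        -w randrw -M 50 -t 30 -m 0xf0 -L vfio_pci -L virtio_vfio_user &
    # (wait for $sock to come up before the RPCs below)

    $rpc -s $sock bdev_virtio_attach_controller --dev-type scsi \
        --trtype vfio-user --traddr /tmp/vfu_devices/vfu.scsi VirtioScsi0
    $rpc -s $sock bdev_virtio_attach_controller --dev-type blk \
        --trtype vfio-user --traddr /tmp/vfu_devices/vfu.blk VirtioBlk0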
00:11:18.255 [2024-10-09 00:18:48.787192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0xf0 -m 2048 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2071365 ] 00:11:18.255 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.189 [2024-10-09 00:18:49.599715] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.189 [2024-10-09 00:18:49.795824] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:11:19.189 [2024-10-09 00:18:49.795914] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:11:19.189 [2024-10-09 00:18:49.795982] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:11:19.189 [2024-10-09 00:18:49.796005] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:11:19.754 00:18:50 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:19.754 00:18:50 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:11:19.754 00:18:50 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller --dev-type scsi --trtype vfio-user --traddr /tmp/vfu_devices/vfu.scsi VirtioScsi0 00:11:20.014 [2024-10-09 00:18:50.483639] tgt_endpoint.c: 165:tgt_accept_poller: *NOTICE*: /tmp/vfu_devices/vfu.scsi: attached successfully 00:11:20.014 [2024-10-09 00:18:50.485746] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:20.014 [2024-10-09 00:18:50.486731] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:20.014 [2024-10-09 00:18:50.487749] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:20.014 [2024-10-09 00:18:50.488754] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:20.014 [2024-10-09 00:18:50.489785] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x4000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:20.014 [2024-10-09 00:18:50.489808] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x3000, Map addr 0x7fa6d773d000 00:11:20.014 [2024-10-09 00:18:50.490777] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:20.014 [2024-10-09 00:18:50.491781] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:20.014 [2024-10-09 00:18:50.492794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:20.014 [2024-10-09 00:18:50.493800] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:20.014 [2024-10-09 00:18:50.494812] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:20.014 [2024-10-09 00:18:50.497049] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 
0x200000200000, Size 0x80000000 00:11:20.014 [2024-10-09 00:18:50.510880] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /tmp/vfu_devices/vfu.scsi Setup Successfully 00:11:20.014 [2024-10-09 00:18:50.511977] virtio_vfio_user.c: 32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x0, length 0x4 00:11:20.014 [2024-10-09 00:18:50.512984] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x2000-0x2003, len = 4 00:11:20.014 [2024-10-09 00:18:50.513024] virtio_vfio_user.c: 77:virtio_vfio_user_set_status: *DEBUG*: device status 0 00:11:20.014 [2024-10-09 00:18:50.513982] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1 00:11:20.014 [2024-10-09 00:18:50.513995] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x0 00:11:20.014 [2024-10-09 00:18:50.514003] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 0 00:11:20.014 [2024-10-09 00:18:50.514012] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting 00:11:20.014 [2024-10-09 00:18:50.514996] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1 00:11:20.014 [2024-10-09 00:18:50.515007] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0 00:11:20.014 [2024-10-09 00:18:50.515026] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status 0 00:11:20.014 [2024-10-09 00:18:50.516004] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1 00:11:20.014 [2024-10-09 00:18:50.516014] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0 00:11:20.014 [2024-10-09 00:18:50.516034] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status 0 00:11:20.014 [2024-10-09 00:18:50.516053] virtio_vfio_user.c: 77:virtio_vfio_user_set_status: *DEBUG*: device status 1 00:11:20.014 [2024-10-09 00:18:50.517021] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1 00:11:20.014 [2024-10-09 00:18:50.517035] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x1 00:11:20.014 [2024-10-09 00:18:50.517041] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 1 00:11:20.014 [2024-10-09 00:18:50.518032] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1 00:11:20.014 [2024-10-09 00:18:50.518040] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x1 00:11:20.014 [2024-10-09 00:18:50.518067] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status 1 00:11:20.014 [2024-10-09 00:18:50.519038] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1 00:11:20.014 [2024-10-09 00:18:50.519045] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x1 00:11:20.014 [2024-10-09 00:18:50.519068] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status 1 00:11:20.014 [2024-10-09 00:18:50.519082] virtio_vfio_user.c: 77:virtio_vfio_user_set_status: *DEBUG*: device status 3 00:11:20.014 [2024-10-09 
00:18:50.520041] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1 00:11:20.014 [2024-10-09 00:18:50.520049] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x3 00:11:20.014 [2024-10-09 00:18:50.520056] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 1, set status 3 00:11:20.014 [2024-10-09 00:18:50.521044] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1 00:11:20.014 [2024-10-09 00:18:50.521053] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x3 00:11:20.015 [2024-10-09 00:18:50.521075] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status 3 00:11:20.015 [2024-10-09 00:18:50.522056] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x0-0x3, len = 4 00:11:20.015 [2024-10-09 00:18:50.522070] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_DFSELECT with 0x0 00:11:20.015 [2024-10-09 00:18:50.523061] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x4-0x7, len = 4 00:11:20.015 [2024-10-09 00:18:50.523071] vfu_virtio.c:1072:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_DF_LO with 0x10000007 00:11:20.015 [2024-10-09 00:18:50.524063] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x0-0x3, len = 4 00:11:20.015 [2024-10-09 00:18:50.524072] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_DFSELECT with 0x1 00:11:20.015 [2024-10-09 00:18:50.525070] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x4-0x7, len = 4 00:11:20.015 [2024-10-09 00:18:50.525083] vfu_virtio.c:1067:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_DF_HI with 0x5 00:11:20.015 [2024-10-09 00:18:50.525107] virtio_vfio_user.c: 127:virtio_vfio_user_get_features: *DEBUG*: feature_hi 0x5, feature_low 0x10000007 00:11:20.015 [2024-10-09 00:18:50.526078] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x8-0xB, len = 4 00:11:20.015 [2024-10-09 00:18:50.526088] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GFSELECT with 0x0 00:11:20.015 [2024-10-09 00:18:50.527082] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0xC-0xF, len = 4 00:11:20.015 [2024-10-09 00:18:50.527092] vfu_virtio.c: 956:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GF_LO with 0x3 00:11:20.015 [2024-10-09 00:18:50.527099] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.scsi: negotiated features 0x3 00:11:20.015 [2024-10-09 00:18:50.528087] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x8-0xB, len = 4 00:11:20.015 [2024-10-09 00:18:50.528097] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GFSELECT with 0x1 00:11:20.015 [2024-10-09 00:18:50.529093] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0xC-0xF, len = 4 00:11:20.015 [2024-10-09 00:18:50.529101] vfu_virtio.c: 951:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_GF_HI with 0x1 
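The BAR4 writes to offset 0x14 traced above are the standard virtio 1.x device-status handshake; this run steps the status through 0x0 -> 0x1 -> 0x3 -> 0xb -> 0xf. A small decoder for those values (bit names per the virtio spec; the helper itself is illustrative only):

    decode_virtio_status() {
        local s=$(( $1 ))             # accepts 0x-prefixed hex
        (( s == 0 ))  && { echo "device reset"; return; }
        (( s & 0x1 )) && echo "ACKNOWLEDGE (0x1): driver noticed the device"
        (( s & 0x2 )) && echo "DRIVER      (0x2): driver knows how to run it"
        (( s & 0x8 )) && echo "FEATURES_OK (0x8): feature negotiation locked in"
        (( s & 0x4 )) && echo "DRIVER_OK   (0x4): device starts servicing queues"
    }
    for st in 0x0 0x1 0x3 0xb 0xf; do echo "status $st:"; decode_virtio_status "$st"; done

The DF_LO/DF_HI reads and GF_LO/GF_HI writes between the 0x3 and 0xb steps are the feature negotiation itself: the device offers its feature words, the driver writes back the accepted subset, and the log reports the result as "negotiated features".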
00:11:20.015 [2024-10-09 00:18:50.529109] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.scsi: negotiated features 0x100000003 00:11:20.015 [2024-10-09 00:18:50.529132] virtio_vfio_user.c: 176:virtio_vfio_user_set_features: *DEBUG*: features 0x100000003 00:11:20.015 [2024-10-09 00:18:50.530105] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1 00:11:20.015 [2024-10-09 00:18:50.530117] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x3 00:11:20.015 [2024-10-09 00:18:50.530143] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status 3 00:11:20.015 [2024-10-09 00:18:50.530159] virtio_vfio_user.c: 77:virtio_vfio_user_set_status: *DEBUG*: device status b 00:11:20.015 [2024-10-09 00:18:50.531108] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1 00:11:20.015 [2024-10-09 00:18:50.531118] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0xb 00:11:20.015 [2024-10-09 00:18:50.531124] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 3, set status b 00:11:20.015 [2024-10-09 00:18:50.532124] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1 00:11:20.015 [2024-10-09 00:18:50.532131] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xb 00:11:20.015 [2024-10-09 00:18:50.532162] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status b 00:11:20.015 [2024-10-09 00:18:50.533135] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2 00:11:20.015 [2024-10-09 00:18:50.533143] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0 00:11:20.015 [2024-10-09 00:18:50.534150] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2 00:11:20.015 [2024-10-09 00:18:50.534158] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 0 PCI_COMMON_Q_SIZE with 0x100 00:11:20.015 [2024-10-09 00:18:50.534180] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 0, size 256 00:11:20.015 [2024-10-09 00:18:50.535156] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2 00:11:20.015 [2024-10-09 00:18:50.535164] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0 00:11:20.015 [2024-10-09 00:18:50.536166] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4 00:11:20.015 [2024-10-09 00:18:50.536174] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_DESCLO with 0x67710000 00:11:20.015 [2024-10-09 00:18:50.537183] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4 00:11:20.015 [2024-10-09 00:18:50.537190] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_DESCHI with 0x2000 00:11:20.015 [2024-10-09 00:18:50.538192] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4 00:11:20.015 [2024-10-09 00:18:50.538200] 
vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_AVAILLO with 0x67711000 00:11:20.015 [2024-10-09 00:18:50.539205] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4 00:11:20.015 [2024-10-09 00:18:50.539212] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_AVAILHI with 0x2000 00:11:20.015 [2024-10-09 00:18:50.540216] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4 00:11:20.015 [2024-10-09 00:18:50.540235] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_USEDLO with 0x67712000 00:11:20.015 [2024-10-09 00:18:50.541226] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4 00:11:20.015 [2024-10-09 00:18:50.541234] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 0 PCI_COMMON_Q_USEDHI with 0x2000 00:11:20.015 [2024-10-09 00:18:50.542235] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2 00:11:20.015 [2024-10-09 00:18:50.542242] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x0 00:11:20.015 [2024-10-09 00:18:50.543243] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2 00:11:20.015 [2024-10-09 00:18:50.543251] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1 00:11:20.015 [2024-10-09 00:18:50.543259] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 0 00:11:20.015 [2024-10-09 00:18:50.543265] vfu_virtio.c: 71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 0 00:11:20.015 [2024-10-09 00:18:50.543291] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 0 successfully 00:11:20.015 [2024-10-09 00:18:50.543320] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 0 addresses: 00:11:20.015 [2024-10-09 00:18:50.543343] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: desc_addr: 200067710000 00:11:20.015 [2024-10-09 00:18:50.543361] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: aval_addr: 200067711000 00:11:20.015 [2024-10-09 00:18:50.543377] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: used_addr: 200067712000 00:11:20.015 [2024-10-09 00:18:50.544247] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2 00:11:20.015 [2024-10-09 00:18:50.544257] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1 00:11:20.015 [2024-10-09 00:18:50.545259] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2 00:11:20.015 [2024-10-09 00:18:50.545269] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 1 PCI_COMMON_Q_SIZE with 0x100 00:11:20.015 [2024-10-09 00:18:50.545300] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 1, size 256 00:11:20.015 [2024-10-09 00:18:50.546270] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2 00:11:20.015 [2024-10-09 00:18:50.546283] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: 
*DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1 00:11:20.015 [2024-10-09 00:18:50.547277] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4 00:11:20.015 [2024-10-09 00:18:50.547287] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_DESCLO with 0x6770c000 00:11:20.015 [2024-10-09 00:18:50.548287] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4 00:11:20.015 [2024-10-09 00:18:50.548296] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_DESCHI with 0x2000 00:11:20.015 [2024-10-09 00:18:50.549307] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4 00:11:20.015 [2024-10-09 00:18:50.549317] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_AVAILLO with 0x6770d000 00:11:20.015 [2024-10-09 00:18:50.550306] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4 00:11:20.015 [2024-10-09 00:18:50.550316] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_AVAILHI with 0x2000 00:11:20.015 [2024-10-09 00:18:50.551315] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4 00:11:20.015 [2024-10-09 00:18:50.551327] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_USEDLO with 0x6770e000 00:11:20.015 [2024-10-09 00:18:50.552322] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4 00:11:20.015 [2024-10-09 00:18:50.552332] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 1 PCI_COMMON_Q_USEDHI with 0x2000 00:11:20.015 [2024-10-09 00:18:50.553327] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2 00:11:20.015 [2024-10-09 00:18:50.553336] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x1 00:11:20.015 [2024-10-09 00:18:50.554332] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2 00:11:20.015 [2024-10-09 00:18:50.554344] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1 00:11:20.015 [2024-10-09 00:18:50.554350] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 1 00:11:20.015 [2024-10-09 00:18:50.554357] vfu_virtio.c: 71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 1 00:11:20.015 [2024-10-09 00:18:50.554365] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 1 successfully 00:11:20.015 [2024-10-09 00:18:50.554392] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 1 addresses: 00:11:20.015 [2024-10-09 00:18:50.554422] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: desc_addr: 20006770c000 00:11:20.015 [2024-10-09 00:18:50.554439] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: aval_addr: 20006770d000 00:11:20.015 [2024-10-09 00:18:50.554459] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: used_addr: 20006770e000 00:11:20.015 [2024-10-09 00:18:50.555352] vfu_virtio.c:1257:virtio_vfu_access_bar4: 
*DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2 00:11:20.015 [2024-10-09 00:18:50.555360] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2 00:11:20.015 [2024-10-09 00:18:50.556360] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2 00:11:20.015 [2024-10-09 00:18:50.556368] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 2 PCI_COMMON_Q_SIZE with 0x100 00:11:20.015 [2024-10-09 00:18:50.556393] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 2, size 256 00:11:20.015 [2024-10-09 00:18:50.557368] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2 00:11:20.016 [2024-10-09 00:18:50.557376] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2 00:11:20.016 [2024-10-09 00:18:50.558373] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4 00:11:20.016 [2024-10-09 00:18:50.558380] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_DESCLO with 0x67708000 00:11:20.016 [2024-10-09 00:18:50.559385] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4 00:11:20.016 [2024-10-09 00:18:50.559393] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_DESCHI with 0x2000 00:11:20.016 [2024-10-09 00:18:50.560396] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4 00:11:20.016 [2024-10-09 00:18:50.560403] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_AVAILLO with 0x67709000 00:11:20.016 [2024-10-09 00:18:50.561409] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4 00:11:20.016 [2024-10-09 00:18:50.561416] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_AVAILHI with 0x2000 00:11:20.016 [2024-10-09 00:18:50.562416] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4 00:11:20.016 [2024-10-09 00:18:50.562426] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_USEDLO with 0x6770a000 00:11:20.016 [2024-10-09 00:18:50.563421] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4 00:11:20.016 [2024-10-09 00:18:50.563429] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 2 PCI_COMMON_Q_USEDHI with 0x2000 00:11:20.016 [2024-10-09 00:18:50.564435] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2 00:11:20.016 [2024-10-09 00:18:50.564442] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x2 00:11:20.016 [2024-10-09 00:18:50.565448] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2 00:11:20.016 [2024-10-09 00:18:50.565456] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1 00:11:20.016 [2024-10-09 00:18:50.565464] 
vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 2 00:11:20.016 [2024-10-09 00:18:50.565469] vfu_virtio.c: 71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 2 00:11:20.016 [2024-10-09 00:18:50.565478] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 2 successfully 00:11:20.016 [2024-10-09 00:18:50.565513] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 2 addresses: 00:11:20.016 [2024-10-09 00:18:50.565538] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: desc_addr: 200067708000 00:11:20.016 [2024-10-09 00:18:50.565562] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: aval_addr: 200067709000 00:11:20.016 [2024-10-09 00:18:50.565578] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: used_addr: 20006770a000 00:11:20.016 [2024-10-09 00:18:50.566458] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2 00:11:20.016 [2024-10-09 00:18:50.566471] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3 00:11:20.016 [2024-10-09 00:18:50.567468] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x18-0x19, len = 2 00:11:20.016 [2024-10-09 00:18:50.567481] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ queue 3 PCI_COMMON_Q_SIZE with 0x100 00:11:20.016 [2024-10-09 00:18:50.567512] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 3, size 256 00:11:20.016 [2024-10-09 00:18:50.568474] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2 00:11:20.016 [2024-10-09 00:18:50.568486] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3 00:11:20.016 [2024-10-09 00:18:50.569490] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x20-0x23, len = 4 00:11:20.016 [2024-10-09 00:18:50.569500] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_DESCLO with 0x67704000 00:11:20.016 [2024-10-09 00:18:50.570493] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x24-0x27, len = 4 00:11:20.016 [2024-10-09 00:18:50.570503] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_DESCHI with 0x2000 00:11:20.016 [2024-10-09 00:18:50.571507] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x28-0x2B, len = 4 00:11:20.016 [2024-10-09 00:18:50.571516] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_AVAILLO with 0x67705000 00:11:20.016 [2024-10-09 00:18:50.572508] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x2C-0x2F, len = 4 00:11:20.016 [2024-10-09 00:18:50.572518] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_AVAILHI with 0x2000 00:11:20.016 [2024-10-09 00:18:50.573512] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x30-0x33, len = 4 00:11:20.016 [2024-10-09 00:18:50.573524] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_USEDLO with 0x67706000 00:11:20.016 [2024-10-09 00:18:50.574522] 
vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x34-0x37, len = 4 00:11:20.016 [2024-10-09 00:18:50.574531] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE queue 3 PCI_COMMON_Q_USEDHI with 0x2000 00:11:20.016 [2024-10-09 00:18:50.575524] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x1E-0x1F, len = 2 00:11:20.016 [2024-10-09 00:18:50.575537] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_Q_NOFF with 0x3 00:11:20.016 [2024-10-09 00:18:50.576532] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2 00:11:20.016 [2024-10-09 00:18:50.576541] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x1 00:11:20.016 [2024-10-09 00:18:50.576547] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.scsi: enable vq 3 00:11:20.016 [2024-10-09 00:18:50.576553] vfu_virtio.c: 71:virtio_dev_map_vq: *DEBUG*: vfu.scsi: try to map vq 3 00:11:20.016 [2024-10-09 00:18:50.576561] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.scsi: map vq 3 successfully 00:11:20.016 [2024-10-09 00:18:50.576586] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 3 addresses: 00:11:20.016 [2024-10-09 00:18:50.576616] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: desc_addr: 200067704000 00:11:20.016 [2024-10-09 00:18:50.576633] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: aval_addr: 200067705000 00:11:20.016 [2024-10-09 00:18:50.576652] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: used_addr: 200067706000 00:11:20.016 [2024-10-09 00:18:50.577547] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1 00:11:20.016 [2024-10-09 00:18:50.577554] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xb 00:11:20.016 [2024-10-09 00:18:50.577580] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status b 00:11:20.016 [2024-10-09 00:18:50.577614] virtio_vfio_user.c: 77:virtio_vfio_user_set_status: *DEBUG*: device status f 00:11:20.016 [2024-10-09 00:18:50.578556] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1 00:11:20.016 [2024-10-09 00:18:50.578563] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0xf 00:11:20.016 [2024-10-09 00:18:50.578571] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status b, set status f 00:11:20.016 [2024-10-09 00:18:50.578576] vfu_virtio.c:1365:vfu_virtio_dev_start: *DEBUG*: start vfu.scsi 00:11:20.016 [2024-10-09 00:18:50.580307] vfu_virtio.c:1377:vfu_virtio_dev_start: *DEBUG*: vfu.scsi is started with ret 0 00:11:20.016 [2024-10-09 00:18:50.581365] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1 00:11:20.016 [2024-10-09 00:18:50.581378] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0xf 00:11:20.016 [2024-10-09 00:18:50.581413] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status f 00:11:20.016 VirtioScsi0t0 VirtioScsi0t1 00:11:20.016 00:18:50 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@46 -- # 
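Before DRIVER_OK is written, each of the four SCSI queues above is programmed through the common-config window in the same fixed register order. A hypothetical replay of that order; write_bar4/read_bar4 are stand-in stubs for the traced BAR4 accesses, not real tools:

    write_bar4() { echo "write bar4 @$1 <- $2"; }   # stub: real accesses go via vfio-user
    read_bar4()  { echo "read  bar4 @$1"; }

    program_vq() {
        local q=$1 desc=$2 avail=$3 used=$4
        write_bar4 0x16 "$q"      # PCI_COMMON_Q_SELECT: pick the queue
        read_bar4  0x18           # PCI_COMMON_Q_SIZE: reads 0x100 (= --qsize 256)
        write_bar4 0x20 "$desc"   # Q_DESCLO (0x24 = Q_DESCHI): descriptor ring
        write_bar4 0x28 "$avail"  # Q_AVAILLO (0x2C = Q_AVAILHI): avail ring
        write_bar4 0x30 "$used"   # Q_USEDLO (0x34 = Q_USEDHI): used ring
        read_bar4  0x1E           # Q_NOFF: per-queue notify offset
        write_bar4 0x1C 1         # Q_ENABLE: device then reports "map vq N successfully"
    }
    program_vq 0 0x67710000 0x67711000 0x67712000   # addresses as logged for queue 0

Only after all queues are enabled does the driver write status 0xf, at which point the log shows "start vfu.scsi" and the two SCSI LUNs surface as VirtioScsi0t0/VirtioScsi0t1.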
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller --dev-type blk --trtype vfio-user --traddr /tmp/vfu_devices/vfu.blk VirtioBlk0 00:11:20.275 [2024-10-09 00:18:50.804313] tgt_endpoint.c: 165:tgt_accept_poller: *NOTICE*: /tmp/vfu_devices/vfu.blk: attached successfully 00:11:20.275 [2024-10-09 00:18:50.806414] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:20.275 [2024-10-09 00:18:50.807424] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:20.275 [2024-10-09 00:18:50.808435] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:20.275 [2024-10-09 00:18:50.809451] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:20.275 [2024-10-09 00:18:50.810477] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x4000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:20.275 [2024-10-09 00:18:50.810508] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x3000, Map addr 0x7fa6d7719000 00:11:20.275 [2024-10-09 00:18:50.811473] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:20.275 [2024-10-09 00:18:50.812504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:20.275 [2024-10-09 00:18:50.813502] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:20.275 [2024-10-09 00:18:50.814516] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:20.275 [2024-10-09 00:18:50.815523] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:20.275 [2024-10-09 00:18:50.817146] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000 00:11:20.275 [2024-10-09 00:18:50.830080] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user1, Path /tmp/vfu_devices/vfu.blk Setup Successfully 00:11:20.275 [2024-10-09 00:18:50.831675] virtio_vfio_user.c: 77:virtio_vfio_user_set_status: *DEBUG*: device status 0 00:11:20.275 [2024-10-09 00:18:50.832681] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1 00:11:20.275 [2024-10-09 00:18:50.832701] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x0 00:11:20.275 [2024-10-09 00:18:50.832714] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 0 00:11:20.276 [2024-10-09 00:18:50.832720] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting 00:11:20.276 [2024-10-09 00:18:50.833688] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1 00:11:20.276 [2024-10-09 00:18:50.833697] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0 00:11:20.276 [2024-10-09 00:18:50.833716] virtio_vfio_user.c: 
65:virtio_vfio_user_get_status: *DEBUG*: device status 0 00:11:20.276 [2024-10-09 00:18:50.834696] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1 00:11:20.276 [2024-10-09 00:18:50.834704] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0 00:11:20.276 [2024-10-09 00:18:50.834721] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status 0 00:11:20.276 [2024-10-09 00:18:50.834734] virtio_vfio_user.c: 77:virtio_vfio_user_set_status: *DEBUG*: device status 1 00:11:20.276 [2024-10-09 00:18:50.835705] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1 00:11:20.276 [2024-10-09 00:18:50.835714] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x1 00:11:20.276 [2024-10-09 00:18:50.835722] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 0, set status 1 00:11:20.276 [2024-10-09 00:18:50.836718] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1 00:11:20.276 [2024-10-09 00:18:50.836730] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x1 00:11:20.276 [2024-10-09 00:18:50.836746] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status 1 00:11:20.276 [2024-10-09 00:18:50.837722] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1 00:11:20.276 [2024-10-09 00:18:50.837732] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x1 00:11:20.276 [2024-10-09 00:18:50.837755] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status 1 00:11:20.276 [2024-10-09 00:18:50.837767] virtio_vfio_user.c: 77:virtio_vfio_user_set_status: *DEBUG*: device status 3 00:11:20.276 [2024-10-09 00:18:50.838725] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1 00:11:20.276 [2024-10-09 00:18:50.838735] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x3 00:11:20.276 [2024-10-09 00:18:50.838741] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 1, set status 3 00:11:20.276 [2024-10-09 00:18:50.839739] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1 00:11:20.276 [2024-10-09 00:18:50.839747] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x3 00:11:20.276 [2024-10-09 00:18:50.839769] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status 3 00:11:20.276 [2024-10-09 00:18:50.840741] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x0-0x3, len = 4 00:11:20.276 [2024-10-09 00:18:50.840750] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_DFSELECT with 0x0 00:11:20.276 [2024-10-09 00:18:50.841753] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x4-0x7, len = 4 00:11:20.276 [2024-10-09 00:18:50.841761] vfu_virtio.c:1072:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_DF_LO with 0x10007646 00:11:20.276 [2024-10-09 00:18:50.842765] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: 
/tmp/vfu_devices/vfu.blk: write bar4 0x0-0x3, len = 4 00:11:20.276 [2024-10-09 00:18:50.842773] vfu_virtio.c: 937:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_DFSELECT with 0x1 00:11:20.276 [2024-10-09 00:18:50.843768] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x4-0x7, len = 4 00:11:20.276 [2024-10-09 00:18:50.843776] vfu_virtio.c:1067:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_DF_HI with 0x5 00:11:20.276 [2024-10-09 00:18:50.843798] virtio_vfio_user.c: 127:virtio_vfio_user_get_features: *DEBUG*: feature_hi 0x5, feature_low 0x10007646 00:11:20.276 [2024-10-09 00:18:50.844785] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x8-0xB, len = 4 00:11:20.276 [2024-10-09 00:18:50.844794] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GFSELECT with 0x0 00:11:20.276 [2024-10-09 00:18:50.845787] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0xC-0xF, len = 4 00:11:20.276 [2024-10-09 00:18:50.845795] vfu_virtio.c: 956:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GF_LO with 0x3446 00:11:20.276 [2024-10-09 00:18:50.845803] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.blk: negotiated features 0x3446 00:11:20.276 [2024-10-09 00:18:50.846791] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x8-0xB, len = 4 00:11:20.276 [2024-10-09 00:18:50.846800] vfu_virtio.c: 943:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GFSELECT with 0x1 00:11:20.276 [2024-10-09 00:18:50.847807] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0xC-0xF, len = 4 00:11:20.276 [2024-10-09 00:18:50.847816] vfu_virtio.c: 951:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_GF_HI with 0x1 00:11:20.276 [2024-10-09 00:18:50.847823] vfu_virtio.c: 255:virtio_dev_set_features: *DEBUG*: vfu.blk: negotiated features 0x100003446 00:11:20.276 [2024-10-09 00:18:50.847847] virtio_vfio_user.c: 176:virtio_vfio_user_set_features: *DEBUG*: features 0x100003446 00:11:20.276 [2024-10-09 00:18:50.848824] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1 00:11:20.276 [2024-10-09 00:18:50.848832] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x3 00:11:20.276 [2024-10-09 00:18:50.848855] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status 3 00:11:20.276 [2024-10-09 00:18:50.848885] virtio_vfio_user.c: 77:virtio_vfio_user_set_status: *DEBUG*: device status b 00:11:20.276 [2024-10-09 00:18:50.849827] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1 00:11:20.276 [2024-10-09 00:18:50.849835] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0xb 00:11:20.276 [2024-10-09 00:18:50.849843] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status 3, set status b 00:11:20.276 [2024-10-09 00:18:50.850834] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1 00:11:20.276 [2024-10-09 00:18:50.850847] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xb 00:11:20.276 
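
The status writes traced above are the standard virtio 1.x initialization handshake: the driver resets the device (status 0x0), sets ACKNOWLEDGE (0x1), adds DRIVER (0x3), negotiates features, adds FEATURES_OK (0xb), and, once the queues are programmed further down, finishes with DRIVER_OK (0xf). As a reading aid only (this helper is not part of the SPDK test scripts), the status bytes in these entries can be decoded with a few lines of bash:

decode_virtio_status() {
    # Virtio 1.x device status bits, per the virtio spec; prints each set flag.
    local s=$(( $1 ))
    (( s & 0x01 )) && echo ACKNOWLEDGE
    (( s & 0x02 )) && echo DRIVER
    (( s & 0x08 )) && echo FEATURES_OK
    (( s & 0x04 )) && echo DRIVER_OK
    (( s & 0x40 )) && echo DEVICE_NEEDS_RESET
    (( s & 0x80 )) && echo FAILED
    return 0
}
decode_virtio_status 0xb    # prints ACKNOWLEDGE, DRIVER, FEATURES_OK

The DFSELECT/GFSELECT accesses in the same span are the 64-bit feature negotiation done 32 bits at a time: the device offers 0x510007646 (DF_HI 0x5, DF_LO 0x10007646) and the driver acknowledges the subset 0x100003446, which is what the "negotiated features" entries above report.
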
[2024-10-09 00:18:50.850870] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status b 00:11:20.276 [2024-10-09 00:18:50.850900] virtio_vfio_user.c: 32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x22, length 0x2 00:11:20.276 [2024-10-09 00:18:50.851851] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2022-0x2023, len = 2 00:11:20.276 [2024-10-09 00:18:50.851878] virtio_vfio_user.c: 32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x14, length 0x4 00:11:20.276 [2024-10-09 00:18:50.852867] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2014-0x2017, len = 4 00:11:20.276 [2024-10-09 00:18:50.852896] virtio_vfio_user.c: 32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x0, length 0x8 00:11:20.276 [2024-10-09 00:18:50.853875] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2000-0x2007, len = 8 00:11:20.276 [2024-10-09 00:18:50.853901] virtio_vfio_user.c: 32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x22, length 0x2 00:11:20.276 [2024-10-09 00:18:50.854884] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2022-0x2023, len = 2 00:11:20.276 [2024-10-09 00:18:50.854914] virtio_vfio_user.c: 32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0x8, length 0x4 00:11:20.276 [2024-10-09 00:18:50.855891] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x2008-0x200B, len = 4 00:11:20.276 [2024-10-09 00:18:50.855916] virtio_vfio_user.c: 32:virtio_vfio_user_read_dev_config: *DEBUG*: offset 0xc, length 0x4 00:11:20.276 [2024-10-09 00:18:50.856923] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x200C-0x200F, len = 4 00:11:20.276 [2024-10-09 00:18:50.857905] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2 00:11:20.276 [2024-10-09 00:18:50.857915] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0 00:11:20.276 [2024-10-09 00:18:50.858909] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x18-0x19, len = 2 00:11:20.276 [2024-10-09 00:18:50.858923] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ queue 0 PCI_COMMON_Q_SIZE with 0x100 00:11:20.276 [2024-10-09 00:18:50.858956] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 0, size 256 00:11:20.276 [2024-10-09 00:18:50.859914] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2 00:11:20.276 [2024-10-09 00:18:50.859925] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0 00:11:20.276 [2024-10-09 00:18:50.860924] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x20-0x23, len = 4 00:11:20.276 [2024-10-09 00:18:50.860935] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_DESCLO with 0x673fc000 00:11:20.276 [2024-10-09 00:18:50.861932] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x24-0x27, len = 4 00:11:20.276 [2024-10-09 00:18:50.861943] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_DESCHI with 0x2000 00:11:20.276 [2024-10-09 00:18:50.862940] 
vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x28-0x2B, len = 4 00:11:20.276 [2024-10-09 00:18:50.862955] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_AVAILLO with 0x673fd000 00:11:20.276 [2024-10-09 00:18:50.863946] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x2C-0x2F, len = 4 00:11:20.276 [2024-10-09 00:18:50.863956] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_AVAILHI with 0x2000 00:11:20.276 [2024-10-09 00:18:50.864955] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x30-0x33, len = 4 00:11:20.276 [2024-10-09 00:18:50.864966] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_USEDLO with 0x673fe000 00:11:20.276 [2024-10-09 00:18:50.865968] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x34-0x37, len = 4 00:11:20.276 [2024-10-09 00:18:50.865978] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 0 PCI_COMMON_Q_USEDHI with 0x2000 00:11:20.276 [2024-10-09 00:18:50.866974] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x1E-0x1F, len = 2 00:11:20.276 [2024-10-09 00:18:50.866984] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_Q_NOFF with 0x0 00:11:20.276 [2024-10-09 00:18:50.867982] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2 00:11:20.276 [2024-10-09 00:18:50.867993] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x1 00:11:20.276 [2024-10-09 00:18:50.868001] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.blk: enable vq 0 00:11:20.276 [2024-10-09 00:18:50.868009] vfu_virtio.c: 71:virtio_dev_map_vq: *DEBUG*: vfu.blk: try to map vq 0 00:11:20.276 [2024-10-09 00:18:50.868024] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.blk: map vq 0 successfully 00:11:20.276 [2024-10-09 00:18:50.868053] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 0 addresses: 00:11:20.276 [2024-10-09 00:18:50.868102] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: desc_addr: 2000673fc000 00:11:20.276 [2024-10-09 00:18:50.868121] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: aval_addr: 2000673fd000 00:11:20.276 [2024-10-09 00:18:50.868143] virtio_vfio_user.c: 334:virtio_vfio_user_setup_queue: *DEBUG*: used_addr: 2000673fe000 00:11:20.276 [2024-10-09 00:18:50.868990] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2 00:11:20.276 [2024-10-09 00:18:50.868999] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1 00:11:20.276 [2024-10-09 00:18:50.869991] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x18-0x19, len = 2 00:11:20.276 [2024-10-09 00:18:50.870002] vfu_virtio.c:1135:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ queue 1 PCI_COMMON_Q_SIZE with 0x100 00:11:20.276 [2024-10-09 00:18:50.870027] virtio_vfio_user.c: 216:virtio_vfio_user_get_queue_size: *DEBUG*: queue 1, size 256 00:11:20.276 [2024-10-09 00:18:50.871000] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: 
/tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2 00:11:20.276 [2024-10-09 00:18:50.871009] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1 00:11:20.276 [2024-10-09 00:18:50.872014] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x20-0x23, len = 4 00:11:20.276 [2024-10-09 00:18:50.872022] vfu_virtio.c:1020:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_DESCLO with 0x673f8000 00:11:20.277 [2024-10-09 00:18:50.873019] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x24-0x27, len = 4 00:11:20.277 [2024-10-09 00:18:50.873028] vfu_virtio.c:1025:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_DESCHI with 0x2000 00:11:20.277 [2024-10-09 00:18:50.874022] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x28-0x2B, len = 4 00:11:20.277 [2024-10-09 00:18:50.874032] vfu_virtio.c:1030:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_AVAILLO with 0x673f9000 00:11:20.277 [2024-10-09 00:18:50.875024] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x2C-0x2F, len = 4 00:11:20.277 [2024-10-09 00:18:50.875032] vfu_virtio.c:1035:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_AVAILHI with 0x2000 00:11:20.277 [2024-10-09 00:18:50.876036] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x30-0x33, len = 4 00:11:20.277 [2024-10-09 00:18:50.876044] vfu_virtio.c:1040:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_USEDLO with 0x673fa000 00:11:20.277 [2024-10-09 00:18:50.877046] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x34-0x37, len = 4 00:11:20.277 [2024-10-09 00:18:50.877054] vfu_virtio.c:1045:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE queue 1 PCI_COMMON_Q_USEDHI with 0x2000 00:11:20.277 [2024-10-09 00:18:50.878055] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x1E-0x1F, len = 2 00:11:20.277 [2024-10-09 00:18:50.878070] vfu_virtio.c:1123:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_Q_NOFF with 0x1 00:11:20.277 [2024-10-09 00:18:50.879057] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2 00:11:20.277 [2024-10-09 00:18:50.879068] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x1 00:11:20.277 [2024-10-09 00:18:50.879076] vfu_virtio.c: 267:virtio_dev_enable_vq: *DEBUG*: vfu.blk: enable vq 1 00:11:20.277 [2024-10-09 00:18:50.879082] vfu_virtio.c: 71:virtio_dev_map_vq: *DEBUG*: vfu.blk: try to map vq 1 00:11:20.277 [2024-10-09 00:18:50.879091] vfu_virtio.c: 107:virtio_dev_map_vq: *DEBUG*: vfu.blk: map vq 1 successfully 00:11:20.277 [2024-10-09 00:18:50.879125] virtio_vfio_user.c: 331:virtio_vfio_user_setup_queue: *DEBUG*: queue 1 addresses: 00:11:20.277 [2024-10-09 00:18:50.879150] virtio_vfio_user.c: 332:virtio_vfio_user_setup_queue: *DEBUG*: desc_addr: 2000673f8000 00:11:20.277 [2024-10-09 00:18:50.879170] virtio_vfio_user.c: 333:virtio_vfio_user_setup_queue: *DEBUG*: aval_addr: 2000673f9000 00:11:20.277 [2024-10-09 00:18:50.879187] virtio_vfio_user.c: 
334:virtio_vfio_user_setup_queue: *DEBUG*: used_addr: 2000673fa000 00:11:20.277 [2024-10-09 00:18:50.880068] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1 00:11:20.277 [2024-10-09 00:18:50.880079] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xb 00:11:20.277 [2024-10-09 00:18:50.880111] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status b 00:11:20.277 [2024-10-09 00:18:50.880141] virtio_vfio_user.c: 77:virtio_vfio_user_set_status: *DEBUG*: device status f 00:11:20.277 [2024-10-09 00:18:50.881073] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1 00:11:20.277 [2024-10-09 00:18:50.881083] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0xf 00:11:20.277 [2024-10-09 00:18:50.881089] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status b, set status f 00:11:20.277 [2024-10-09 00:18:50.881097] vfu_virtio.c:1365:vfu_virtio_dev_start: *DEBUG*: start vfu.blk 00:11:20.277 [2024-10-09 00:18:50.882740] vfu_virtio.c:1377:vfu_virtio_dev_start: *DEBUG*: vfu.blk is started with ret 0 00:11:20.277 [2024-10-09 00:18:50.882803] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1 00:11:20.277 [2024-10-09 00:18:50.882812] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0xf 00:11:20.277 [2024-10-09 00:18:50.882844] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status f 00:11:20.277 VirtioBlk0 00:11:20.536 00:18:50 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@50 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /tmp/bdevperf.sock perform_tests 00:11:20.537 Running I/O for 30 seconds... 
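
With DRIVER_OK set, vfu.blk is started and bdevperf begins the measured run against all three initiator bdevs (two virtio-scsi LUNs and the virtio-blk disk). Note the queue bring-up pattern just above: each queue is selected with Q_SELECT, its size read back (0x100 = 256 entries), the descriptor/avail/used ring addresses programmed through Q_DESCLO/HI, Q_AVAILLO/HI and Q_USEDLO/HI, and the queue finally enabled with Q_ENABLE. The initiator-side flow can be reproduced by hand roughly as follows; this is a sketch assembled from commands visible in this log, and the bdevperf binary path, the -z (wait for perform_tests) usage and the 0x70 core mask are assumptions rather than lines taken from initiator_bdevperf.sh:

SPDK=/var/jenkins/workspace/vfio-user-phy-autotest/spdk

# Start bdevperf on its own RPC socket; -z makes it wait for a perform_tests RPC.
# Workload parameters mirror the job output below: 50/50 randrw, QD 256, 4 KiB, 30 s.
$SPDK/build/examples/bdevperf -r /tmp/bdevperf.sock -z -m 0x70 \
    -q 256 -o 4096 -w randrw -M 50 -t 30 &

# Attach initiator bdevs over vfio-user to the endpoints served by the target.
$SPDK/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller \
    --dev-type scsi --trtype vfio-user --traddr /tmp/vfu_devices/vfu.scsi VirtioScsi0
$SPDK/scripts/rpc.py -s /tmp/bdevperf.sock bdev_virtio_attach_controller \
    --dev-type blk --trtype vfio-user --traddr /tmp/vfu_devices/vfu.blk VirtioBlk0

# Kick off the measured run; the table and JSON below are its output.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /tmp/bdevperf.sock perform_tests
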
00:11:22.566 111512.00 IOPS, 435.59 MiB/s [2024-10-08T22:18:54.135Z] 111408.00 IOPS, 435.19 MiB/s [2024-10-08T22:18:55.068Z] 111249.33 IOPS, 434.57 MiB/s [2024-10-08T22:18:56.442Z] 111274.00 IOPS, 434.66 MiB/s [2024-10-08T22:18:57.376Z] 111311.00 IOPS, 434.81 MiB/s [2024-10-08T22:18:58.312Z] 111331.33 IOPS, 434.89 MiB/s [2024-10-08T22:18:59.243Z] 111332.14 IOPS, 434.89 MiB/s [2024-10-08T22:19:00.177Z] 111330.38 IOPS, 434.88 MiB/s [2024-10-08T22:19:01.109Z] 111328.11 IOPS, 434.88 MiB/s [2024-10-08T22:19:02.067Z] 111299.30 IOPS, 434.76 MiB/s [2024-10-08T22:19:03.448Z] 111309.55 IOPS, 434.80 MiB/s [2024-10-08T22:19:04.380Z] 111314.25 IOPS, 434.82 MiB/s [2024-10-08T22:19:05.315Z] 111321.00 IOPS, 434.85 MiB/s [2024-10-08T22:19:06.249Z] 111325.21 IOPS, 434.86 MiB/s [2024-10-08T22:19:07.183Z] 111334.27 IOPS, 434.90 MiB/s [2024-10-08T22:19:08.148Z] 111339.06 IOPS, 434.92 MiB/s [2024-10-08T22:19:09.081Z] 111339.59 IOPS, 434.92 MiB/s [2024-10-08T22:19:10.466Z] 111339.50 IOPS, 434.92 MiB/s [2024-10-08T22:19:11.398Z] 111331.42 IOPS, 434.89 MiB/s [2024-10-08T22:19:12.330Z] 111324.25 IOPS, 434.86 MiB/s [2024-10-08T22:19:13.292Z] 111329.00 IOPS, 434.88 MiB/s [2024-10-08T22:19:14.224Z] 111327.23 IOPS, 434.87 MiB/s [2024-10-08T22:19:15.155Z] 111330.83 IOPS, 434.89 MiB/s [2024-10-08T22:19:16.101Z] 111328.88 IOPS, 434.88 MiB/s [2024-10-08T22:19:17.485Z] 111333.48 IOPS, 434.90 MiB/s [2024-10-08T22:19:18.418Z] 111337.58 IOPS, 434.91 MiB/s [2024-10-08T22:19:19.351Z] 111342.93 IOPS, 434.93 MiB/s [2024-10-08T22:19:20.284Z] 111340.68 IOPS, 434.92 MiB/s [2024-10-08T22:19:21.218Z] 111334.79 IOPS, 434.90 MiB/s [2024-10-08T22:19:21.218Z] 111328.30 IOPS, 434.88 MiB/s
00:11:50.583 Latency(us)
00:11:50.583 [2024-10-08T22:19:21.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:50.583 Job: VirtioScsi0t0 (Core Mask 0x10, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:11:50.583 VirtioScsi0t0 : 30.01 25924.36 101.27 0.00 0.00 9869.05 1724.22 12420.63
00:11:50.583 Job: VirtioScsi0t1 (Core Mask 0x20, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:11:50.583 VirtioScsi0t1 : 30.01 25924.08 101.27 0.00 0.00 9869.25 1763.23 11671.65
00:11:50.583 Job: VirtioBlk0 (Core Mask 0x40, workload: randrw, percentage: 50, depth: 256, IO size: 4096)
00:11:50.583 VirtioBlk0 : 30.00 59474.27 232.32 0.00 0.00 4300.17 1763.23 6616.02
00:11:50.583 [2024-10-08T22:19:21.218Z] ===================================================================================================================
00:11:50.583 [2024-10-08T22:19:21.218Z] Total : 111322.72 434.85 0.00 0.00 6894.05 1724.22 12420.63
00:11:50.583 {
00:11:50.583 "results": [
00:11:50.583 {
00:11:50.583 "job": "VirtioScsi0t0",
00:11:50.583 "core_mask": "0x10",
00:11:50.583 "workload": "randrw",
00:11:50.583 "percentage": 50,
00:11:50.583 "status": "finished",
00:11:50.583 "queue_depth": 256,
00:11:50.583 "io_size": 4096,
00:11:50.583 "runtime": 30.007447,
00:11:50.583 "iops": 25924.364708533853,
00:11:50.583 "mibps": 101.26704964271036,
00:11:50.583 "io_failed": 0,
00:11:50.583 "io_timeout": 0,
00:11:50.583 "avg_latency_us": 9869.054768234186,
00:11:50.583 "min_latency_us": 1724.2209523809524,
00:11:50.583 "max_latency_us": 12420.63238095238
00:11:50.583 },
00:11:50.583 {
00:11:50.583 "job": "VirtioScsi0t1",
00:11:50.583 "core_mask": "0x20",
00:11:50.583 "workload": "randrw",
00:11:50.583 "percentage": 50,
00:11:50.583 "status": "finished",
00:11:50.583 "queue_depth": 256,
00:11:50.583 "io_size": 4096,
00:11:50.583 "runtime": 30.007848,
00:11:50.583 "iops": 25924.08492605001,
00:11:50.583 "mibps": 101.26595674238285,
00:11:50.583 "io_failed": 0,
00:11:50.583 "io_timeout": 0,
00:11:50.583 "avg_latency_us": 9869.245026361305,
00:11:50.583 "min_latency_us": 1763.230476190476,
00:11:50.583 "max_latency_us": 11671.649523809523
00:11:50.583 },
00:11:50.583 {
00:11:50.583 "job": "VirtioBlk0",
00:11:50.583 "core_mask": "0x40",
00:11:50.583 "workload": "randrw",
00:11:50.583 "percentage": 50,
00:11:50.583 "status": "finished",
00:11:50.583 "queue_depth": 256,
00:11:50.583 "io_size": 4096,
00:11:50.583 "runtime": 30.004725,
00:11:50.583 "iops": 59474.266136416845,
00:11:50.583 "mibps": 232.3213520953783,
00:11:50.583 "io_failed": 0,
00:11:50.583 "io_timeout": 0,
00:11:50.583 "avg_latency_us": 4300.167837951638,
00:11:50.583 "min_latency_us": 1763.230476190476,
00:11:50.583 "max_latency_us": 6616.015238095238
00:11:50.583 }
00:11:50.583 ],
00:11:50.583 "core_count": 3
00:11:50.583 }
00:11:50.583 00:19:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@52 -- # killprocess 2071365
00:11:50.583 00:19:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2071365 ']'
00:11:50.583 00:19:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2071365
00:11:50.583 00:19:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@955 -- # uname
00:11:50.583 00:19:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:50.583 00:19:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2071365
00:11:50.583 00:19:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_4
00:11:50.583 00:19:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']'
00:11:50.583 00:19:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2071365'
00:11:50.583 killing process with pid 2071365
00:11:50.583 00:19:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@969 -- # kill 2071365
00:11:50.583 Received shutdown signal, test time was about 30.000000 seconds
00:11:50.583
00:11:50.583 Latency(us)
00:11:50.583 [2024-10-08T22:19:21.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:50.583 [2024-10-08T22:19:21.218Z] ===================================================================================================================
00:11:50.583 [2024-10-08T22:19:21.218Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:11:50.583 [2024-10-09 00:19:21.173837] virtio_vfio_user.c: 77:virtio_vfio_user_set_status: *DEBUG*: device status 0
00:11:50.583 00:19:21 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@974 -- # wait 2071365
00:11:50.583 [2024-10-09 00:19:21.174612] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x14-0x14, len = 1
00:11:50.583 [2024-10-09 00:19:21.174642] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_STATUS with 0x0
00:11:50.583 [2024-10-09 00:19:21.174654] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status f, set status 0
00:11:50.583 [2024-10-09 00:19:21.174660] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk
00:11:50.583 [2024-10-09 00:19:21.174672] vfu_virtio.c: 
116:virtio_dev_unmap_vq: *DEBUG*: vfu.blk: unmap vq 0 00:11:50.583 [2024-10-09 00:19:21.174681] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.blk: unmap vq 1 00:11:50.583 [2024-10-09 00:19:21.174688] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting 00:11:50.583 [2024-10-09 00:19:21.175597] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: read bar4 0x14-0x14, len = 1 00:11:50.583 [2024-10-09 00:19:21.175615] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: READ PCI_COMMON_STATUS with 0x0 00:11:50.583 [2024-10-09 00:19:21.175631] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status 0 00:11:50.583 [2024-10-09 00:19:21.176609] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2 00:11:50.583 [2024-10-09 00:19:21.176626] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x0 00:11:50.583 [2024-10-09 00:19:21.177621] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2 00:11:50.583 [2024-10-09 00:19:21.177631] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x0 00:11:50.583 [2024-10-09 00:19:21.177638] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.blk: disable vq 0 00:11:50.583 [2024-10-09 00:19:21.177656] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 0 isn't enabled 00:11:50.583 [2024-10-09 00:19:21.178631] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x16-0x17, len = 2 00:11:50.583 [2024-10-09 00:19:21.178641] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_SELECT with 0x1 00:11:50.583 [2024-10-09 00:19:21.179635] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.blk: write bar4 0x1C-0x1D, len = 2 00:11:50.583 [2024-10-09 00:19:21.179645] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.blk: WRITE PCI_COMMON_Q_ENABLE with 0x0 00:11:50.583 [2024-10-09 00:19:21.179651] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.blk: disable vq 1 00:11:50.583 [2024-10-09 00:19:21.179658] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 1 isn't enabled 00:11:50.583 [2024-10-09 00:19:21.179690] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /tmp/vfu_devices/vfu.blk 00:11:50.583 [2024-10-09 00:19:21.182220] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000 00:11:50.583 [2024-10-09 00:19:21.212653] virtio_vfio_user.c: 77:virtio_vfio_user_set_status: *DEBUG*: device status 0 00:11:50.583 [2024-10-09 00:19:21.212998] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk 00:11:50.583 [2024-10-09 00:19:21.213021] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.blk isn't started 00:11:50.583 [2024-10-09 00:19:21.213028] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.blk resetting 00:11:50.583 [2024-10-09 00:19:21.213044] vfu_virtio.c:1416:vfu_virtio_detach_device: *DEBUG*: detach device vfu.blk 00:11:50.583 [2024-10-09 00:19:21.213050] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.blk 00:11:50.583 [2024-10-09 00:19:21.213062] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.blk isn't started 00:11:50.583 [2024-10-09 00:19:21.213403] 
vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x14-0x14, len = 1 00:11:50.583 [2024-10-09 00:19:21.213433] vfu_virtio.c: 974:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_STATUS with 0x0 00:11:50.583 [2024-10-09 00:19:21.213441] vfu_virtio.c: 214:virtio_dev_set_status: *DEBUG*: device current status f, set status 0 00:11:50.583 [2024-10-09 00:19:21.213448] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi 00:11:50.583 [2024-10-09 00:19:21.213462] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 0 00:11:50.583 [2024-10-09 00:19:21.213472] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 1 00:11:50.583 [2024-10-09 00:19:21.213477] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 2 00:11:50.583 [2024-10-09 00:19:21.213485] vfu_virtio.c: 116:virtio_dev_unmap_vq: *DEBUG*: vfu.scsi: unmap vq 3 00:11:50.583 [2024-10-09 00:19:21.213490] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting 00:11:50.583 [2024-10-09 00:19:21.214409] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: read bar4 0x14-0x14, len = 1 00:11:50.583 [2024-10-09 00:19:21.214422] vfu_virtio.c:1111:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: READ PCI_COMMON_STATUS with 0x0 00:11:50.583 [2024-10-09 00:19:21.214437] virtio_vfio_user.c: 65:virtio_vfio_user_get_status: *DEBUG*: device status 0 00:11:50.583 [2024-10-09 00:19:21.215412] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2 00:11:50.583 [2024-10-09 00:19:21.215421] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x0 00:11:50.583 [2024-10-09 00:19:21.216420] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2 00:11:50.583 [2024-10-09 00:19:21.216428] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0 00:11:50.583 [2024-10-09 00:19:21.216436] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 0 00:11:50.583 [2024-10-09 00:19:21.216442] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 0 isn't enabled 00:11:50.842 [2024-10-09 00:19:21.217425] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2 00:11:50.842 [2024-10-09 00:19:21.217434] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x1 00:11:50.842 [2024-10-09 00:19:21.218430] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2 00:11:50.842 [2024-10-09 00:19:21.218438] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0 00:11:50.842 [2024-10-09 00:19:21.218448] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 1 00:11:50.842 [2024-10-09 00:19:21.218453] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 1 isn't enabled 00:11:50.842 [2024-10-09 00:19:21.219440] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2 00:11:50.842 [2024-10-09 00:19:21.219449] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x2 00:11:50.842 [2024-10-09 00:19:21.220447] 
vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2 00:11:50.842 [2024-10-09 00:19:21.220455] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0 00:11:50.842 [2024-10-09 00:19:21.220463] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 2 00:11:50.842 [2024-10-09 00:19:21.220469] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 2 isn't enabled 00:11:50.842 [2024-10-09 00:19:21.221447] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x16-0x17, len = 2 00:11:50.842 [2024-10-09 00:19:21.221458] vfu_virtio.c: 986:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_SELECT with 0x3 00:11:50.842 [2024-10-09 00:19:21.222459] vfu_virtio.c:1257:virtio_vfu_access_bar4: *DEBUG*: /tmp/vfu_devices/vfu.scsi: write bar4 0x1C-0x1D, len = 2 00:11:50.842 [2024-10-09 00:19:21.222467] vfu_virtio.c:1003:virtio_vfu_pci_common_cfg: *DEBUG*: /tmp/vfu_devices/vfu.scsi: WRITE PCI_COMMON_Q_ENABLE with 0x0 00:11:50.842 [2024-10-09 00:19:21.222476] vfu_virtio.c: 301:virtio_dev_disable_vq: *DEBUG*: vfu.scsi: disable vq 3 00:11:50.842 [2024-10-09 00:19:21.222481] vfu_virtio.c: 305:virtio_dev_disable_vq: *NOTICE*: Queue 3 isn't enabled 00:11:50.842 [2024-10-09 00:19:21.222509] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /tmp/vfu_devices/vfu.scsi 00:11:50.842 [2024-10-09 00:19:21.225025] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x80000000 00:11:50.842 [2024-10-09 00:19:21.255716] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi 00:11:50.842 [2024-10-09 00:19:21.255733] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.scsi isn't started 00:11:50.842 [2024-10-09 00:19:21.255741] vfu_virtio.c: 190:vfu_virtio_dev_reset: *DEBUG*: device vfu.scsi resetting 00:11:50.842 [2024-10-09 00:19:21.255756] vfu_virtio.c:1416:vfu_virtio_detach_device: *DEBUG*: detach device vfu.scsi 00:11:50.842 [2024-10-09 00:19:21.255764] vfu_virtio.c:1388:vfu_virtio_dev_stop: *DEBUG*: stop vfu.scsi 00:11:50.842 [2024-10-09 00:19:21.255769] vfu_virtio.c:1391:vfu_virtio_dev_stop: *DEBUG*: vfu.scsi isn't started 00:11:55.026 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:11:55.026 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_delete_endpoint vfu.blk 00:11:55.026 [2024-10-09 00:19:25.607747] tgt_endpoint.c: 651:spdk_vfu_delete_endpoint: *NOTICE*: Destruct endpoint vfu.blk 00:11:55.026 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@57 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py vfu_virtio_delete_endpoint vfu.scsi 00:11:55.284 [2024-10-09 00:19:25.792459] tgt_endpoint.c: 651:spdk_vfu_delete_endpoint: *NOTICE*: Destruct endpoint vfu.scsi 00:11:55.284 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- virtio/initiator_bdevperf.sh@59 -- # killprocess 2070896 00:11:55.284 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2070896 ']' 00:11:55.284 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2070896 00:11:55.284 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- 
common/autotest_common.sh@955 -- # uname 00:11:55.284 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:55.284 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2070896 00:11:55.284 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:55.284 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:55.284 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2070896' 00:11:55.284 killing process with pid 2070896 00:11:55.284 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@969 -- # kill 2070896 00:11:55.284 00:19:25 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@974 -- # wait 2070896 00:11:58.582 00:11:58.582 real 0m43.576s 00:11:58.582 user 5m3.441s 00:11:58.582 sys 0m2.314s 00:11:58.582 00:19:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.582 00:19:29 vfio_user_qemu.vfio_user_virtio_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:11:58.582 ************************************ 00:11:58.582 END TEST vfio_user_virtio_bdevperf 00:11:58.582 ************************************ 00:11:58.582 00:19:29 vfio_user_qemu -- vfio_user/vfio_user.sh@20 -- # [[ y == y ]] 00:11:58.582 00:19:29 vfio_user_qemu -- vfio_user/vfio_user.sh@21 -- # run_test vfio_user_virtio_fs_fio /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh 00:11:58.582 00:19:29 vfio_user_qemu -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:58.582 00:19:29 vfio_user_qemu -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.582 00:19:29 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x 00:11:58.582 ************************************ 00:11:58.582 START TEST vfio_user_virtio_fs_fio 00:11:58.582 ************************************ 00:11:58.582 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh 00:11:58.582 * Looking for test storage... 
00:11:58.582 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio 00:11:58.582 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:58.582 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:58.582 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@338 -- # local 'op=<' 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@344 -- # case "$op" in 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@345 -- # : 1 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@365 -- # decimal 1 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@353 -- # local d=1 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@355 -- # echo 1 00:11:58.841 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@366 -- # decimal 2 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@353 -- # local d=2 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@355 -- # echo 2 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scripts/common.sh@368 -- # return 0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:58.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.842 --rc genhtml_branch_coverage=1 00:11:58.842 --rc genhtml_function_coverage=1 00:11:58.842 --rc genhtml_legend=1 00:11:58.842 --rc geninfo_all_blocks=1 00:11:58.842 --rc geninfo_unexecuted_blocks=1 00:11:58.842 00:11:58.842 ' 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:58.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.842 --rc genhtml_branch_coverage=1 00:11:58.842 --rc genhtml_function_coverage=1 00:11:58.842 --rc genhtml_legend=1 00:11:58.842 --rc geninfo_all_blocks=1 00:11:58.842 --rc geninfo_unexecuted_blocks=1 00:11:58.842 00:11:58.842 ' 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:58.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.842 --rc genhtml_branch_coverage=1 00:11:58.842 --rc genhtml_function_coverage=1 00:11:58.842 --rc genhtml_legend=1 00:11:58.842 --rc geninfo_all_blocks=1 00:11:58.842 --rc geninfo_unexecuted_blocks=1 00:11:58.842 00:11:58.842 ' 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:58.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.842 --rc genhtml_branch_coverage=1 00:11:58.842 --rc genhtml_function_coverage=1 00:11:58.842 --rc genhtml_legend=1 00:11:58.842 --rc geninfo_all_blocks=1 00:11:58.842 --rc geninfo_unexecuted_blocks=1 00:11:58.842 00:11:58.842 ' 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@6 -- # : 128 00:11:58.842 00:19:29 
vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@7 -- # : 512 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@6 -- # : false 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@7 -- # : /root/vhost_test 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@8 -- # : /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@9 -- # : qemu-img 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.. 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@14 -- # VM_PASSWORD=root 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/fio_fs.sh 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]' 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@2 -- # vhost_0_main_core=0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0 00:11:58.842 00:19:29 
vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:11:58.842 00:19:29 
vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@244 -- # check_cgroup 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@10 -- # echo 2 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@14 -- # [[ ! -e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]] 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/virtio/common.sh 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@12 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/autotest.config 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@1 -- # vhost_0_reactor_mask='[0-3]' 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@2 -- # vhost_0_main_core=0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@4 -- # VM_0_qemu_mask=4-5 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@5 -- # VM_0_qemu_numa_node=0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@7 -- # VM_1_qemu_mask=6-7 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@8 -- # VM_1_qemu_numa_node=0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@10 -- # VM_2_qemu_mask=8-9 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vfio_user/autotest.config@11 -- # VM_2_qemu_numa_node=0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@14 -- # get_vhost_dir 0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:11:58.842 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@14 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock' 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@16 -- # vhosttestinit 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@37 -- # '[' '' == iso ']' 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@41 -- # [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2.gz ]] 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@41 -- # [[ ! 
-e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@46 -- # [[ ! -f /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@18 -- # trap 'error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@20 -- # vfu_tgt_run 0 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@6 -- # local vhost_name=0 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@7 -- # local vfio_user_dir vfu_pid_file rpc_py 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@9 -- # get_vhost_dir 0 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@9 -- # vfio_user_dir=/root/vhost_test/vhost/0 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@10 -- # vfu_pid_file=/root/vhost_test/vhost/0/vhost.pid 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@11 -- # rpc_py='/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock' 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@13 -- # mkdir -p /root/vhost_test/vhost/0 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@15 -- # timing_enter vfu_tgt_start 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@17 -- # vfupid=2078042 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@16 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@18 -- # echo 2078042 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@20 -- # echo 'Process pid: 2078042' 00:11:58.843 Process pid: 2078042 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@21 -- # echo 'waiting for app to run...' 00:11:58.843 waiting for app to run... 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@22 -- # waitforlisten 2078042 /root/vhost_test/vhost/0/rpc.sock 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@831 -- # '[' -z 2078042 ']' 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@835 -- # local rpc_addr=/root/vhost_test/vhost/0/rpc.sock 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock...' 
00:11:58.843 Waiting for process to start up and listen on UNIX domain socket /root/vhost_test/vhost/0/rpc.sock... 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:58.843 00:19:29 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x 00:11:58.843 [2024-10-09 00:19:29.410363] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:11:58.843 [2024-10-09 00:19:29.410453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xf -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078042 ] 00:11:58.843 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.100 [2024-10-09 00:19:29.626043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.358 [2024-10-09 00:19:29.811069] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.358 [2024-10-09 00:19:29.811145] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.358 [2024-10-09 00:19:29.811168] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.358 [2024-10-09 00:19:29.811180] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@864 -- # return 0 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/common.sh@24 -- # timing_exit vfu_tgt_start 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@22 -- # vfu_vm_dir=/root/vhost_test/vms/vfu_tgt 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@23 -- # rm -rf /root/vhost_test/vms/vfu_tgt 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@24 -- # mkdir -p /root/vhost_test/vms/vfu_tgt 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@27 -- # disk_no=1 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@28 -- # vm_num=1 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@29 -- # job_file=default_fsdev.job 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@30 -- # be_virtiofs_dir=/tmp/vfio-test.1 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@31 -- # vm_virtiofs_dir=/tmp/virtiofs.1 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@33 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt 00:12:00.291 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@35 -- # rm -rf /tmp/vfio-test.1 00:12:00.548 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@36 -- # mkdir -p /tmp/vfio-test.1 00:12:00.548 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@39 -- # mktemp --tmpdir=/tmp/vfio-test.1 00:12:00.548 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@39 -- # 
tmpfile=/tmp/vfio-test.1/tmp.JvhnWjlVdm 00:12:00.548 00:19:30 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@41 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock fsdev_aio_create aio.1 /tmp/vfio-test.1 00:12:00.548 aio.1 00:12:00.548 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock vfu_virtio_create_fs_endpoint virtio.1 --fsdev-name aio.1 --tag vfu_test.1 --num-queues=2 --qsize=512 --packed-ring 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@45 -- # vm_setup --disk-type=vfio_user_virtio --force=1 --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disks=1 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@511 -- # xtrace_disable 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x 00:12:01.116 WARN: removing existing VM in '/root/vhost_test/vms/1' 00:12:01.116 INFO: Creating new VM in /root/vhost_test/vms/1 00:12:01.116 INFO: No '--os-mode' parameter provided - using 'snapshot' 00:12:01.116 INFO: TASK MASK: 6-7 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@664 -- # local node_num=0 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@665 -- # local boot_disk_present=false 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@666 -- # notice 'NUMA NODE: 0' 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0' 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out= 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0' 00:12:01.116 INFO: NUMA NODE: 0 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@667 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize) 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@668 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind") 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@669 -- # [[ snapshot == snapshot ]] 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@669 -- # cmd+=(-snapshot) 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@670 -- # [[ -n '' ]] 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@671 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait") 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@672 -- # cmd+=(-numa "node,memdev=mem") 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@673 -- # cmd+=(-pidfile "$qemu_pid_file") 00:12:01.116 00:19:31 
vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@674 -- # cmd+=(-serial "file:$vm_dir/serial.log") 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@675 -- # cmd+=(-D "$vm_dir/qemu.log") 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@676 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios") 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@677 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765") 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@678 -- # cmd+=(-net nic) 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@679 -- # [[ -z '' ]] 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@680 -- # cmd+=(-drive "file=$os,if=none,id=os_disk") 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@681 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0") 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@684 -- # (( 1 == 0 )) 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@686 -- # (( 1 == 0 )) 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@691 -- # for disk in "${disks[@]}" 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@694 -- # IFS=, 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@694 -- # read -r disk disk_type _ 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@695 -- # [[ -z '' ]] 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@695 -- # disk_type=vfio_user_virtio 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@697 -- # case $disk_type in 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@759 -- # notice 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out= 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1' 00:12:01.116 INFO: using socket /root/vhost_test/vms/vfu_tgt/virtio.1 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@760 -- # cmd+=(-device "vfio-user-pci,x-msg-timeout=5000,socket=$VM_DIR/vfu_tgt/virtio.$disk") 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@761 -- # [[ 1 == '' ]] 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@773 -- # [[ -n '' ]] 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@778 -- # (( 0 )) 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@779 -- # notice 'Saving to 
/root/vhost_test/vms/1/run.sh' 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/1/run.sh' 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out= 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/1/run.sh' 00:12:01.116 INFO: Saving to /root/vhost_test/vms/1/run.sh 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@780 -- # cat 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@780 -- # printf '%s\n' taskset -a -c 6-7 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :101 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10102,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/1/qemu.pid -serial file:/root/vhost_test/vms/1/serial.log -D /root/vhost_test/vms/1/qemu.log -chardev file,path=/root/vhost_test/vms/1/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10100-:22,hostfwd=tcp::10101-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device vfio-user-pci,x-msg-timeout=5000,socket=/root/vhost_test/vms/vfu_tgt/virtio.1 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@817 -- # chmod +x /root/vhost_test/vms/1/run.sh 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@820 -- # echo 10100 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@821 -- # echo 10101 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@822 -- # echo 10102 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@824 -- # rm -f /root/vhost_test/vms/1/migration_port 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@825 -- # [[ -z '' ]] 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@827 -- # echo 10104 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@828 -- # echo 101 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@830 -- # [[ -z '' ]] 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@831 -- # [[ -z '' ]] 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@46 -- # vm_run 1 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@835 -- # local OPTIND optchar vm 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@836 -- # local run_all=false 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@837 -- # local vms_to_run= 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@839 -- # getopts a-: optchar 00:12:01.116 00:19:31 
vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@849 -- # false 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@852 -- # shift 0 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@853 -- # for vm in "$@" 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@854 -- # vm_num_is_valid 1 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@855 -- # [[ ! -x /root/vhost_test/vms/1/run.sh ]] 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@859 -- # vms_to_run+=' 1' 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@863 -- # for vm in $vms_to_run 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@864 -- # vm_is_running 1 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@366 -- # return 1 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@869 -- # notice 'running /root/vhost_test/vms/1/run.sh' 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/1/run.sh' 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out= 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift 00:12:01.116 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/1/run.sh' 00:12:01.116 INFO: running /root/vhost_test/vms/1/run.sh 00:12:01.117 00:19:31 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@870 -- # /root/vhost_test/vms/1/run.sh 00:12:01.117 Running VM in /root/vhost_test/vms/1 00:12:01.374 [2024-10-09 00:19:31.777379] tgt_endpoint.c: 165:tgt_accept_poller: *NOTICE*: /root/vhost_test/vms/vfu_tgt/virtio.1: attached successfully 00:12:01.374 Waiting for QEMU pid file 00:12:02.310 === qemu.log === 00:12:02.310 === qemu.log === 00:12:02.310 00:19:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@47 -- # vm_wait_for_boot 60 1 00:12:02.310 00:19:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@906 -- # assert_number 60 00:12:02.310 00:19:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@274 -- # [[ 60 =~ [0-9]+ ]] 00:12:02.310 00:19:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@274 -- # return 0 00:12:02.311 00:19:32 
vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@908 -- # xtrace_disable 00:12:02.311 00:19:32 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x 00:12:02.311 INFO: Waiting for VMs to boot 00:12:02.311 INFO: waiting for VM1 (/root/vhost_test/vms/1) 00:12:24.231 00:12:24.231 INFO: VM1 ready 00:12:24.231 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:12:24.231 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:12:24.231 INFO: all VMs ready 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@966 -- # return 0 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@49 -- # vm_exec 1 'mkdir /tmp/virtiofs.1' 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@331 -- # local vm_num=1 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@332 -- # shift 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:12:24.231 00:19:54 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'mkdir /tmp/virtiofs.1' 00:12:24.231 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
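For orientation, the backend setup traced above reduces to a short RPC sequence against the target's socket. A condensed sketch, with the Jenkins workspace prefix dropped and the target simply backgrounded for brevity (the real harness instead waits on the RPC socket via waitforlisten):

# Start the SPDK target that will serve the vfio-user endpoint.
build/bin/spdk_tgt -r /root/vhost_test/vhost/0/rpc.sock -m 0xf -s 512 &

rpc='scripts/rpc.py -s /root/vhost_test/vhost/0/rpc.sock'
$rpc vfu_tgt_set_base_path /root/vhost_test/vms/vfu_tgt    # directory where endpoint sockets are created
$rpc fsdev_aio_create aio.1 /tmp/vfio-test.1               # back the filesystem with a host directory
$rpc vfu_virtio_create_fs_endpoint virtio.1 --fsdev-name aio.1 \
    --tag vfu_test.1 --num-queues=2 --qsize=512 --packed-ring

The endpoint name maps to the socket /root/vhost_test/vms/vfu_tgt/virtio.1 that the generated run.sh hands to QEMU via -device vfio-user-pci,...,socket=..., which is how the guest sees the filesystem as a virtio-fs PCI device.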
00:12:24.489 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@50 -- # vm_exec 1 'mount -t virtiofs vfu_test.1 /tmp/virtiofs.1' 00:12:24.489 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:12:24.489 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.489 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:24.489 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@331 -- # local vm_num=1 00:12:24.489 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@332 -- # shift 00:12:24.489 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:12:24.489 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:12:24.489 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.489 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:24.489 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:12:24.489 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:12:24.489 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'mount -t virtiofs vfu_test.1 /tmp/virtiofs.1' 00:12:24.489 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@52 -- # basename /tmp/vfio-test.1/tmp.JvhnWjlVdm 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@52 -- # vm_exec 1 'ls /tmp/virtiofs.1/tmp.JvhnWjlVdm' 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@331 -- # local vm_num=1 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@332 -- # shift 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:12:24.746 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'ls /tmp/virtiofs.1/tmp.JvhnWjlVdm' 00:12:24.746 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
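Inside the guest, the --tag passed to vfu_virtio_create_fs_endpoint is the name the filesystem is mounted by; the two vm_exec calls above amount to running, over the forwarded SSH port:

# Guest side: mount the virtio-fs device by its endpoint tag.
mkdir /tmp/virtiofs.1
mount -t virtiofs vfu_test.1 /tmp/virtiofs.1

The ls that follows is the sanity check: the temp file mktemp created on the host under /tmp/vfio-test.1 should be visible through the guest mount.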
00:12:25.004 /tmp/virtiofs.1/tmp.JvhnWjlVdm 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@53 -- # vm_start_fio_server --fio-bin=/usr/src/fio-static/fio 1 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@970 -- # local OPTIND optchar 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@971 -- # local readonly= 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@972 -- # local fio_bin= 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@973 -- # getopts :-: optchar 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@974 -- # case "$optchar" in 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@976 -- # case "$OPTARG" in 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@977 -- # local fio_bin=/usr/src/fio-static/fio 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@973 -- # getopts :-: optchar 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@986 -- # shift 1 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@987 -- # for vm_num in "$@" 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@988 -- # notice 'Starting fio server on VM1' 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Starting fio server on VM1' 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out= 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Starting fio server on VM1' 00:12:25.004 INFO: Starting fio server on VM1 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@989 -- # [[ /usr/src/fio-static/fio != '' ]] 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@990 -- # vm_exec 1 'cat > /root/fio; chmod +x /root/fio' 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@331 -- # local vm_num=1 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@332 -- # shift 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:12:25.004 00:19:55 
vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:12:25.004 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/fio; chmod +x /root/fio' 00:12:25.004 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:12:25.262 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@991 -- # vm_exec 1 /root/fio --eta=never --server --daemonize=/root/fio.pid 00:12:25.262 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:12:25.262 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.262 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:25.262 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@331 -- # local vm_num=1 00:12:25.262 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@332 -- # shift 00:12:25.262 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:12:25.262 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:12:25.262 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.262 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:25.262 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:12:25.262 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:12:25.262 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 /root/fio --eta=never --server --daemonize=/root/fio.pid 00:12:25.262 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
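fio runs here in client/server mode rather than interactively: the static binary is first copied into the guest and daemonized as a server, and the host later submits the job through the forwarded port (hostfwd=tcp::10101-:8765 in the QEMU command above, 8765 being fio's default server port). The shape of the pair:

# In the guest: accept remote job submissions.
/root/fio --eta=never --server --daemonize=/root/fio.pid
# On the host, later: drive the guest server with a remote job file.
/usr/src/fio-static/fio --client=127.0.0.1,10101 --remote-config /root/default_fsdev.job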
00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@54 -- # run_fio --fio-bin=/usr/src/fio-static/fio --job-file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job --out=/root/vhost_test/fio_results --vm=1:/tmp/virtiofs.1/test 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1046 -- # local arg 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1047 -- # local job_file= 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1048 -- # local fio_bin= 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1049 -- # vms=() 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1049 -- # local vms 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1050 -- # local out= 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1051 -- # local vm 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1052 -- # local run_server_mode=true 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1053 -- # local run_plugin_mode=false 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1054 -- # local fio_start_cmd 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1055 -- # local fio_output_format=normal 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1056 -- # local fio_gtod_reduce=false 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1057 -- # local wait_for_fio=true 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1059 -- # for arg in "$@" 00:12:25.520 00:19:55 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1060 -- # case "$arg" in 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1062 -- # local fio_bin=/usr/src/fio-static/fio 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1059 -- # for arg in "$@" 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1060 -- # case "$arg" in 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1061 -- # local job_file=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1059 -- # for arg in "$@" 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1060 -- # case "$arg" in 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1065 -- # local out=/root/vhost_test/fio_results 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1066 -- # mkdir -p /root/vhost_test/fio_results 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1059 -- # for arg in "$@" 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1060 -- # case "$arg" in 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1063 -- # vms+=("${arg#*=}") 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1085 -- # [[ -n /usr/src/fio-static/fio ]] 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1085 -- # [[ ! 
-r /usr/src/fio-static/fio ]] 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1090 -- # [[ -z /usr/src/fio-static/fio ]] 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1094 -- # [[ ! -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job ]] 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1099 -- # fio_start_cmd='/usr/src/fio-static/fio --eta=never ' 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1101 -- # local job_fname 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1102 -- # basename /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1102 -- # job_fname=default_fsdev.job 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1103 -- # log_fname=default_fsdev.log 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1104 -- # fio_start_cmd+=' --output=/root/vhost_test/fio_results/default_fsdev.log --output-format=normal ' 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1107 -- # for vm in "${vms[@]}" 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1108 -- # local vm_num=1 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1109 -- # local vmdisks=/tmp/virtiofs.1/test 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1111 -- # sed 's@filename=@filename=/tmp/virtiofs.1/test@;s@description=\(.*\)@description=\1 (VM=1)@' /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/fio_jobs/default_fsdev.job 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1112 -- # vm_exec 1 'cat > /root/default_fsdev.job' 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@331 -- # local vm_num=1 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@332 -- # shift 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:12:25.520 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'cat > /root/default_fsdev.job' 00:12:25.520 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 
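The job file the guest receives is the shared default_fsdev.job template with its filename= line rewritten to point at this VM's virtiofs mount; per the sed seen above, the rewrite happens on the host and the result is piped into the guest (paths abbreviated, vm_exec being the harness SSH helper):

# Rewrite the job template per VM, then install it in the guest.
sed 's@filename=@filename=/tmp/virtiofs.1/test@;s@description=\(.*\)@description=\1 (VM=1)@' \
    test/vhost/common/fio_jobs/default_fsdev.job | vm_exec 1 'cat > /root/default_fsdev.job'

The read-back that follows confirms the result: filename=/tmp/virtiofs.1/test with the template's 4k blocksize, iodepth=512 and randrw settings intact.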
00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1114 -- # false 00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1118 -- # vm_exec 1 cat /root/default_fsdev.job 00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@331 -- # local vm_num=1 00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@332 -- # shift 00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:12:25.778 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 cat /root/default_fsdev.job 00:12:25.778 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:12:26.058 [global] 00:12:26.058 blocksize=4k 00:12:26.058 iodepth=512 00:12:26.058 ioengine=libaio 00:12:26.058 size=1G 00:12:26.058 group_reporting 00:12:26.058 thread 00:12:26.058 numjobs=1 00:12:26.058 direct=1 00:12:26.058 invalidate=1 00:12:26.058 rw=randrw 00:12:26.058 do_verify=1 00:12:26.058 filename=/tmp/virtiofs.1/test 00:12:26.058 [job0] 00:12:26.058 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1120 -- # true 00:12:26.058 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1121 -- # vm_fio_socket 1 00:12:26.058 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@319 -- # vm_num_is_valid 1 00:12:26.058 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.058 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:26.058 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@320 -- # local vm_dir=/root/vhost_test/vms/1 00:12:26.058 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@322 -- # cat /root/vhost_test/vms/1/fio_socket 00:12:26.058 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1121 -- # fio_start_cmd+='--client=127.0.0.1,10101 --remote-config /root/default_fsdev.job ' 00:12:26.058 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1124 -- # true 00:12:26.058 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1140 -- # true 00:12:26.058 00:19:56 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1154 -- # /usr/src/fio-static/fio --eta=never --output=/root/vhost_test/fio_results/default_fsdev.log --output-format=normal --client=127.0.0.1,10101 --remote-config /root/default_fsdev.job 00:12:44.124 00:20:13 vfio_user_qemu.vfio_user_virtio_fs_fio -- 
vhost/common.sh@1155 -- # sleep 1 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1157 -- # [[ normal == \j\s\o\n ]] 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1165 -- # [[ ! -n '' ]] 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@1166 -- # cat /root/vhost_test/fio_results/default_fsdev.log 00:12:44.124 hostname=vhostfedora-cloud-23052, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-3.35, flags=1 00:12:44.124 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=512 00:12:44.124 Starting 1 thread 00:12:44.124 job0: Laying out IO file (1 file / 1024MiB) 00:12:44.124 00:12:44.124 job0: (groupid=0, jobs=1): err= 0: pid=968: Wed Oct 9 00:20:13 2024 00:12:44.124 read: IOPS=43.4k, BW=169MiB/s (178MB/s)(512MiB/3022msec) 00:12:44.124 slat (nsec): min=1429, max=271794, avg=3156.29, stdev=1651.98 00:12:44.124 clat (usec): min=1322, max=12849, avg=5933.11, stdev=340.03 00:12:44.124 lat (usec): min=1324, max=12853, avg=5936.27, stdev=340.05 00:12:44.124 clat percentiles (usec): 00:12:44.124 | 1.00th=[ 5604], 5.00th=[ 5866], 10.00th=[ 5866], 20.00th=[ 5866], 00:12:44.124 | 30.00th=[ 5932], 40.00th=[ 5932], 50.00th=[ 5932], 60.00th=[ 5932], 00:12:44.124 | 70.00th=[ 5932], 80.00th=[ 5932], 90.00th=[ 5997], 95.00th=[ 5997], 00:12:44.124 | 99.00th=[ 6325], 99.50th=[ 6849], 99.90th=[10945], 99.95th=[11863], 00:12:44.124 | 99.99th=[12518] 00:12:44.124 bw ( KiB/s): min=172400, max=174664, per=100.00%, avg=173494.67, stdev=874.33, samples=6 00:12:44.124 iops : min=43100, max=43666, avg=43373.67, stdev=218.58, samples=6 00:12:44.124 write: IOPS=43.4k, BW=169MiB/s (178MB/s)(512MiB/3022msec); 0 zone resets 00:12:44.124 slat (nsec): min=1609, max=726851, avg=3517.47, stdev=2735.53 00:12:44.124 clat (usec): min=1271, max=11347, avg=5858.28, stdev=310.83 00:12:44.124 lat (usec): min=1273, max=11351, avg=5861.80, stdev=310.81 00:12:44.124 clat percentiles (usec): 00:12:44.124 | 1.00th=[ 5342], 5.00th=[ 5800], 10.00th=[ 5800], 20.00th=[ 5800], 00:12:44.124 | 30.00th=[ 5866], 40.00th=[ 5866], 50.00th=[ 5866], 60.00th=[ 5866], 00:12:44.124 | 70.00th=[ 5866], 80.00th=[ 5866], 90.00th=[ 5932], 95.00th=[ 5932], 00:12:44.124 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[10159], 99.95th=[10814], 00:12:44.124 | 99.99th=[11338] 00:12:44.124 bw ( KiB/s): min=172272, max=175248, per=100.00%, avg=173554.67, stdev=1085.02, samples=6 00:12:44.124 iops : min=43068, max=43812, avg=43388.67, stdev=271.25, samples=6 00:12:44.124 lat (msec) : 2=0.07%, 4=0.29%, 10=99.49%, 20=0.15% 00:12:44.124 cpu : usr=10.82%, sys=30.69%, ctx=8323, majf=0, minf=7 00:12:44.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:12:44.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:44.124 issued rwts: total=131040,131104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.124 latency : target=0, window=0, percentile=100.00%, depth=512 00:12:44.124 00:12:44.124 Run status group 0 (all jobs): 00:12:44.124 READ: bw=169MiB/s (178MB/s), 169MiB/s-169MiB/s (178MB/s-178MB/s), io=512MiB (537MB), run=3022-3022msec 00:12:44.124 WRITE: bw=169MiB/s (178MB/s), 169MiB/s-169MiB/s (178MB/s-178MB/s), io=512MiB (537MB), run=3022-3022msec 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@55 -- # vm_exec 1 'umount /tmp/virtiofs.1' 00:12:44.124 00:20:14 
vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@331 -- # local vm_num=1 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@332 -- # shift 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'umount /tmp/virtiofs.1' 00:12:44.124 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@58 -- # notice 'Shutting down virtual machine...' 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine...' 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out= 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine...' 00:12:44.124 INFO: Shutting down virtual machine... 
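The shutdown sequence traced below is deliberately graceful: instead of killing QEMU, the harness asks the guest to power itself off over SSH, then polls the QEMU pid (kill -0 only tests for process existence) until it exits or a 90-iteration budget runs out. Boiled down:

# Graceful VM shutdown, as traced below; SSH drops as the guest halts.
vm_exec 1 'nohup sh -c "shutdown -h -P now"'
vm_pid=$(cat /root/vhost_test/vms/1/qemu.pid)
timeo=90
while ((timeo-- > 0)) && kill -0 "$vm_pid" 2>/dev/null; do
    sleep 1
done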
00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@59 -- # vm_shutdown_all 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@480 -- # local timeo=90 vms vm 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@482 -- # vms=($(vm_list_all)) 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@482 -- # vm_list_all 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@459 -- # vms=() 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@459 -- # local vms 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@460 -- # vms=("$VM_DIR"/+([0-9])) 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@461 -- # (( 1 > 0 )) 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@462 -- # basename --multiple /root/vhost_test/vms/1 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@484 -- # for vm in "${vms[@]}" 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@485 -- # vm_shutdown 1 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@410 -- # vm_num_is_valid 1 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@411 -- # local vm_dir=/root/vhost_test/vms/1 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@412 -- # [[ ! -d /root/vhost_test/vms/1 ]] 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@417 -- # vm_is_running 1 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:12:44.124 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # local vm_pid 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # vm_pid=2078419 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # /bin/kill -0 2078419 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@373 -- # return 0 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@424 -- # notice 'Shutting down virtual machine /root/vhost_test/vms/1' 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Shutting down virtual machine /root/vhost_test/vms/1' 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out= 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Shutting down virtual machine /root/vhost_test/vms/1' 00:12:44.125 INFO: Shutting down virtual machine /root/vhost_test/vms/1 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@425 -- # set +e 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@426 -- # vm_exec 1 'nohup sh -c '\''shutdown -h -P now'\''' 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@329 -- # vm_num_is_valid 1 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@331 -- # local vm_num=1 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@332 -- # shift 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # vm_ssh_socket 1 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@312 -- # vm_num_is_valid 1 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/1 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/1/ssh_socket 00:12:44.125 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10100 127.0.0.1 'nohup sh -c '\''shutdown -h -P now'\''' 00:12:44.125 Warning: Permanently added '[127.0.0.1]:10100' (ED25519) to the list of known hosts. 00:12:44.381 Connection to 127.0.0.1 closed by remote host. 
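Once the guest is down, the vfu target itself is stopped the same cautious way in the vhost_kill trace further below: SIGINT to the pid recorded in vhost.pid, then up to 60 one-second polls until kill -0 reports the process gone, and finally the test directory is removed. Condensed:

# vhost_kill, condensed from the trace further below.
vhost_pid=$(cat /root/vhost_test/vhost/0/vhost.pid)
kill -INT "$vhost_pid"
for ((i = 0; i < 60; i++)); do
    kill -0 "$vhost_pid" 2>/dev/null || break
    sleep 1
done
rm -rf /root/vhost_test/vhost/0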
00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@426 -- # true 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@427 -- # notice 'VM1 is shutting down - wait a while to complete' 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'VM1 is shutting down - wait a while to complete' 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out= 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: VM1 is shutting down - wait a while to complete' 00:12:44.381 INFO: VM1 is shutting down - wait a while to complete 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@428 -- # set -e 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@488 -- # notice 'Waiting for VMs to shutdown...' 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'Waiting for VMs to shutdown...' 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out= 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: Waiting for VMs to shutdown...' 00:12:44.381 INFO: Waiting for VMs to shutdown... 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@491 -- # vm_is_running 1 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@369 -- # local vm_pid 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # cat /root/vhost_test/vms/1/qemu.pid 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@370 -- # vm_pid=2078419 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@372 -- # /bin/kill -0 2078419 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@373 -- # return 0 00:12:44.381 00:20:14 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@493 -- # sleep 1 00:12:45.310 00:20:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@489 -- # (( timeo-- > 0 && 1 > 0 )) 00:12:45.310 00:20:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@490 -- # for vm in "${!vms[@]}" 00:12:45.310 00:20:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@491 -- # vm_is_running 1 00:12:45.310 00:20:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@362 -- # vm_num_is_valid 1 00:12:45.310 00:20:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.310 00:20:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@302 -- # return 0 00:12:45.310 00:20:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/1 00:12:45.310 00:20:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/1/qemu.pid ]] 00:12:45.310 00:20:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@366 -- # return 1 00:12:45.310 00:20:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@491 -- # unset -v 'vms[vm]' 00:12:45.310 00:20:15 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@493 -- # sleep 1 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@489 -- # (( timeo-- > 0 && 0 > 0 )) 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@496 -- # (( 0 == 0 )) 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@497 -- # notice 'All VMs successfully shut down' 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'All VMs successfully shut down' 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out= 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: All VMs successfully shut down' 00:12:46.680 INFO: All VMs successfully shut down 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@498 -- # return 0 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@61 -- # vhost_kill 0 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@202 -- # local rc=0 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@203 -- # local vhost_name=0 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@205 -- # [[ -z 0 
]] 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@210 -- # local vhost_dir 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@211 -- # get_vhost_dir 0 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@105 -- # local vhost_name=0 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@107 -- # [[ -z 0 ]] 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@112 -- # echo /root/vhost_test/vhost/0 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@211 -- # vhost_dir=/root/vhost_test/vhost/0 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@212 -- # local vhost_pid_file=/root/vhost_test/vhost/0/vhost.pid 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@214 -- # [[ ! -r /root/vhost_test/vhost/0/vhost.pid ]] 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@219 -- # timing_enter vhost_kill 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@220 -- # local vhost_pid 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@221 -- # cat /root/vhost_test/vhost/0/vhost.pid 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@221 -- # vhost_pid=2078042 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@222 -- # notice 'killing vhost (PID 2078042) app' 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'killing vhost (PID 2078042) app' 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out= 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@71 -- # echo -e 'INFO: killing vhost (PID 2078042) app' 00:12:46.680 INFO: killing vhost (PID 2078042) app 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@224 -- # kill -INT 2078042 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@225 -- # notice 'sent SIGINT to vhost app - waiting 60 seconds to exit' 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@94 -- # message INFO 'sent SIGINT to vhost app - waiting 60 seconds to exit' 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@60 -- # local verbose_out 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@61 -- # false 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@62 -- # verbose_out= 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@69 -- # local msg_type=INFO 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@70 -- # shift 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- 
vhost/common.sh@71 -- # echo -e 'INFO: sent SIGINT to vhost app - waiting 60 seconds to exit' 00:12:46.680 INFO: sent SIGINT to vhost app - waiting 60 seconds to exit 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i = 0 )) 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 )) 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 2078042 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo . 00:12:46.680 . 00:12:46.680 00:20:16 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1 00:12:47.612 00:20:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ )) 00:12:47.612 00:20:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 )) 00:12:47.612 00:20:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 2078042 00:12:47.612 00:20:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo . 00:12:47.612 . 00:12:47.612 00:20:17 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1 00:12:47.612 [2024-10-09 00:20:18.029322] vfu_virtio_fs.c: 300:_vfu_virtio_fs_fuse_dispatcher_delete_cpl: *NOTICE*: FUSE dispatcher deleted 00:12:48.544 00:20:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ )) 00:12:48.544 00:20:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 )) 00:12:48.544 00:20:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 2078042 00:12:48.544 00:20:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@228 -- # echo . 00:12:48.544 . 00:12:48.544 00:20:18 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@229 -- # sleep 1 00:12:49.511 00:20:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i++ )) 00:12:49.511 00:20:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@226 -- # (( i < 60 )) 00:12:49.511 00:20:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@227 -- # kill -0 2078042 00:12:49.511 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 227: kill: (2078042) - No such process 00:12:49.511 00:20:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@231 -- # break 00:12:49.511 00:20:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@234 -- # kill -0 2078042 00:12:49.511 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 234: kill: (2078042) - No such process 00:12:49.511 00:20:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@239 -- # kill -0 2078042 00:12:49.511 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh: line 239: kill: (2078042) - No such process 00:12:49.511 00:20:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@250 -- # timing_exit vhost_kill 00:12:49.511 00:20:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:49.511 00:20:19 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x 00:12:49.511 00:20:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@252 -- # rm -rf /root/vhost_test/vhost/0 00:12:49.511 00:20:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@254 -- # return 0 00:12:49.511 00:20:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- virtio/fio_fs.sh@63 -- # vhosttestfini 00:12:49.511 00:20:20 
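The vhost_kill sequence above is a standard graceful-shutdown wait: read the pid from vhost.pid, send SIGINT, then probe with kill -0 once per second for up to 60 seconds, treating "No such process" as a successful exit before removing the state directory. A minimal sketch of that pattern in bash, assuming a readable pidfile (illustrative, not the literal common.sh source):

    # Wait for a daemon to exit after SIGINT; give up after 60 seconds.
    vhost_kill_sketch() {
        local pidfile=$1 pid i
        [[ -r $pidfile ]] || return 1
        pid=$(<"$pidfile")
        kill -INT "$pid"
        for ((i = 0; i < 60; i++)); do
            # kill -0 delivers no signal; it only tests whether the pid exists.
            kill -0 "$pid" 2>/dev/null || break
            sleep 1
        done
        rm -rf "$(dirname "$pidfile")"
    }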
vfio_user_qemu.vfio_user_virtio_fs_fio -- vhost/common.sh@54 -- # '[' '' == iso ']' 00:12:49.511 00:12:49.511 real 0m50.930s 00:12:49.511 user 3m21.024s 00:12:49.511 sys 0m2.560s 00:12:49.511 00:20:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.511 00:20:20 vfio_user_qemu.vfio_user_virtio_fs_fio -- common/autotest_common.sh@10 -- # set +x 00:12:49.511 ************************************ 00:12:49.511 END TEST vfio_user_virtio_fs_fio 00:12:49.511 ************************************ 00:12:49.511 00:20:20 vfio_user_qemu -- vfio_user/vfio_user.sh@26 -- # vhosttestfini 00:12:49.511 00:20:20 vfio_user_qemu -- vhost/common.sh@54 -- # '[' iso == iso ']' 00:12:49.511 00:20:20 vfio_user_qemu -- vhost/common.sh@55 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/setup.sh reset 00:12:52.045 Waiting for block devices as requested 00:12:52.302 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:12:52.302 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:12:52.302 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:12:52.560 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:12:52.560 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:12:52.560 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:12:52.560 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:12:52.819 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:12:52.819 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:12:52.819 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:12:52.819 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:12:53.078 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:12:53.078 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:12:53.078 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:12:53.337 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:12:53.337 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:12:53.337 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:12:53.601 00:12:53.601 real 6m14.815s 00:12:53.601 user 26m27.453s 00:12:53.601 sys 0m18.909s 00:12:53.601 00:20:24 vfio_user_qemu -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.601 00:20:24 vfio_user_qemu -- common/autotest_common.sh@10 -- # set +x 00:12:53.601 ************************************ 00:12:53.601 END TEST vfio_user_qemu 00:12:53.601 ************************************ 00:12:53.601 00:20:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:12:53.601 00:20:24 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:12:53.601 00:20:24 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:12:53.601 00:20:24 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:12:53.601 00:20:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:12:53.601 00:20:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:12:53.601 00:20:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:12:53.601 00:20:24 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:12:53.601 00:20:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:12:53.601 00:20:24 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:12:53.601 00:20:24 -- spdk/autotest.sh@366 -- # [[ 1 -eq 1 ]] 00:12:53.601 00:20:24 -- spdk/autotest.sh@367 -- # run_test sma /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/sma.sh 00:12:53.601 00:20:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:53.601 00:20:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.601 00:20:24 -- common/autotest_common.sh@10 -- # set +x 00:12:53.601 ************************************ 00:12:53.602 START TEST sma 00:12:53.602 ************************************ 00:12:53.602 00:20:24 sma -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/sma.sh 00:12:53.602 * Looking for test storage... 00:12:53.602 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma 00:12:53.602 00:20:24 sma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:53.602 00:20:24 sma -- common/autotest_common.sh@1681 -- # lcov --version 00:12:53.602 00:20:24 sma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:53.602 00:20:24 sma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:53.602 00:20:24 sma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:53.602 00:20:24 sma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:53.602 00:20:24 sma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:53.602 00:20:24 sma -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.602 00:20:24 sma -- scripts/common.sh@336 -- # read -ra ver1 00:12:53.602 00:20:24 sma -- scripts/common.sh@337 -- # IFS=.-: 00:12:53.602 00:20:24 sma -- scripts/common.sh@337 -- # read -ra ver2 00:12:53.602 00:20:24 sma -- scripts/common.sh@338 -- # local 'op=<' 00:12:53.602 00:20:24 sma -- scripts/common.sh@340 -- # ver1_l=2 00:12:53.602 00:20:24 sma -- scripts/common.sh@341 -- # ver2_l=1 00:12:53.602 00:20:24 sma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:53.602 00:20:24 sma -- scripts/common.sh@344 -- # case "$op" in 00:12:53.602 00:20:24 sma -- scripts/common.sh@345 -- # : 1 00:12:53.602 00:20:24 sma -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:53.602 00:20:24 sma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:53.602 00:20:24 sma -- scripts/common.sh@365 -- # decimal 1 00:12:53.602 00:20:24 sma -- scripts/common.sh@353 -- # local d=1 00:12:53.602 00:20:24 sma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.602 00:20:24 sma -- scripts/common.sh@355 -- # echo 1 00:12:53.602 00:20:24 sma -- scripts/common.sh@365 -- # ver1[v]=1 00:12:53.602 00:20:24 sma -- scripts/common.sh@366 -- # decimal 2 00:12:53.602 00:20:24 sma -- scripts/common.sh@353 -- # local d=2 00:12:53.602 00:20:24 sma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.602 00:20:24 sma -- scripts/common.sh@355 -- # echo 2 00:12:53.602 00:20:24 sma -- scripts/common.sh@366 -- # ver2[v]=2 00:12:53.602 00:20:24 sma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:53.602 00:20:24 sma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:53.602 00:20:24 sma -- scripts/common.sh@368 -- # return 0 00:12:53.602 00:20:24 sma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.602 00:20:24 sma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:53.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.602 --rc genhtml_branch_coverage=1 00:12:53.602 --rc genhtml_function_coverage=1 00:12:53.602 --rc genhtml_legend=1 00:12:53.602 --rc geninfo_all_blocks=1 00:12:53.602 --rc geninfo_unexecuted_blocks=1 00:12:53.602 00:12:53.602 ' 00:12:53.602 00:20:24 sma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:53.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.602 --rc genhtml_branch_coverage=1 00:12:53.602 --rc genhtml_function_coverage=1 00:12:53.602 --rc genhtml_legend=1 00:12:53.602 --rc geninfo_all_blocks=1 00:12:53.602 --rc geninfo_unexecuted_blocks=1 00:12:53.602 00:12:53.602 ' 00:12:53.602 00:20:24 sma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:53.602 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.602 --rc genhtml_branch_coverage=1 00:12:53.602 --rc genhtml_function_coverage=1 00:12:53.602 --rc genhtml_legend=1 00:12:53.602 --rc geninfo_all_blocks=1 00:12:53.602 --rc geninfo_unexecuted_blocks=1 00:12:53.602 00:12:53.602 ' 00:12:53.602 00:20:24 sma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:53.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.602 --rc genhtml_branch_coverage=1 00:12:53.602 --rc genhtml_function_coverage=1 00:12:53.602 --rc genhtml_legend=1 00:12:53.602 --rc geninfo_all_blocks=1 00:12:53.602 --rc geninfo_unexecuted_blocks=1 00:12:53.602 00:12:53.602 ' 00:12:53.602 00:20:24 sma -- sma/sma.sh@11 -- # run_test sma_nvmf_tcp /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/nvmf_tcp.sh 00:12:53.602 00:20:24 sma -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:53.602 00:20:24 sma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.602 00:20:24 sma -- common/autotest_common.sh@10 -- # set +x 00:12:53.862 ************************************ 00:12:53.862 START TEST sma_nvmf_tcp 00:12:53.862 ************************************ 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/nvmf_tcp.sh 00:12:53.862 * Looking for test storage... 00:12:53.862 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
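The scripts/common.sh trace above (repeated once per test scope) decides whether the installed lcov predates version 2 by splitting both version strings on '.', '-' and ':' and comparing the pieces numerically, left to right. A condensed sketch of that componentwise comparison (the real script routes through cmp_versions and decimal; non-numeric fields are simplified to 0 here):

    # Return 0 when dotted version $1 is strictly older than $2.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i a b
        for ((i = 0; i < n; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}
            [[ $a =~ ^[0-9]+$ ]] || a=0   # decimal() is stricter in the real script
            [[ $b =~ ^[0-9]+$ ]] || b=0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1
    }

Called as in the trace, version_lt "$(lcov --version | awk '{print $NF}')" 2 is true for lcov 1.15 and so selects the lcov 1.x spelling of the branch-coverage flags.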
ver1_l : ver2_l) )) 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:12:53.862 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:53.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.863 --rc genhtml_branch_coverage=1 00:12:53.863 --rc genhtml_function_coverage=1 00:12:53.863 --rc genhtml_legend=1 00:12:53.863 --rc geninfo_all_blocks=1 00:12:53.863 --rc geninfo_unexecuted_blocks=1 00:12:53.863 00:12:53.863 ' 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:53.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.863 --rc genhtml_branch_coverage=1 00:12:53.863 --rc genhtml_function_coverage=1 00:12:53.863 --rc genhtml_legend=1 00:12:53.863 --rc geninfo_all_blocks=1 00:12:53.863 --rc geninfo_unexecuted_blocks=1 00:12:53.863 00:12:53.863 ' 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:53.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.863 --rc genhtml_branch_coverage=1 00:12:53.863 --rc genhtml_function_coverage=1 00:12:53.863 --rc genhtml_legend=1 00:12:53.863 --rc geninfo_all_blocks=1 00:12:53.863 --rc geninfo_unexecuted_blocks=1 00:12:53.863 00:12:53.863 ' 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:53.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.863 --rc genhtml_branch_coverage=1 00:12:53.863 --rc genhtml_function_coverage=1 00:12:53.863 --rc genhtml_legend=1 00:12:53.863 --rc geninfo_all_blocks=1 00:12:53.863 --rc geninfo_unexecuted_blocks=1 00:12:53.863 00:12:53.863 ' 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@70 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@73 -- # tgtpid=2088290 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@83 -- # smapid=2088291 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@86 -- # sma_waitforlisten 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- sma/common.sh@8 -- # local sma_port=8080 00:12:53.863 00:20:24 sma.sma_nvmf_tcp 
-- sma/common.sh@10 -- # (( i = 0 )) 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 )) 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@72 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@75 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@75 -- # cat 00:12:53.863 00:20:24 sma.sma_nvmf_tcp -- sma/common.sh@14 -- # sleep 1s 00:12:53.863 [2024-10-09 00:20:24.491568] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:12:53.863 [2024-10-09 00:20:24.491658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088290 ] 00:12:54.121 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.121 [2024-10-09 00:20:24.595865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.379 [2024-10-09 00:20:24.796677] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.945 00:20:25 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i++ )) 00:12:54.945 00:20:25 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 )) 00:12:54.945 00:20:25 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:12:54.945 00:20:25 sma.sma_nvmf_tcp -- sma/common.sh@14 -- # sleep 1s 00:12:55.203 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:12:55.203 I0000 00:00:1728426025.615458 2088291 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:55.203 [2024-10-09 00:20:25.628433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.135 00:20:26 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i++ )) 00:12:56.135 00:20:26 sma.sma_nvmf_tcp -- sma/common.sh@10 -- # (( i < 5 )) 00:12:56.135 00:20:26 sma.sma_nvmf_tcp -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:12:56.135 00:20:26 sma.sma_nvmf_tcp -- sma/common.sh@12 -- # return 0 00:12:56.135 00:20:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@89 -- # rpc_cmd bdev_null_create null0 100 4096 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:56.136 null0 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@92 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:56.136 [ 00:12:56.136 { 00:12:56.136 "trtype": "TCP", 00:12:56.136 "max_queue_depth": 128, 00:12:56.136 "max_io_qpairs_per_ctrlr": 127, 00:12:56.136 "in_capsule_data_size": 4096, 00:12:56.136 "max_io_size": 131072, 00:12:56.136 "io_unit_size": 131072, 00:12:56.136 "max_aq_depth": 128, 00:12:56.136 "num_shared_buffers": 511, 00:12:56.136 "buf_cache_size": 4294967295, 00:12:56.136 "dif_insert_or_strip": false, 00:12:56.136 "zcopy": false, 00:12:56.136 
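sma_waitforlisten above starts spdk_tgt and scripts/sma.py, then loops on nc -z until the SMA port accepts TCP connections, with at most five one-second retries. The polling idiom, sketched with the address and port from the trace as defaults:

    # Poll until something is listening on addr:port; bounded retries.
    wait_for_listen() {
        local addr=${1:-127.0.0.1} port=${2:-8080} i
        for ((i = 0; i < 5; i++)); do
            nc -z "$addr" "$port" && return 0
            sleep 1
        done
        echo "ERROR: nothing listening on $addr:$port" >&2
        return 1
    }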
"c2h_success": true, 00:12:56.136 "sock_priority": 0, 00:12:56.136 "abort_timeout_sec": 1, 00:12:56.136 "ack_timeout": 0, 00:12:56.136 "data_wr_pool_size": 0 00:12:56.136 } 00:12:56.136 ] 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # create_device nqn.2016-06.io.spdk:cnode0 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # jq -r .handle 00:12:56.136 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:12:56.136 I0000 00:00:1728426026.697632 2088559 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:56.136 I0000 00:00:1728426026.699176 2088559 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:12:56.136 I0000 00:00:1728426026.700873 2088562 subchannel.cc:806] subchannel 0x5651ed128220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5651ed039670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5651ed0c7cc0, grpc.internal.client_channel_call_destination=0x7fa01be28390, grpc.internal.event_engine=0x5651ed055360, grpc.internal.security_connector=0x5651ecfe16e0, grpc.internal.subchannel_pool=0x5651ed14fcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5651ecf1e5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:26.699851597+02:00"}), backing off for 1000 ms 00:12:56.136 [2024-10-09 00:20:26.719814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@95 -- # devid0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@96 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:56.136 [ 00:12:56.136 { 00:12:56.136 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:12:56.136 "subtype": "NVMe", 00:12:56.136 "listen_addresses": [ 00:12:56.136 { 00:12:56.136 "trtype": "TCP", 00:12:56.136 "adrfam": "IPv4", 00:12:56.136 "traddr": "127.0.0.1", 00:12:56.136 "trsvcid": "4420" 00:12:56.136 } 00:12:56.136 ], 00:12:56.136 "allow_any_host": false, 00:12:56.136 "hosts": [], 00:12:56.136 "serial_number": "00000000000000000000", 00:12:56.136 "model_number": "SPDK bdev Controller", 00:12:56.136 "max_namespaces": 32, 00:12:56.136 "min_cntlid": 1, 00:12:56.136 "max_cntlid": 65519, 00:12:56.136 "namespaces": [] 00:12:56.136 } 00:12:56.136 ] 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # create_device nqn.2016-06.io.spdk:cnode1 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # jq -r .handle 00:12:56.136 00:20:26 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:56.394 WARNING: All log messages before 
absl::InitializeLog() is called are written to STDERR 00:12:56.394 I0000 00:00:1728426026.947450 2088603 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:56.394 I0000 00:00:1728426026.949032 2088603 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:12:56.394 I0000 00:00:1728426026.950718 2088709 subchannel.cc:806] subchannel 0x55cc5466a220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55cc5457b670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55cc54609cc0, grpc.internal.client_channel_call_destination=0x7f233424b390, grpc.internal.event_engine=0x55cc54597360, grpc.internal.security_connector=0x55cc545236e0, grpc.internal.subchannel_pool=0x55cc54691cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55cc544605c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:26.949697854+02:00"}), backing off for 1000 ms 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@98 -- # devid1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@99 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:56.394 [ 00:12:56.394 { 00:12:56.394 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:12:56.394 "subtype": "NVMe", 00:12:56.394 "listen_addresses": [ 00:12:56.394 { 00:12:56.394 "trtype": "TCP", 00:12:56.394 "adrfam": "IPv4", 00:12:56.394 "traddr": "127.0.0.1", 00:12:56.394 "trsvcid": "4420" 00:12:56.394 } 00:12:56.394 ], 00:12:56.394 "allow_any_host": false, 00:12:56.394 "hosts": [], 00:12:56.394 "serial_number": "00000000000000000000", 00:12:56.394 "model_number": "SPDK bdev Controller", 00:12:56.394 "max_namespaces": 32, 00:12:56.394 "min_cntlid": 1, 00:12:56.394 "max_cntlid": 65519, 00:12:56.394 "namespaces": [] 00:12:56.394 } 00:12:56.394 ] 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@100 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:56.394 [ 00:12:56.394 { 00:12:56.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:56.394 "subtype": "NVMe", 00:12:56.394 "listen_addresses": [ 00:12:56.394 { 00:12:56.394 "trtype": "TCP", 00:12:56.394 "adrfam": "IPv4", 00:12:56.394 "traddr": "127.0.0.1", 00:12:56.394 "trsvcid": "4420" 00:12:56.394 } 00:12:56.394 ], 00:12:56.394 "allow_any_host": false, 00:12:56.394 "hosts": [], 00:12:56.394 "serial_number": "00000000000000000000", 00:12:56.394 "model_number": "SPDK bdev Controller", 00:12:56.394 "max_namespaces": 32, 00:12:56.394 "min_cntlid": 1, 00:12:56.394 "max_cntlid": 65519, 00:12:56.394 "namespaces": [] 00:12:56.394 } 00:12:56.394 ] 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@101 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode0 != 
\n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # rpc_cmd nvmf_get_subsystems 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:56.394 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # jq -r '. | length' 00:12:56.652 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.652 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@104 -- # [[ 3 -eq 3 ]] 00:12:56.652 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # create_device nqn.2016-06.io.spdk:cnode0 00:12:56.652 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # jq -r .handle 00:12:56.652 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:56.652 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:12:56.652 I0000 00:00:1728426027.243853 2088807 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:56.652 I0000 00:00:1728426027.245418 2088807 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:12:56.652 I0000 00:00:1728426027.247139 2088833 subchannel.cc:806] subchannel 0x5617ed828220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5617ed739670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5617ed7c7cc0, grpc.internal.client_channel_call_destination=0x7fb3c148c390, grpc.internal.event_engine=0x5617ed755360, grpc.internal.security_connector=0x5617ed6e16e0, grpc.internal.subchannel_pool=0x5617ed84fcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5617ed61e5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:27.246116989+02:00"}), backing off for 999 ms 00:12:56.652 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@108 -- # tmp0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:12:56.652 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # create_device nqn.2016-06.io.spdk:cnode1 00:12:56.652 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:56.652 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # jq -r .handle 00:12:56.917 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:12:56.917 I0000 00:00:1728426027.454817 2088856 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:56.917 I0000 00:00:1728426027.456357 2088856 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:12:56.918 I0000 00:00:1728426027.458033 2088857 subchannel.cc:806] subchannel 0x560a344de220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560a343ef670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x560a3447dcc0, grpc.internal.client_channel_call_destination=0x7f9c405fe390, grpc.internal.event_engine=0x560a3440b360, 
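The long backslashed operands in these [[ ... ]] checks (\n\v\m\f\-\t\c\p\:...) are not corruption: the right-hand side of [[ x == pattern ]] is a glob pattern, so the suite escapes every character to force a literal match, and set -x prints the escaped form. Quoting achieves the same effect:

    devid0='nvmf-tcp:nqn.2016-06.io.spdk:cnode0'
    tmp0='nvmf-tcp:nqn.2016-06.io.spdk:cnode0'
    # A quoted right-hand side is compared literally, not as a glob,
    # which is exactly what the escaped form in the trace guarantees.
    [[ $tmp0 == "$devid0" ]] && echo 'handles match'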
grpc.internal.security_connector=0x560a343976e0, grpc.internal.subchannel_pool=0x560a34505cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x560a342d45c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:27.457024432+02:00"}), backing off for 999 ms 00:12:56.918 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@109 -- # tmp1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1 00:12:56.918 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # rpc_cmd nvmf_get_subsystems 00:12:56.918 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # jq -r '. | length' 00:12:56.918 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.918 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:56.918 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.918 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@111 -- # [[ 3 -eq 3 ]] 00:12:56.918 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@112 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode0 == \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:56.918 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@113 -- # [[ nvmf-tcp:nqn.2016-06.io.spdk:cnode1 == \n\v\m\f\-\t\c\p\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:56.918 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@116 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:12:56.918 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:57.181 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:12:57.181 I0000 00:00:1728426027.709757 2088880 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:57.181 I0000 00:00:1728426027.711321 2088880 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:12:57.181 I0000 00:00:1728426027.712976 2088882 subchannel.cc:806] subchannel 0x561729636220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561729547670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5617295d5cc0, grpc.internal.client_channel_call_destination=0x7fc5565d7390, grpc.internal.event_engine=0x5617294e5190, grpc.internal.security_connector=0x56172964beb0, grpc.internal.subchannel_pool=0x56172965dcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56172942c5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:27.711966614+02:00"}), backing off for 1000 ms 00:12:57.181 {} 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@117 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@650 -- # local es=0 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:57.181 [2024-10-09 00:20:27.774434] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode0' does not exist 00:12:57.181 request: 00:12:57.181 { 00:12:57.181 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:12:57.181 "method": "nvmf_get_subsystems", 00:12:57.181 "req_id": 1 00:12:57.181 } 00:12:57.181 Got JSON-RPC error response 00:12:57.181 response: 00:12:57.181 { 00:12:57.181 "code": -19, 00:12:57.181 "message": "No such device" 00:12:57.181 } 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@653 -- # es=1 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # rpc_cmd nvmf_get_subsystems 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # jq -r '. | length' 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:57.181 00:20:27 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.439 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@118 -- # [[ 2 -eq 2 ]] 00:12:57.439 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@120 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode1 00:12:57.439 00:20:27 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:57.439 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:12:57.439 I0000 00:00:1728426027.998496 2088908 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:57.439 I0000 00:00:1728426028.000020 2088908 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:12:57.439 I0000 00:00:1728426028.001668 2088912 subchannel.cc:806] subchannel 0x561d53fa5220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x561d53eb6670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x561d53f44cc0, grpc.internal.client_channel_call_destination=0x7fb1a34ba390, grpc.internal.event_engine=0x561d53e54190, grpc.internal.security_connector=0x561d53fbaeb0, grpc.internal.subchannel_pool=0x561d53fcccc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x561d53d9b5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:28.00065035+02:00"}), backing off for 1000 ms 00:12:57.439 {} 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@121 -- # NOT rpc_cmd 
nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@650 -- # local es=0 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:57.439 [2024-10-09 00:20:28.051204] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode1' does not exist 00:12:57.439 request: 00:12:57.439 { 00:12:57.439 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.439 "method": "nvmf_get_subsystems", 00:12:57.439 "req_id": 1 00:12:57.439 } 00:12:57.439 Got JSON-RPC error response 00:12:57.439 response: 00:12:57.439 { 00:12:57.439 "code": -19, 00:12:57.439 "message": "No such device" 00:12:57.439 } 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@653 -- # es=1 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:57.439 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # rpc_cmd nvmf_get_subsystems 00:12:57.440 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # jq -r '. 
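The NOT rpc_cmd ... stanzas above are deliberate negative tests: once a device is deleted, nvmf_get_subsystems for its NQN must fail with JSON-RPC error -19, "No such device". autotest_common.sh inverts the exit status through a NOT helper; reduced to its core idea (the real helper also validates its argument via type -t and distinguishes statuses above 128):

    # Succeed only when the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # Usage, as in the trace:
    #   NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0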
| length' 00:12:57.440 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.440 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:57.440 00:20:28 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.698 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@122 -- # [[ 1 -eq 1 ]] 00:12:57.698 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@125 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:12:57.698 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:57.698 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:12:57.698 I0000 00:00:1728426028.287429 2088939 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:57.698 I0000 00:00:1728426028.288848 2088939 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:12:57.698 I0000 00:00:1728426028.290556 2089049 subchannel.cc:806] subchannel 0x55d9c0aa2220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d9c09b3670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d9c0a41cc0, grpc.internal.client_channel_call_destination=0x7f4007aa2390, grpc.internal.event_engine=0x55d9c0951190, grpc.internal.security_connector=0x55d9c0ab7eb0, grpc.internal.subchannel_pool=0x55d9c0ac9cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d9c08985c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:28.289544703+02:00"}), backing off for 1000 ms 00:12:57.698 {} 00:12:57.698 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@126 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode1 00:12:57.698 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:57.955 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:12:57.955 I0000 00:00:1728426028.512146 2089087 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:57.955 I0000 00:00:1728426028.513497 2089087 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:12:57.955 I0000 00:00:1728426028.515138 2089172 subchannel.cc:806] subchannel 0x562737ec2220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x562737dd3670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x562737e61cc0, grpc.internal.client_channel_call_destination=0x7f37653f8390, grpc.internal.event_engine=0x562737d71190, grpc.internal.security_connector=0x562737ed7eb0, grpc.internal.subchannel_pool=0x562737ee9cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x562737cb85c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:28.514122362+02:00"}), backing off for 999 ms 00:12:57.955 {} 00:12:57.955 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # create_device 
nqn.2016-06.io.spdk:cnode0 00:12:57.955 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # jq -r .handle 00:12:57.956 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:58.213 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:12:58.213 I0000 00:00:1728426028.725438 2089195 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:58.213 I0000 00:00:1728426028.727047 2089195 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:12:58.213 I0000 00:00:1728426028.728785 2089200 subchannel.cc:806] subchannel 0x56299c263220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56299c174670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56299c202cc0, grpc.internal.client_channel_call_destination=0x7fba1fdc3390, grpc.internal.event_engine=0x56299c190360, grpc.internal.security_connector=0x56299c11c6e0, grpc.internal.subchannel_pool=0x56299c28acc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56299c0595c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:28.727769596+02:00"}), backing off for 1000 ms 00:12:58.213 [2024-10-09 00:20:28.749528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:58.213 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@129 -- # devid0=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:12:58.213 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # create_device nqn.2016-06.io.spdk:cnode1 00:12:58.213 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # jq -r .handle 00:12:58.214 00:20:28 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:58.472 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:12:58.472 I0000 00:00:1728426028.970900 2089223 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:58.472 I0000 00:00:1728426028.972392 2089223 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:12:58.472 I0000 00:00:1728426028.974092 2089227 subchannel.cc:806] subchannel 0x55fb00e69220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55fb00d7a670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55fb00e08cc0, grpc.internal.client_channel_call_destination=0x7fa36f96f390, grpc.internal.event_engine=0x55fb00d96360, grpc.internal.security_connector=0x55fb00d226e0, grpc.internal.subchannel_pool=0x55fb00e90cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55fb00c5f5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:28.973069811+02:00"}), backing off for 999 ms 00:12:58.472 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@130 -- # devid1=nvmf-tcp:nqn.2016-06.io.spdk:cnode1 00:12:58.472 00:20:29 sma.sma_nvmf_tcp -- 
sma/nvmf_tcp.sh@131 -- # rpc_cmd bdev_get_bdevs -b null0 00:12:58.472 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # jq -r '.[].uuid' 00:12:58.472 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.472 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:58.472 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.472 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@131 -- # uuid=304a5e2e-92bb-43bb-90ee-614c2276a97b 00:12:58.472 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@134 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 304a5e2e-92bb-43bb-90ee-614c2276a97b 00:12:58.472 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:58.472 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # uuid2base64 304a5e2e-92bb-43bb-90ee-614c2276a97b 00:12:58.472 00:20:29 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python 00:12:58.729 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:12:58.729 I0000 00:00:1728426029.292222 2089250 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:58.729 I0000 00:00:1728426029.293723 2089250 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:12:58.729 I0000 00:00:1728426029.295467 2089260 subchannel.cc:806] subchannel 0x564f51580220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x564f51491670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x564f5151fcc0, grpc.internal.client_channel_call_destination=0x7fd9b0fe7390, grpc.internal.event_engine=0x564f5142f190, grpc.internal.security_connector=0x564f514396e0, grpc.internal.subchannel_pool=0x564f515a7cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x564f513765c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:29.294445019+02:00"}), backing off for 1000 ms 00:12:58.729 {} 00:12:58.729 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:12:58.729 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # jq -r '.[0].namespaces | length' 00:12:58.730 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@135 -- # [[ 1 -eq 1 ]] 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # jq -r '.[0].namespaces | length' 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@136 -- # [[ 0 -eq 0 ]] 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 
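attach_volume feeds sma-client.py the bdev UUID through a uuid2base64 helper; the trace only reveals that sma/common.sh line 20 shells out to python. One plausible implementation with that shape, assuming SMA wants the UUID's 16 raw bytes base64-encoded (an inference, not the verified helper):

    # Hypothetical: emit the raw UUID bytes as base64, the form a JSON-encoded
    # protobuf bytes field would carry. Matches the "uuid2base64; python" trace.
    uuid2base64() {
        python3 - "$1" <<'PY'
import base64, sys, uuid
print(base64.b64encode(uuid.UUID(sys.argv[1]).bytes).decode())
PY
    }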
00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # jq -r '.[0].namespaces[0].uuid' 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@137 -- # [[ 304a5e2e-92bb-43bb-90ee-614c2276a97b == \3\0\4\a\5\e\2\e\-\9\2\b\b\-\4\3\b\b\-\9\0\e\e\-\6\1\4\c\2\2\7\6\a\9\7\b ]] 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@140 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 304a5e2e-92bb-43bb-90ee-614c2276a97b 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@45 -- # uuid2base64 304a5e2e-92bb-43bb-90ee-614c2276a97b 00:12:58.987 00:20:29 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python 00:12:59.245 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:12:59.245 I0000 00:00:1728426029.727509 2089300 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:59.245 I0000 00:00:1728426029.728907 2089300 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:12:59.245 I0000 00:00:1728426029.730674 2089440 subchannel.cc:806] subchannel 0x56184eca7220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56184ebb8670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56184ec46cc0, grpc.internal.client_channel_call_destination=0x7f056f272390, grpc.internal.event_engine=0x56184eb56190, grpc.internal.security_connector=0x56184eb606e0, grpc.internal.subchannel_pool=0x56184eccecc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56184ea9d5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:29.729657658+02:00"}), backing off for 1000 ms 00:12:59.245 {} 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # jq -r '.[0].namespaces | length' 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@141 -- # [[ 1 -eq 1 ]] 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # jq -r '.[0].namespaces | length' 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@142 -- # [[ 0 -eq 0 ]] 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # rpc_cmd 
nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # jq -r '.[0].namespaces[0].uuid' 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.245 00:20:29 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.503 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@143 -- # [[ 304a5e2e-92bb-43bb-90ee-614c2276a97b == \3\0\4\a\5\e\2\e\-\9\2\b\b\-\4\3\b\b\-\9\0\e\e\-\6\1\4\c\2\2\7\6\a\9\7\b ]] 00:12:59.503 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@146 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 304a5e2e-92bb-43bb-90ee-614c2276a97b 00:12:59.503 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:59.503 00:20:29 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # uuid2base64 304a5e2e-92bb-43bb-90ee-614c2276a97b 00:12:59.503 00:20:29 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python 00:12:59.503 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:12:59.503 I0000 00:00:1728426030.126814 2089531 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:12:59.503 I0000 00:00:1728426030.128350 2089531 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:12:59.503 I0000 00:00:1728426030.130057 2089535 subchannel.cc:806] subchannel 0x559568c23220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x559568b34670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x559568bc2cc0, grpc.internal.client_channel_call_destination=0x7f496ef02390, grpc.internal.event_engine=0x559568b50360, grpc.internal.security_connector=0x559568adc6e0, grpc.internal.subchannel_pool=0x559568c4acc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x559568a195c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:30.129053565+02:00"}), backing off for 999 ms 00:12:59.761 {} 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # jq -r '.[0].namespaces | length' 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@147 -- # [[ 0 -eq 0 ]] 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # jq -r '.[0].namespaces | length' 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@148 -- # [[ 0 -eq 0 ]] 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- 
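Each attach and detach above is verified through the RPC layer rather than by trusting the SMA reply: the suite counts namespaces with nvmf_get_subsystems and jq, then asserts the expected number. The recurring assertion, isolated (rpc_cmd is the suite's wrapper around scripts/rpc.py):

    # Assert that cnode0 currently exposes exactly one namespace.
    ns_count=$(rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 \
        | jq -r '.[0].namespaces | length')
    [[ $ns_count -eq 1 ]]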
sma/nvmf_tcp.sh@151 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 304a5e2e-92bb-43bb-90ee-614c2276a97b 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@59 -- # uuid2base64 304a5e2e-92bb-43bb-90ee-614c2276a97b 00:12:59.761 00:20:30 sma.sma_nvmf_tcp -- sma/common.sh@20 -- # python 00:13:00.020 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:00.020 I0000 00:00:1728426030.496637 2089571 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:00.020 I0000 00:00:1728426030.498004 2089571 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:00.020 I0000 00:00:1728426030.499703 2089574 subchannel.cc:806] subchannel 0x558f26ed4220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x558f26de5670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558f26e73cc0, grpc.internal.client_channel_call_destination=0x7fe8a3292390, grpc.internal.event_engine=0x558f26e01360, grpc.internal.security_connector=0x558f26d8d6e0, grpc.internal.subchannel_pool=0x558f26efbcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558f26cca5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:20:30.498682717+02:00"}), backing off for 1000 ms 00:13:00.020 {} 00:13:00.020 00:20:30 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@153 -- # cleanup 00:13:00.020 00:20:30 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@13 -- # killprocess 2088290 00:13:00.020 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2088290 ']' 00:13:00.020 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2088290 00:13:00.020 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:13:00.020 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:00.020 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2088290 00:13:00.020 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:00.020 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:00.020 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2088290' 00:13:00.020 killing process with pid 2088290 00:13:00.020 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2088290 00:13:00.020 00:20:30 sma.sma_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2088290 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@14 -- # killprocess 2088291 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2088291 ']' 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2088291 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2088291 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- 
common/autotest_common.sh@956 -- # process_name=python3 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2088291' 00:13:02.547 killing process with pid 2088291 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2088291 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2088291 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- sma/nvmf_tcp.sh@154 -- # trap - SIGINT SIGTERM EXIT 00:13:02.547 00:13:02.547 real 0m8.912s 00:13:02.547 user 0m12.127s 00:13:02.547 sys 0m1.240s 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:02.547 00:20:33 sma.sma_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:02.547 ************************************ 00:13:02.547 END TEST sma_nvmf_tcp 00:13:02.547 ************************************ 00:13:02.805 00:20:33 sma -- sma/sma.sh@12 -- # run_test sma_vfiouser_qemu /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh 00:13:02.805 00:20:33 sma -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:02.805 00:20:33 sma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:02.805 00:20:33 sma -- common/autotest_common.sh@10 -- # set +x 00:13:02.805 ************************************ 00:13:02.805 START TEST sma_vfiouser_qemu 00:13:02.805 ************************************ 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh 00:13:02.805 * Looking for test storage... 00:13:02.805 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1681 -- # lcov --version 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@336 -- # read -ra ver1 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@337 -- # IFS=.-: 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@337 -- # read -ra ver2 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@338 -- # local 'op=<' 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@340 -- # ver1_l=2 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@341 -- # ver2_l=1 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@344 -- # case "$op" in 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@345 -- # : 1 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@365 -- # decimal 1 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@353 -- # local d=1 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@355 -- # echo 1 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@365 -- # ver1[v]=1 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@366 -- # decimal 2 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@353 -- # local d=2 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@355 -- # echo 2 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@366 -- # ver2[v]=2 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- scripts/common.sh@368 -- # return 0 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:02.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.805 --rc genhtml_branch_coverage=1 00:13:02.805 --rc genhtml_function_coverage=1 00:13:02.805 --rc genhtml_legend=1 00:13:02.805 --rc geninfo_all_blocks=1 00:13:02.805 --rc geninfo_unexecuted_blocks=1 00:13:02.805 00:13:02.805 ' 00:13:02.805 00:20:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:02.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.805 --rc genhtml_branch_coverage=1 00:13:02.805 --rc genhtml_function_coverage=1 00:13:02.805 --rc genhtml_legend=1 00:13:02.805 --rc geninfo_all_blocks=1 00:13:02.805 --rc geninfo_unexecuted_blocks=1 00:13:02.806 00:13:02.806 ' 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:02.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.806 --rc genhtml_branch_coverage=1 00:13:02.806 --rc genhtml_function_coverage=1 00:13:02.806 --rc genhtml_legend=1 00:13:02.806 --rc geninfo_all_blocks=1 00:13:02.806 --rc geninfo_unexecuted_blocks=1 00:13:02.806 00:13:02.806 ' 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:02.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.806 --rc genhtml_branch_coverage=1 00:13:02.806 --rc genhtml_function_coverage=1 00:13:02.806 --rc genhtml_legend=1 00:13:02.806 --rc geninfo_all_blocks=1 00:13:02.806 --rc geninfo_unexecuted_blocks=1 00:13:02.806 00:13:02.806 ' 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vfio_user/common.sh 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vfio_user/common.sh@6 -- # : 128 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vfio_user/common.sh@7 -- # : 512 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vfio_user/common.sh@9 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@6 -- # : false 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@7 -- 
# : /root/vhost_test 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@9 -- # : qemu-img 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.. 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@14 -- # VM_PASSWORD=root 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vfiouser_qemu.sh 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]' 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@2 -- # vhost_0_main_core=0 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14 00:13:02.806 
00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@32 -- # VM_9_qemu_numa_node=1 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22 00:13:02.806 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@244 -- # check_cgroup 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@10 -- # echo 2 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vfio_user/common.sh@11 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vfio_user/common.sh@14 -- # [[ ! 
-e /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 ]] 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vfio_user/common.sh@19 -- # QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@104 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@107 -- # VM_PASSWORD=root 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@108 -- # vm_no=0 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@110 -- # VFO_ROOT_PATH=/tmp/sma/vfio-user/qemu 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@112 -- # '[' -e /tmp/sma/vfio-user/qemu ']' 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@113 -- # mkdir -p /tmp/sma/vfio-user/qemu 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@116 -- # used_vms=0 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@117 -- # vm_kill_all 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@469 -- # local vm 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@470 -- # vm_list_all 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@459 -- # vms=() 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@459 -- # local vms 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@460 -- # vms=("$VM_DIR"/+([0-9])) 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@461 -- # (( 1 > 0 )) 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@462 -- # basename --multiple /root/vhost_test/vms/1 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@470 -- # for vm in $(vm_list_all) 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@471 -- # vm_kill 1 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@435 -- # vm_num_is_valid 1 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@436 -- # local vm_dir=/root/vhost_test/vms/1 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@438 -- # [[ ! 
-r /root/vhost_test/vms/1/qemu.pid ]] 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@439 -- # return 0 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@474 -- # rm -rf /root/vhost_test/vms 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@119 -- # vm_setup --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 --disk-type=virtio --force=0 '--qemu-args=-qmp tcp:localhost:10005,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1' 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@511 -- # xtrace_disable 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:03.064 INFO: Creating new VM in /root/vhost_test/vms/0 00:13:03.064 INFO: No '--os-mode' parameter provided - using 'snapshot' 00:13:03.064 INFO: TASK MASK: 1-2 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@664 -- # local node_num=0 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@665 -- # local boot_disk_present=false 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@666 -- # notice 'NUMA NODE: 0' 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0' 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out= 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0' 00:13:03.064 INFO: NUMA NODE: 0 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@667 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize) 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@668 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind") 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@669 -- # [[ snapshot == snapshot ]] 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@669 -- # cmd+=(-snapshot) 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@670 -- # [[ -n '' ]] 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@671 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait") 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@672 -- # cmd+=(-numa "node,memdev=mem") 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@673 -- # cmd+=(-pidfile "$qemu_pid_file") 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@674 -- # cmd+=(-serial "file:$vm_dir/serial.log") 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@675 -- # cmd+=(-D "$vm_dir/qemu.log") 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@676 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios") 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@677 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765") 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@678 -- # cmd+=(-net nic) 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@679 -- # [[ -z '' ]] 00:13:03.064 
00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@680 -- # cmd+=(-drive "file=$os,if=none,id=os_disk") 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@681 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0") 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@684 -- # (( 0 == 0 )) 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@684 -- # [[ virtio == virtio* ]] 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@685 -- # disks=("default_virtio.img") 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@691 -- # for disk in "${disks[@]}" 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@694 -- # IFS=, 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@694 -- # read -r disk disk_type _ 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@695 -- # [[ -z '' ]] 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@695 -- # disk_type=virtio 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@697 -- # case $disk_type in 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@699 -- # local raw_name=RAWSCSI 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@700 -- # local raw_disk=/root/vhost_test/vms/0/test.img 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@703 -- # [[ -f default_virtio.img ]] 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@707 -- # notice 'Creating Virtio disc /root/vhost_test/vms/0/test.img' 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Creating Virtio disc /root/vhost_test/vms/0/test.img' 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out= 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img' 00:13:03.064 INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img 00:13:03.064 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@708 -- # dd if=/dev/zero of=/root/vhost_test/vms/0/test.img bs=1024k count=1024 00:13:03.630 1024+0 records in 00:13:03.630 1024+0 records out 00:13:03.630 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.416483 s, 2.6 GB/s 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@711 -- # cmd+=(-device "virtio-scsi-pci,num_queues=$queue_number") 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@712 -- # cmd+=(-device "scsi-hd,drive=hd$i,vendor=$raw_name") 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@713 -- # cmd+=(-drive "if=none,id=hd$i,file=$raw_disk,format=raw$raw_cache") 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@773 -- # [[ -n '' ]] 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@778 -- # (( 1 )) 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@778 -- # cmd+=("${qemu_args[@]}") 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@779 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh' 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh' 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out 00:13:03.630 
00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out= 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh' 00:13:03.630 INFO: Saving to /root/vhost_test/vms/0/run.sh 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@780 -- # cat 00:13:03.630 00:20:33 sma.sma_vfiouser_qemu -- vhost/common.sh@780 -- # printf '%s\n' taskset -a -c 1-2 /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device virtio-scsi-pci,num_queues=2 -device scsi-hd,drive=hd,vendor=RAWSCSI -drive if=none,id=hd,file=/root/vhost_test/vms/0/test.img,format=raw '-qmp tcp:localhost:10005,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1' 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@817 -- # chmod +x /root/vhost_test/vms/0/run.sh 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@820 -- # echo 10000 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@821 -- # echo 10001 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@822 -- # echo 10002 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@824 -- # rm -f /root/vhost_test/vms/0/migration_port 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@825 -- # [[ -z '' ]] 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@827 -- # echo 10004 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@828 -- # echo 100 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@830 -- # [[ -z '' ]] 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@831 -- # [[ -z '' ]] 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@124 -- # vm_run 0 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@835 -- # local OPTIND optchar vm 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@836 -- # local run_all=false 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@837 -- # local vms_to_run= 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@839 -- # getopts a-: optchar 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@849 -- # false 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@852 -- # shift 0 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@853 -- # for vm in "$@" 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@854 -- # vm_num_is_valid 0 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # 
return 0 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@855 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]] 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@859 -- # vms_to_run+=' 0' 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@863 -- # for vm in $vms_to_run 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@864 -- # vm_is_running 0 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@362 -- # vm_num_is_valid 0 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/0 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@365 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]] 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@366 -- # return 1 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@869 -- # notice 'running /root/vhost_test/vms/0/run.sh' 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh' 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out= 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh' 00:13:03.630 INFO: running /root/vhost_test/vms/0/run.sh 00:13:03.630 00:20:34 sma.sma_vfiouser_qemu -- vhost/common.sh@870 -- # /root/vhost_test/vms/0/run.sh 00:13:03.630 Running VM in /root/vhost_test/vms/0 00:13:03.630 Waiting for QEMU pid file 00:13:05.004 === qemu.log === 00:13:05.004 === qemu.log === 00:13:05.004 00:20:35 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@125 -- # vm_wait_for_boot 300 0 00:13:05.004 00:20:35 sma.sma_vfiouser_qemu -- vhost/common.sh@906 -- # assert_number 300 00:13:05.004 00:20:35 sma.sma_vfiouser_qemu -- vhost/common.sh@274 -- # [[ 300 =~ [0-9]+ ]] 00:13:05.004 00:20:35 sma.sma_vfiouser_qemu -- vhost/common.sh@274 -- # return 0 00:13:05.004 00:20:35 sma.sma_vfiouser_qemu -- vhost/common.sh@908 -- # xtrace_disable 00:13:05.004 00:20:35 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:05.004 INFO: Waiting for VMs to boot 00:13:05.004 INFO: waiting for VM0 (/root/vhost_test/vms/0) 00:13:27.066 00:13:27.066 INFO: VM0 ready 00:13:27.066 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:27.067 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
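The ~22-second gap above is the guest boot: vm_wait_for_boot 300 0 polls the VM until SSH answers on the forwarded port (the QEMU command printed earlier maps host port 10000 to guest port 22 via -net user,hostfwd=tcp::10000-:22). A minimal sketch of an equivalent poll, not the harness's own implementation, assuming sshpass and the root/root credentials used throughout this log:

    # Retry a trivial remote command until the guest accepts SSH sessions,
    # giving up after 300 one-second attempts (the timeout passed above).
    for _ in $(seq 1 300); do
        if sshpass -p root ssh -o UserKnownHostsFile=/dev/null \
               -o StrictHostKeyChecking=no -p 10000 root@127.0.0.1 true 2>/dev/null; then
            echo 'VM0 ready'
            break
        fi
        sleep 1
    done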
00:13:27.067 INFO: all VMs ready 00:13:27.067 00:20:57 sma.sma_vfiouser_qemu -- vhost/common.sh@966 -- # return 0 00:13:27.067 00:20:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@129 -- # tgtpid=2094233 00:13:27.067 00:20:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@130 -- # waitforlisten 2094233 00:13:27.067 00:20:57 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@128 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:27.067 00:20:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@831 -- # '[' -z 2094233 ']' 00:13:27.067 00:20:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.067 00:20:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.067 00:20:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.067 00:20:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.067 00:20:57 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:27.067 [2024-10-09 00:20:57.575332] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:13:27.067 [2024-10-09 00:20:57.575417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094233 ] 00:13:27.067 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.067 [2024-10-09 00:20:57.678739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.335 [2024-10-09 00:20:57.873568] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@864 -- # return 0 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@133 -- # rpc_cmd dpdk_cryptodev_scan_accel_module 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@134 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:27.902 [2024-10-09 00:20:58.363370] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@135 -- # rpc_cmd accel_assign_opc -o encrypt -m dpdk_cryptodev 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:27.902 [2024-10-09 00:20:58.371384] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 
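The target above is started with --wait-for-rpc, which holds subsystem initialization until the crypto accel module is configured; the matching decrypt assignment and framework_start_init appear in the records that follow. A condensed sketch of the same pre-init sequence, assuming SPDK's scripts/rpc.py is called directly instead of the test's rpc_cmd wrapper:

    # Start the target paused, wire up the DPDK cryptodev accel module,
    # then let framework initialization proceed. (The harness additionally
    # waits for the RPC socket via waitforlisten before issuing these.)
    ./build/bin/spdk_tgt --wait-for-rpc &
    ./scripts/rpc.py dpdk_cryptodev_scan_accel_module
    ./scripts/rpc.py dpdk_cryptodev_set_driver -d crypto_aesni_mb
    ./scripts/rpc.py accel_assign_opc -o encrypt -m dpdk_cryptodev
    ./scripts/rpc.py accel_assign_opc -o decrypt -m dpdk_cryptodev
    ./scripts/rpc.py framework_start_init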
00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@136 -- # rpc_cmd accel_assign_opc -o decrypt -m dpdk_cryptodev 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:27.902 [2024-10-09 00:20:58.379412] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@137 -- # rpc_cmd framework_start_init 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.902 00:20:58 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:28.161 [2024-10-09 00:20:58.628642] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@140 -- # rpc_cmd bdev_null_create null0 100 4096 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:28.727 null0 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@141 -- # rpc_cmd bdev_null_create null1 100 4096 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:28.727 null1 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@160 -- # smapid=2094481 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@163 -- # sma_waitforlisten 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- sma/common.sh@8 -- # local sma_port=8080 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i = 0 )) 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@144 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@144 -- # cat 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i < 5 )) 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:13:28.727 00:20:59 sma.sma_vfiouser_qemu -- sma/common.sh@14 -- # sleep 1s 00:13:28.984 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:28.984 I0000 00:00:1728426059.452450 2094481 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:29.915 00:21:00 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i++ )) 00:13:29.915 00:21:00 sma.sma_vfiouser_qemu -- sma/common.sh@10 -- # (( i < 5 )) 00:13:29.915 00:21:00 sma.sma_vfiouser_qemu -- sma/common.sh@11 -- # 
nc -z 127.0.0.1 8080 00:13:29.915 00:21:00 sma.sma_vfiouser_qemu -- sma/common.sh@12 -- # return 0 00:13:29.915 00:21:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@166 -- # rpc_cmd nvmf_get_transports --trtype VFIOUSER 00:13:29.915 00:21:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:29.916 [ 00:13:29.916 { 00:13:29.916 "trtype": "VFIOUSER", 00:13:29.916 "max_queue_depth": 256, 00:13:29.916 "max_io_qpairs_per_ctrlr": 127, 00:13:29.916 "in_capsule_data_size": 0, 00:13:29.916 "max_io_size": 131072, 00:13:29.916 "io_unit_size": 131072, 00:13:29.916 "max_aq_depth": 32, 00:13:29.916 "num_shared_buffers": 0, 00:13:29.916 "buf_cache_size": 0, 00:13:29.916 "dif_insert_or_strip": false, 00:13:29.916 "zcopy": false, 00:13:29.916 "abort_timeout_sec": 0, 00:13:29.916 "ack_timeout": 0, 00:13:29.916 "data_wr_pool_size": 0 00:13:29.916 } 00:13:29.916 ] 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@169 -- # vm_exec 0 '[[ ! -e /sys/class/nvme-subsystem/nvme-subsys0 ]]' 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 '[[ ! -e /sys/class/nvme-subsystem/nvme-subsys0 ]]' 00:13:29.916 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
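At this point the SMA server answers on 127.0.0.1:8080, the VFIOUSER transport is registered (with in_capsule_data_size, num_shared_buffers and data_wr_pool_size forced to 0, per the transport dump above), and the guest is confirmed to have no NVMe subsystem yet. Every guest-side assertion in this test is a vm_exec call, which boils down to one command over the forwarded SSH port; the precondition check just run is, in effect:

    # Assert inside the guest that no NVMe subsystem exists before the first
    # CreateDevice call; the remote [[ ]] exit status is the test result.
    sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
        -o User=root -p 10000 127.0.0.1 \
        '[[ ! -e /sys/class/nvme-subsystem/nvme-subsys0 ]]'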
00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # create_device 0 0 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # jq -r .handle 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0 00:13:29.916 00:21:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:30.173 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:30.173 I0000 00:00:1728426060.682858 2094776 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:30.173 I0000 00:00:1728426060.684307 2094776 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:30.173 [2024-10-09 00:21:00.692886] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist 00:13:30.433 00:21:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@172 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0 00:13:30.433 00:21:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@173 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:30.433 00:21:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.433 00:21:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:30.433 [ 00:13:30.433 { 00:13:30.433 "nqn": "nqn.2016-06.io.spdk:vfiouser-0", 00:13:30.433 "subtype": "NVMe", 00:13:30.433 "listen_addresses": [ 00:13:30.433 { 00:13:30.433 "trtype": "VFIOUSER", 00:13:30.433 "adrfam": "IPv4", 00:13:30.433 "traddr": "/var/tmp/vfiouser-0", 00:13:30.433 "trsvcid": "" 00:13:30.433 } 00:13:30.433 ], 00:13:30.433 "allow_any_host": true, 00:13:30.433 "hosts": [], 00:13:30.433 "serial_number": "00000000000000000000", 00:13:30.433 "model_number": "SPDK bdev Controller", 00:13:30.433 "max_namespaces": 32, 00:13:30.433 "min_cntlid": 1, 00:13:30.433 "max_cntlid": 65519, 00:13:30.433 "namespaces": [] 00:13:30.433 } 00:13:30.433 ] 00:13:30.433 00:21:00 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.433 00:21:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@174 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-0 00:13:30.433 00:21:00 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1 00:13:30.433 [2024-10-09 00:21:00.986156] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller 00:13:31.370 00:21:01 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:31.370 00:21:01 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:31.370 00:21:01 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:31.370 00:21:01 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:31.370 00:21:01 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:31.370 00:21:01 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:31.370 00:21:01 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:31.370 00:21:01 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:31.370 00:21:01 sma.sma_vfiouser_qemu -- 
vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:31.370 00:21:01 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:31.370 00:21:01 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:31.370 00:21:01 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:31.370 00:21:01 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:31.370 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:31.628 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme0/subsysnqn 00:13:31.628 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme0/subsysnqn ]] 00:13:31.628 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # rpc_cmd nvmf_get_subsystems 00:13:31.628 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # jq -r '. | length' 00:13:31.628 00:21:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.628 00:21:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:31.628 00:21:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.628 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@177 -- # [[ 2 -eq 2 ]] 00:13:31.628 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # create_device 1 0 00:13:31.628 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # jq -r .handle 00:13:31.628 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1 00:13:31.628 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0 00:13:31.628 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:31.886 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:31.886 I0000 00:00:1728426062.278186 2095129 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:31.886 I0000 00:00:1728426062.279817 2095129 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:31.886 [2024-10-09 00:21:02.285635] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist 00:13:31.886 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@179 -- # device1=nvme:nqn.2016-06.io.spdk:vfiouser-1 00:13:31.886 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@180 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:31.886 00:21:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.886 00:21:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:31.886 [ 00:13:31.886 { 00:13:31.886 "nqn": "nqn.2016-06.io.spdk:vfiouser-0", 00:13:31.886 "subtype": "NVMe", 00:13:31.886 "listen_addresses": [ 00:13:31.886 { 00:13:31.886 "trtype": "VFIOUSER", 00:13:31.886 "adrfam": "IPv4", 00:13:31.886 "traddr": "/var/tmp/vfiouser-0", 00:13:31.886 "trsvcid": "" 00:13:31.886 } 00:13:31.886 ], 00:13:31.886 "allow_any_host": true, 00:13:31.886 "hosts": [], 00:13:31.886 
"serial_number": "00000000000000000000", 00:13:31.886 "model_number": "SPDK bdev Controller", 00:13:31.886 "max_namespaces": 32, 00:13:31.886 "min_cntlid": 1, 00:13:31.886 "max_cntlid": 65519, 00:13:31.886 "namespaces": [] 00:13:31.886 } 00:13:31.886 ] 00:13:31.886 00:21:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.886 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@181 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1 00:13:31.886 00:21:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.886 00:21:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:31.886 [ 00:13:31.886 { 00:13:31.886 "nqn": "nqn.2016-06.io.spdk:vfiouser-1", 00:13:31.886 "subtype": "NVMe", 00:13:31.886 "listen_addresses": [ 00:13:31.886 { 00:13:31.886 "trtype": "VFIOUSER", 00:13:31.886 "adrfam": "IPv4", 00:13:31.886 "traddr": "/var/tmp/vfiouser-1", 00:13:31.886 "trsvcid": "" 00:13:31.886 } 00:13:31.886 ], 00:13:31.886 "allow_any_host": true, 00:13:31.886 "hosts": [], 00:13:31.886 "serial_number": "00000000000000000000", 00:13:31.886 "model_number": "SPDK bdev Controller", 00:13:31.886 "max_namespaces": 32, 00:13:31.886 "min_cntlid": 1, 00:13:31.886 "max_cntlid": 65519, 00:13:31.886 "namespaces": [] 00:13:31.886 } 00:13:31.886 ] 00:13:31.886 00:21:02 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.886 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@182 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-0 != \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\1 ]] 00:13:31.886 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@183 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-1 00:13:31.886 00:21:02 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1 00:13:32.143 [2024-10-09 00:21:02.536169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-1: enabling controller 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:33.077 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme1/subsysnqn 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme1/subsysnqn ]] 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # rpc_cmd nvmf_get_subsystems 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # jq -r '. | length' 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@186 -- # [[ 3 -eq 3 ]] 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # create_device 0 0 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # jq -r .handle 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0 00:13:33.077 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:33.335 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:33.335 I0000 00:00:1728426063.860861 2095387 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:33.335 I0000 00:00:1728426063.862354 2095387 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:33.335 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@190 -- # tmp0=nvme:nqn.2016-06.io.spdk:vfiouser-0 00:13:33.335 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # create_device 1 0 00:13:33.335 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # jq -r .handle 00:13:33.335 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1 00:13:33.335 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0 00:13:33.335 00:21:03 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:33.592 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:33.592 I0000 00:00:1728426064.113432 2095411 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:33.592 I0000 00:00:1728426064.114980 2095411 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@191 -- # tmp1=nvme:nqn.2016-06.io.spdk:vfiouser-1 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@193 -- # vm_count_nvme 0 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true' 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:33.592 00:21:04 
sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:33.592 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true' 00:13:33.592 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:33.849 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@193 -- # [[ 2 -eq 2 ]] 00:13:33.849 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # rpc_cmd nvmf_get_subsystems 00:13:33.849 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # jq -r '. | length' 00:13:33.849 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.849 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:33.849 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.849 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@195 -- # [[ 3 -eq 3 ]] 00:13:33.849 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@196 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-0 == \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\0 ]] 00:13:33.849 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@197 -- # [[ nvme:nqn.2016-06.io.spdk:vfiouser-1 == \n\v\m\e\:\n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\v\f\i\o\u\s\e\r\-\1 ]] 00:13:33.849 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@200 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0 00:13:33.850 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:34.108 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:34.108 I0000 00:00:1728426064.550817 2095632 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:34.108 I0000 00:00:1728426064.552426 2095632 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:34.108 {} 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@201 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@650 -- # local es=0 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@638 -- # 
local arg=rpc_cmd 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:34.108 [2024-10-09 00:21:04.608892] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist 00:13:34.108 request: 00:13:34.108 { 00:13:34.108 "nqn": "nqn.2016-06.io.spdk:vfiouser-0", 00:13:34.108 "method": "nvmf_get_subsystems", 00:13:34.108 "req_id": 1 00:13:34.108 } 00:13:34.108 Got JSON-RPC error response 00:13:34.108 response: 00:13:34.108 { 00:13:34.108 "code": -19, 00:13:34.108 "message": "No such device" 00:13:34.108 } 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # es=1 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@202 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:34.108 [ 00:13:34.108 { 00:13:34.108 "nqn": "nqn.2016-06.io.spdk:vfiouser-1", 00:13:34.108 "subtype": "NVMe", 00:13:34.108 "listen_addresses": [ 00:13:34.108 { 00:13:34.108 "trtype": "VFIOUSER", 00:13:34.108 "adrfam": "IPv4", 00:13:34.108 "traddr": "/var/tmp/vfiouser-1", 00:13:34.108 "trsvcid": "" 00:13:34.108 } 00:13:34.108 ], 00:13:34.108 "allow_any_host": true, 00:13:34.108 "hosts": [], 00:13:34.108 "serial_number": "00000000000000000000", 00:13:34.108 "model_number": "SPDK bdev Controller", 00:13:34.108 "max_namespaces": 32, 00:13:34.108 "min_cntlid": 1, 00:13:34.108 "max_cntlid": 65519, 00:13:34.108 "namespaces": [] 00:13:34.108 } 00:13:34.108 ] 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # rpc_cmd nvmf_get_subsystems 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # jq -r '. 
| length' 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@203 -- # [[ 2 -eq 2 ]] 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@204 -- # vm_count_nvme 0 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true' 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:34.108 00:21:04 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true' 00:13:34.108 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
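The create_device/delete_device cycle traced above (vfiouser_qemu.sh@14-17 and @31) drives SPDK's Storage Management Agent through scripts/sma-client.py, which reads one JSON-encoded gRPC request from stdin and prints the response that jq -r .handle then picks apart. A minimal sketch of those two helpers, reconstructed from the xtrace markers — the exact request layout (CreateDevice/DeleteDevice with an nvme params block) is an assumption based on SMA's vfio-user device RPCs, not verbatim source:

# Sketch: create a vfio-user NVMe device; pfid/vfid are the locals seen in the xtrace.
# The returned handle looks like nvme:nqn.2016-06.io.spdk:vfiouser-<pfid>.
create_device() {
  "$rootdir/scripts/sma-client.py" <<EOF
{
  "method": "CreateDevice",
  "params": { "nvme": { "physical_id": "$1", "virtual_id": "$2" } }
}
EOF
}

# Sketch: delete by the handle CreateDevice returned.
delete_device() {
  "$rootdir/scripts/sma-client.py" <<EOF
{
  "method": "DeleteDevice",
  "params": { "handle": "$1" }
}
EOF
}

Deleting the device tears its subsystem down with it, which is why the nvmf_get_subsystems calls that follow a delete are expected to fail with -19 (No such device).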
00:13:34.366 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@204 -- # [[ 1 -eq 1 ]] 00:13:34.366 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@206 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1 00:13:34.366 00:21:04 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:34.624 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:34.624 I0000 00:00:1728426065.032034 2095697 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:34.624 I0000 00:00:1728426065.033681 2095697 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:34.624 {} 00:13:34.624 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@207 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:34.624 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@650 -- # local es=0 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:34.625 [2024-10-09 00:21:05.098315] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist 00:13:34.625 request: 00:13:34.625 { 00:13:34.625 "nqn": "nqn.2016-06.io.spdk:vfiouser-0", 00:13:34.625 "method": "nvmf_get_subsystems", 00:13:34.625 "req_id": 1 00:13:34.625 } 00:13:34.625 Got JSON-RPC error response 00:13:34.625 response: 00:13:34.625 { 00:13:34.625 "code": -19, 00:13:34.625 "message": "No such device" 00:13:34.625 } 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # es=1 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@208 -- # NOT rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@650 -- # local es=0 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:34.625 [2024-10-09 00:21:05.114365] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist 00:13:34.625 request: 00:13:34.625 { 00:13:34.625 "nqn": "nqn.2016-06.io.spdk:vfiouser-1", 00:13:34.625 "method": "nvmf_get_subsystems", 00:13:34.625 "req_id": 1 00:13:34.625 } 00:13:34.625 Got JSON-RPC error response 00:13:34.625 response: 00:13:34.625 { 00:13:34.625 "code": -19, 00:13:34.625 "message": "No such device" 00:13:34.625 } 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # es=1 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # rpc_cmd nvmf_get_subsystems 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # jq -r '. | length' 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@209 -- # [[ 1 -eq 1 ]] 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@210 -- # vm_count_nvme 0 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # vm_exec 0 'grep -sl SPDK /sys/class/nvme/*/model || true' 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@68 -- # wc -l 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:34.625 00:21:05 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o 
UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -sl SPDK /sys/class/nvme/*/model || true' 00:13:34.625 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:34.883 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@210 -- # [[ 0 -eq 0 ]] 00:13:34.883 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@213 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0 00:13:34.883 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:35.144 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:35.144 I0000 00:00:1728426065.532430 2095737 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:35.144 I0000 00:00:1728426065.533843 2095737 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:35.144 [2024-10-09 00:21:05.539577] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist 00:13:35.144 {} 00:13:35.144 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@214 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1 00:13:35.144 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:35.144 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:35.144 I0000 00:00:1728426065.771544 2095834 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:35.144 I0000 00:00:1728426065.773092 2095834 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:35.144 [2024-10-09 00:21:05.776260] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist 00:13:35.401 {} 00:13:35.401 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # create_device 0 0 00:13:35.401 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # jq -r .handle 00:13:35.401 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0 00:13:35.401 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0 00:13:35.401 00:21:05 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:35.401 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:35.401 I0000 00:00:1728426066.004868 2095993 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:35.401 I0000 00:00:1728426066.006231 2095993 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:35.401 [2024-10-09 00:21:06.008867] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist 00:13:35.659 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@217 -- # 
device0=nvme:nqn.2016-06.io.spdk:vfiouser-0 00:13:35.659 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # create_device 1 0 00:13:35.659 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # jq -r .handle 00:13:35.659 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=1 00:13:35.659 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0 00:13:35.659 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:35.659 [2024-10-09 00:21:06.262308] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller 00:13:35.930 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:35.930 I0000 00:00:1728426066.359682 2096027 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:35.930 I0000 00:00:1728426066.360980 2096027 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:35.930 [2024-10-09 00:21:06.365906] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-1' does not exist 00:13:35.930 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@218 -- # device1=nvme:nqn.2016-06.io.spdk:vfiouser-1 00:13:35.930 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # rpc_cmd bdev_get_bdevs -b null0 00:13:35.930 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # jq -r '.[].uuid' 00:13:35.930 00:21:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.930 00:21:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:35.930 00:21:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.187 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@219 -- # uuid0=5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:36.187 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # rpc_cmd bdev_get_bdevs -b null1 00:13:36.187 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # jq -r '.[].uuid' 00:13:36.187 00:21:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.187 00:21:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:36.187 00:21:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.187 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@220 -- # uuid1=25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:36.187 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@223 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:36.187 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:36.187 [2024-10-09 00:21:06.618500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-1: enabling controller 00:13:36.187 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:36.187 00:21:06 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:36.445 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:36.445 I0000 00:00:1728426066.838033 2096055 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, 
event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:36.445 I0000 00:00:1728426066.839453 2096055 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:36.445 {} 00:13:36.445 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:36.445 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # jq -r '.[0].namespaces | length' 00:13:36.445 00:21:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.445 00:21:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:36.445 00:21:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.445 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@224 -- # [[ 1 -eq 1 ]] 00:13:36.445 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1 00:13:36.445 00:21:06 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # jq -r '.[0].namespaces | length' 00:13:36.445 00:21:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.445 00:21:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:36.445 00:21:06 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@225 -- # [[ 0 -eq 0 ]] 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # jq -r '.[0].namespaces[0].uuid' 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@226 -- # [[ 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e == \5\a\3\b\6\3\e\f\-\3\5\c\c\-\4\a\b\4\-\b\b\a\7\-\d\9\c\4\2\e\a\0\9\9\1\e ]] 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@227 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}' 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:36.445 
00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:36.445 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:36.703 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]] 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e /sys/class/nvme/nvme0/nvme*/uuid' 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:36.703 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e /sys/class/nvme/nvme0/nvme*/uuid' 00:13:36.703 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
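Every guest-side probe in this log is the same vm_exec round trip that the vhost/common.sh xtrace spells out (@312-@334): resolve the host port QEMU forwards to the guest's sshd from the VM directory, then run the command through sshpass. Condensed into a sketch:

vm_ssh_socket() {
  # Each test VM directory records the forwarded ssh port (10000 for vm 0).
  cat "/root/vhost_test/vms/$1/ssh_socket"
}

vm_exec() {
  local vm_num=$1
  shift
  sshpass -p root ssh -o UserKnownHostsFile=/dev/null \
    -o StrictHostKeyChecking=no -o User=root \
    -p "$(vm_ssh_socket "$vm_num")" 127.0.0.1 "$@"
}

The known-hosts warnings sprinkled through the log are a side effect of pointing UserKnownHostsFile at /dev/null on every call.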
00:13:36.961 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid 00:13:36.961 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]] 00:13:36.961 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@229 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:36.961 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:36.961 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:36.961 00:21:07 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:37.220 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:37.220 I0000 00:00:1728426067.608817 2096318 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:37.220 I0000 00:00:1728426067.610614 2096318 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:37.220 {} 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # jq -r '.[0].namespaces | length' 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@230 -- # [[ 1 -eq 1 ]] 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # jq -r '.[0].namespaces | length' 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@231 -- # [[ 1 -eq 1 ]] 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # jq -r '.[0].namespaces[0].uuid' 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@232 -- # [[ 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e == \5\a\3\b\6\3\e\f\-\3\5\c\c\-\4\a\b\4\-\b\b\a\7\-\d\9\c\4\2\e\a\0\9\9\1\e ]] 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- 
common/autotest_common.sh@10 -- # set +x 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # jq -r '.[0].namespaces[0].uuid' 00:13:37.220 00:21:07 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@233 -- # [[ 25704b3d-e1ac-49f6-8fa8-4f227a917b74 == \2\5\7\0\4\b\3\d\-\e\1\a\c\-\4\9\f\6\-\8\f\a\8\-\4\f\2\2\7\a\9\1\7\b\7\4 ]] 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@234 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}' 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:37.478 00:21:07 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:37.478 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
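The attach_volume steps (vfiouser_qemu.sh@42) hand a bdev to a device by UUID. SMA's volume_id field carries the UUID's raw 16 bytes base64-encoded; that is what the "# python" step at sma/common.sh@20 computes. A sketch under those assumptions (the AttachVolume request shape is inferred, not quoted from source):

uuid2base64() {
  python3 -c 'import base64, sys, uuid; print(base64.b64encode(uuid.UUID(sys.argv[1]).bytes).decode())' "$1"
}

attach_volume() {
  "$rootdir/scripts/sma-client.py" <<EOF
{
  "method": "AttachVolume",
  "params": {
    "device_handle": "$1",
    "volume": { "volume_id": "$(uuid2base64 "$2")" }
  }
}
EOF
}

For example, uuid2base64 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e encodes those 16 bytes as Wjtj7zXMSrS7p9nELqCZHg==.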
00:13:37.478 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1 00:13:37.478 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]] 00:13:37.478 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 25704b3d-e1ac-49f6-8fa8-4f227a917b74 /sys/class/nvme/nvme1/nvme*/uuid' 00:13:37.479 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:37.479 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:37.479 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:37.479 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:37.479 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:37.479 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:37.479 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:37.479 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:37.479 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:37.479 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:37.479 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:37.479 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 25704b3d-e1ac-49f6-8fa8-4f227a917b74 /sys/class/nvme/nvme1/nvme*/uuid' 00:13:37.479 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:37.736 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid 00:13:37.736 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]] 00:13:37.736 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@237 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:37.736 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:37.736 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:37.736 00:21:08 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:37.994 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:37.994 I0000 00:00:1728426068.440212 2096386 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:37.994 I0000 00:00:1728426068.441789 2096386 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:37.994 {} 00:13:37.994 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@238 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:37.994 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:37.994 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:37.994 00:21:08 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:38.258 WARNING: All log messages 
before absl::InitializeLog() is called are written to STDERR 00:13:38.258 I0000 00:00:1728426068.738749 2096629 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:38.258 I0000 00:00:1728426068.740358 2096629 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:38.258 {} 00:13:38.258 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:38.258 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # jq -r '.[0].namespaces | length' 00:13:38.258 00:21:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.258 00:21:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:38.258 00:21:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.258 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@239 -- # [[ 1 -eq 1 ]] 00:13:38.258 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1 00:13:38.258 00:21:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.258 00:21:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:38.258 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # jq -r '.[0].namespaces | length' 00:13:38.258 00:21:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@240 -- # [[ 1 -eq 1 ]] 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # jq -r '.[0].namespaces[0].uuid' 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@241 -- # [[ 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e == \5\a\3\b\6\3\e\f\-\3\5\c\c\-\4\a\b\4\-\b\b\a\7\-\d\9\c\4\2\e\a\0\9\9\1\e ]] 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # jq -r '.[0].namespaces[0].uuid' 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@242 -- # [[ 25704b3d-e1ac-49f6-8fa8-4f227a917b74 == \2\5\7\0\4\b\3\d\-\e\1\a\c\-\4\9\f\6\-\8\f\a\8\-\4\f\2\2\7\a\9\1\7\b\7\4 ]] 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@243 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local 
nqn=nqn.2016-06.io.spdk:vfiouser-0 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}' 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:38.517 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:38.518 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:38.518 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:38.518 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:38.518 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:38.518 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:38.518 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:38.518 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:38.518 00:21:08 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:38.518 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]] 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e /sys/class/nvme/nvme0/nvme*/uuid' 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:38.518 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e /sys/class/nvme/nvme0/nvme*/uuid' 00:13:38.776 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
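The repeated two-grep sequence is vm_check_subsys_volume (vfiouser_qemu.sh@72-@84): first map the NQN to a controller name by grepping every controller's subsysnqn (awk -F/ '{print $5}' peels "nvme0" out of /sys/class/nvme/nvme0/subsysnqn), then confirm the volume's UUID shows up under that controller. Condensed from the xtrace:

vm_check_subsys_volume() {
  local vm_id=$1 nqn=$2 uuid=$3
  local nvme tmpuuid

  # /sys/class/nvme/nvme0/subsysnqn -> path field 5 is the controller name.
  nvme=$(vm_exec "$vm_id" "grep -l $nqn /sys/class/nvme/*/subsysnqn" | awk -F/ '{print $5}')
  if [[ -z "$nvme" ]]; then
    return 1
  fi

  # Namespace uuid files live under the controller, e.g. nvme0/nvme0c0n1/uuid.
  tmpuuid=$(vm_exec "$vm_id" "grep -l $uuid /sys/class/nvme/$nvme/nvme*/uuid")
  if [[ -z "$tmpuuid" ]]; then
    return 1
  fi
}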
00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]] 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@244 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@650 -- # local es=0 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@638 -- # local arg=vm_check_subsys_volume 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # type -t vm_check_subsys_volume 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}' 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:38.776 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:38.776 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
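The NOT wrapper driving this check (autotest_common.sh@650-@677, also used earlier for the deleted-subsystem RPCs) asserts that a command fails cleanly: it succeeds only when the wrapped command exits nonzero, while still treating signal-range statuses (>128) as real failures. Stripped of the valid_exec_arg/type -t dispatch visible in the trace, the inversion amounts to:

NOT() {
  local es=0
  "$@" || es=$?
  if ((es > 128)); then
    return "$es"  # died on a signal - propagate rather than invert
  fi
  ((es != 0))  # exit 0 only if the wrapped command failed
}

Here it wraps vm_check_subsys_volume to prove that uuid 25704b3d-e1ac-49f6-8fa8-4f227a917b74 is not visible through the vfiouser-0 controller.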
00:13:39.033 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0 00:13:39.033 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]] 00:13:39.033 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 25704b3d-e1ac-49f6-8fa8-4f227a917b74 /sys/class/nvme/nvme0/nvme*/uuid' 00:13:39.033 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 25704b3d-e1ac-49f6-8fa8-4f227a917b74 /sys/class/nvme/nvme0/nvme*/uuid' 00:13:39.034 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid= 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]] 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # es=1 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@245 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}' 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:39.034 00:21:09 
sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:39.034 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:39.292 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]] 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 25704b3d-e1ac-49f6-8fa8-4f227a917b74 /sys/class/nvme/nvme1/nvme*/uuid' 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:39.292 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 25704b3d-e1ac-49f6-8fa8-4f227a917b74 /sys/class/nvme/nvme1/nvme*/uuid' 00:13:39.292 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
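The nvme0c0n1 / nvme1c1n1 names being grepped come from the kernel's native NVMe multipath layout: under /sys/class/nvme/nvmeX, each namespace appears as nvmeXcYnZ (controller instance Y, namespace Z) with its own uuid attribute. An illustrative guest-side walk over the same files these checks touch:

# List each controller's subsystem NQN and the UUIDs of its namespaces.
for ctrl in /sys/class/nvme/nvme*; do
  echo "$ctrl -> $(cat "$ctrl/subsysnqn")"
  cat "$ctrl"/nvme*/uuid 2>/dev/null
done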
00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]] 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@246 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@650 -- # local es=0 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@638 -- # local arg=vm_check_subsys_volume 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # type -t vm_check_subsys_volume 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}' 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:39.550 00:21:09 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:39.550 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
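About the heavily backslashed comparisons like [[ ... == \2\5\7\0\4\b\3\d... ]]: inside [[ ]] the right-hand side of == is a glob pattern, so the test quotes it to force a literal string match, and bash's xtrace then prints the quoted pattern with every character escaped. The distinction in two lines:

uuid=25704b3d-e1ac-49f6-8fa8-4f227a917b74
[[ $uuid == "$uuid" ]]     # quoted RHS: exact literal match (what the test does)
[[ $uuid == 25704b3d-* ]]  # unquoted RHS: treated as a glob pattern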
00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]] 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e /sys/class/nvme/nvme1/nvme*/uuid' 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:39.550 00:21:10 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e /sys/class/nvme/nvme1/nvme*/uuid' 00:13:39.550 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:39.808 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid= 00:13:39.808 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]] 00:13:39.808 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1 00:13:39.808 00:21:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # es=1 00:13:39.808 00:21:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:39.808 00:21:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:39.808 00:21:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:39.808 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@249 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:39.808 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:39.808 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:39.808 00:21:10 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:40.065 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:40.065 I0000 00:00:1728426070.507439 2097299 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:40.065 I0000 00:00:1728426070.509090 2097299 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:40.065 {} 00:13:40.065 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@250 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 
5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:40.065 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:40.065 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:40.065 00:21:10 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:40.323 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:40.323 I0000 00:00:1728426070.810871 2097336 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:40.323 I0000 00:00:1728426070.812219 2097336 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:40.323 {} 00:13:40.323 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:40.323 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # jq -r '.[0].namespaces | length' 00:13:40.323 00:21:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.323 00:21:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:40.323 00:21:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.323 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@251 -- # [[ 1 -eq 1 ]] 00:13:40.323 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1 00:13:40.323 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # jq -r '.[0].namespaces | length' 00:13:40.323 00:21:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.323 00:21:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:40.323 00:21:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.581 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@252 -- # [[ 1 -eq 1 ]] 00:13:40.581 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:40.581 00:21:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.581 00:21:10 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # jq -r '.[0].namespaces[0].uuid' 00:13:40.581 00:21:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:40.581 00:21:10 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@253 -- # [[ 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e == \5\a\3\b\6\3\e\f\-\3\5\c\c\-\4\a\b\4\-\b\b\a\7\-\d\9\c\4\2\e\a\0\9\9\1\e ]] 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # jq -r '.[0].namespaces[0].uuid' 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@254 -- # [[ 25704b3d-e1ac-49f6-8fa8-4f227a917b74 == 
\2\5\7\0\4\b\3\d\-\e\1\a\c\-\4\9\f\6\-\8\f\a\8\-\4\f\2\2\7\a\9\1\7\b\7\4 ]] 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@255 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}' 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:40.581 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:40.581 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
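The vm_exec plumbing behind every one of these guest commands is plain SSH to a per-VM forwarded port recorded on disk. A sketch of the path through vhost/common.sh, as traced at @329-@334:

vm_exec() {   # sketch of the vhost/common.sh helper the trace keeps entering
    local vm_num=$1; shift
    local vm_dir=/root/vhost_test/vms/$vm_num
    local port
    port=$(cat "$vm_dir/ssh_socket")   # per-VM forwarded SSH port (10000 for vm 0 here)
    sshpass -p root ssh -o UserKnownHostsFile=/dev/null \
        -o StrictHostKeyChecking=no -o User=root \
        -p "$port" 127.0.0.1 "$@"
}

Each call is a fresh SSH session with host-key checking disabled, which is why every guest command in this log is followed by the "Permanently added '[127.0.0.1]:10000'" warning.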
00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]] 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e /sys/class/nvme/nvme0/nvme*/uuid' 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e /sys/class/nvme/nvme0/nvme*/uuid' 00:13:40.840 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme0/nvme0c0n1/uuid 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme0/nvme0c0n1/uuid ]] 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@256 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@650 -- # local es=0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@638 -- # local arg=vm_check_subsys_volume 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # type -t vm_check_subsys_volume 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:40.840 00:21:11 
sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}' 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:40.840 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:40.840 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]] 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 25704b3d-e1ac-49f6-8fa8-4f227a917b74 /sys/class/nvme/nvme0/nvme*/uuid' 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 25704b3d-e1ac-49f6-8fa8-4f227a917b74 /sys/class/nvme/nvme0/nvme*/uuid' 00:13:41.105 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid= 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]] 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # es=1 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@257 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0 00:13:41.105 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}' 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:41.106 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:41.366 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
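Each detach_volume step in this log (@249/@250 above, @261/@262 below) funnels through sma-client.py with a volume id produced by uuid2base64 (sma/common.sh@20). SMA requests carry the volume id as the UUID's raw 16 bytes, base64-encoded; the trace only shows the helper shelling out to python, so the exact one-liner below is an assumed shape:

uuid2base64() {   # sketch; the trace invokes plain `python` at sma/common.sh@20
    python3 -c 'import base64,sys,uuid; print(base64.b64encode(uuid.UUID(sys.argv[1]).bytes).decode())' "$1"
}
# uuid2base64 25704b3d-e1ac-49f6-8fa8-4f227a917b74
# -> JXBLPeGsSfaPqE8iepF7dA==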
00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]] 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 25704b3d-e1ac-49f6-8fa8-4f227a917b74 /sys/class/nvme/nvme1/nvme*/uuid' 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:41.366 00:21:11 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 25704b3d-e1ac-49f6-8fa8-4f227a917b74 /sys/class/nvme/nvme1/nvme*/uuid' 00:13:41.366 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid=/sys/class/nvme/nvme1/nvme1c1n1/uuid 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z /sys/class/nvme/nvme1/nvme1c1n1/uuid ]] 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@258 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- common/autotest_common.sh@650 -- # local es=0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- common/autotest_common.sh@638 -- # local arg=vm_check_subsys_volume 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # type -t vm_check_subsys_volume 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:41.624 00:21:12 
sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}' 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:41.624 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]] 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e /sys/class/nvme/nvme1/nvme*/uuid' 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:41.624 00:21:12 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e /sys/class/nvme/nvme1/nvme*/uuid' 00:13:41.882 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
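What the greps are walking on the guest side: each vfio-user controller appears as /sys/class/nvme/nvmeX, with per-namespace uuid files one level down (the nvme1c1n1 name comes from the kernel's multipath controller/namespace scheme). A quick probe of the same layout inside the VM:

# e.g. inside vm 0, per the paths in this trace:
#   /sys/class/nvme/nvme1/subsysnqn      -> nqn.2016-06.io.spdk:vfiouser-1
#   /sys/class/nvme/nvme1/nvme1c1n1/uuid -> 25704b3d-e1ac-49f6-8fa8-4f227a917b74
for c in /sys/class/nvme/nvme*; do
    echo "$c -> $(cat "$c/subsysnqn")"
    cat "$c"/nvme*/uuid 2>/dev/null   # grep -l over these files is the attach check
done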
00:13:41.882 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid= 00:13:41.882 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]] 00:13:41.882 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1 00:13:41.882 00:21:12 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # es=1 00:13:41.882 00:21:12 sma.sma_vfiouser_qemu -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:41.882 00:21:12 sma.sma_vfiouser_qemu -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:41.882 00:21:12 sma.sma_vfiouser_qemu -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:41.882 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@261 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:41.882 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:41.882 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:41.882 00:21:12 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:42.140 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:42.140 I0000 00:00:1728426072.630757 2097684 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:42.140 I0000 00:00:1728426072.632428 2097684 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:42.140 {} 00:13:42.140 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@262 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:42.140 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:42.140 00:21:12 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:42.140 00:21:12 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:42.398 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:42.398 I0000 00:00:1728426072.932990 2097889 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:42.398 I0000 00:00:1728426072.934558 2097889 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:42.398 {} 00:13:42.398 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:42.398 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # jq -r '.[0].namespaces | length' 00:13:42.398 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.398 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:42.398 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@263 -- # [[ 0 -eq 0 ]] 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-1 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- 
sma/vfiouser_qemu.sh@264 -- # jq -r '.[0].namespaces | length' 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@264 -- # [[ 0 -eq 0 ]] 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@265 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@650 -- # local es=0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@638 -- # local arg=vm_check_subsys_volume 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # type -t vm_check_subsys_volume 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-0 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@72 -- # local vm_id=0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}' 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-0 /sys/class/nvme/*/subsysnqn' 00:13:42.656 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
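The @263/@264 assertions above ask the SPDK RPC server how many namespaces each subsystem still exposes; after the @261/@262 detaches both counts must be zero, and the guest-side recheck continuing below must agree. Condensed, the host-side half is:

for nqn in nqn.2016-06.io.spdk:vfiouser-{0,1}; do
    count=$(rpc_cmd nvmf_get_subsystems "$nqn" | jq -r '.[0].namespaces | length')
    [[ $count -eq 0 ]]   # under the suite's error handling a leftover namespace fails the run
done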
00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme0 ]] 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e /sys/class/nvme/nvme0/nvme*/uuid' 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:42.656 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e /sys/class/nvme/nvme0/nvme*/uuid' 00:13:42.914 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:42.914 grep: /sys/class/nvme/nvme0/nvme*/uuid: No such file or directory 00:13:42.914 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid= 00:13:42.914 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]] 00:13:42.914 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1 00:13:42.914 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # es=1 00:13:42.914 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:42.914 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:42.914 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:42.914 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@266 -- # NOT vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:42.914 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@650 -- # local es=0 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # valid_exec_arg vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@638 -- # local arg=vm_check_subsys_volume 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # type -t vm_check_subsys_volume 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # vm_check_subsys_volume 0 nqn.2016-06.io.spdk:vfiouser-1 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- 
sma/vfiouser_qemu.sh@72 -- # local vm_id=0 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@73 -- # local nqn=nqn.2016-06.io.spdk:vfiouser-1 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@74 -- # local uuid=25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # awk -F/ '{print $5}' 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:42.915 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-1 /sys/class/nvme/*/subsysnqn' 00:13:42.915 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
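All the NOT wrappers in this stretch (@265/@266 and earlier) come from common/autotest_common.sh: they run the command and invert its status, so an expected failure keeps the suite green. The shape, per the @650-@677 trace lines (a sketch; the real helper also validates the argument with type -t and screens exit codes above 128):

NOT() {   # sketch of the common/autotest_common.sh inverter
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1   # killed by a signal: a real failure, not an expected one
    (( es != 0 ))                # succeed only if the wrapped command failed
}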
00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@76 -- # nvme=nvme1 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@77 -- # [[ -z nvme1 ]] 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # vm_exec 0 'grep -l 25704b3d-e1ac-49f6-8fa8-4f227a917b74 /sys/class/nvme/nvme1/nvme*/uuid' 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l 25704b3d-e1ac-49f6-8fa8-4f227a917b74 /sys/class/nvme/nvme1/nvme*/uuid' 00:13:43.173 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:43.173 grep: /sys/class/nvme/nvme1/nvme*/uuid: No such file or directory 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@82 -- # tmpuuid= 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@83 -- # [[ -z '' ]] 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@84 -- # return 1 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # es=1 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@269 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:43.173 00:21:13 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:43.430 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:43.430 I0000 00:00:1728426073.995427 2097972 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:43.430 I0000 00:00:1728426073.996882 2097972 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:43.430 {} 00:13:43.430 00:21:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@270 -- # 
detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:43.430 00:21:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:43.687 00:21:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:43.687 00:21:14 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:43.687 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:43.687 I0000 00:00:1728426074.293247 2098178 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:43.687 I0000 00:00:1728426074.294864 2098178 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:43.945 {} 00:13:43.945 00:21:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@271 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:43.945 00:21:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:43.945 00:21:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 25704b3d-e1ac-49f6-8fa8-4f227a917b74 00:13:43.945 00:21:14 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:43.945 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:43.945 I0000 00:00:1728426074.577390 2098224 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:43.945 I0000 00:00:1728426074.578986 2098224 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:44.202 {} 00:13:44.202 00:21:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@272 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-1 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:44.202 00:21:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:44.202 00:21:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:44.202 00:21:14 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:44.460 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:44.460 I0000 00:00:1728426074.872076 2098252 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:44.460 I0000 00:00:1728426074.873479 2098252 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:44.461 {} 00:13:44.461 00:21:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@274 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0 00:13:44.461 00:21:14 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:44.717 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:44.717 I0000 00:00:1728426075.116646 2098275 config.cc:230] gRPC experiments 
enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:44.717 I0000 00:00:1728426075.118077 2098275 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:44.717 {} 00:13:44.717 00:21:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@275 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-1 00:13:44.717 00:21:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:44.974 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:44.974 I0000 00:00:1728426075.374466 2098327 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:44.974 I0000 00:00:1728426075.376049 2098327 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:44.974 {} 00:13:44.974 00:21:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # create_device 42 0 00:13:44.974 00:21:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # jq -r .handle 00:13:44.974 00:21:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=42 00:13:44.974 00:21:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0 00:13:44.974 00:21:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:45.232 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:45.232 I0000 00:00:1728426075.610423 2098525 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:45.232 I0000 00:00:1728426075.612006 2098525 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:45.232 [2024-10-09 00:21:15.614875] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-42' does not exist 00:13:45.232 00:21:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@278 -- # device3=nvme:nqn.2016-06.io.spdk:vfiouser-42 00:13:45.232 00:21:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@279 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42 00:13:45.232 00:21:15 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1 00:13:45.488 [2024-10-09 00:21:15.883314] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-42: enabling controller 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn' 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- 
vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn' 00:13:46.420 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn=/sys/class/nvme/nvme0/subsysnqn 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z /sys/class/nvme/nvme0/subsysnqn ]] 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@282 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-42 00:13:46.420 00:21:16 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:46.677 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:46.677 I0000 00:00:1728426077.137931 2098789 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:46.677 I0000 00:00:1728426077.139495 2098789 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:46.677 {} 00:13:46.677 00:21:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@283 -- # NOT vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42 00:13:46.677 00:21:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@650 -- # local es=0 00:13:46.677 00:21:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@652 -- # valid_exec_arg vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42 00:13:46.677 00:21:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@638 -- # local arg=vm_check_subsys_nqn 00:13:46.677 00:21:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.678 00:21:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # type -t vm_check_subsys_nqn 00:13:46.678 00:21:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.678 00:21:17 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # vm_check_subsys_nqn 0 nqn.2016-06.io.spdk:vfiouser-42 00:13:46.678 00:21:17 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@89 -- # sleep 1 00:13:47.610 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # vm_exec 0 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn' 00:13:47.610 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:13:47.610 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:47.610 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:47.610 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@331 -- # local vm_num=0 00:13:47.610 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@332 -- # shift 00:13:47.610 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # vm_ssh_socket 0 
00:13:47.610 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:13:47.610 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:47.610 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:47.610 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:13:47.610 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:13:47.610 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'grep -l nqn.2016-06.io.spdk:vfiouser-42 /sys/class/nvme/*/subsysnqn' 00:13:47.868 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:13:47.868 grep: /sys/class/nvme/*/subsysnqn: No such file or directory 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@90 -- # nqn= 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@91 -- # [[ -z '' ]] 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@92 -- # error 'FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42' 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@82 -- # echo =========== 00:13:47.868 =========== 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@83 -- # message ERROR 'FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42' 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out= 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=ERROR 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'ERROR: FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42' 00:13:47.868 ERROR: FAILED no NVMe on vm=0 with nqn=nqn.2016-06.io.spdk:vfiouser-42 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@84 -- # echo =========== 00:13:47.868 =========== 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- vhost/common.sh@86 -- # false 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@93 -- # return 1 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- common/autotest_common.sh@653 -- # es=1 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@285 -- # key0=1234567890abcdef1234567890abcdef 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # create_device 0 0 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # jq -r .handle 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0 00:13:47.868 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:48.126 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:48.126 I0000 
00:00:1728426078.571643 2099042 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:48.126 I0000 00:00:1728426078.573047 2099042 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:48.126 [2024-10-09 00:21:18.579395] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist 00:13:48.126 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@286 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0 00:13:48.126 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # rpc_cmd bdev_get_bdevs -b null0 00:13:48.126 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # jq -r '.[].uuid' 00:13:48.126 00:21:18 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.126 00:21:18 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:48.384 00:21:18 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.384 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@287 -- # uuid0=5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:48.384 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:48.384 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # uuid2base64 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:48.384 00:21:18 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:48.384 [2024-10-09 00:21:18.840806] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller 00:13:48.384 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # get_cipher AES_CBC 00:13:48.384 00:21:18 sma.sma_vfiouser_qemu -- sma/common.sh@27 -- # case "$1" in 00:13:48.384 00:21:18 sma.sma_vfiouser_qemu -- sma/common.sh@28 -- # echo 0 00:13:48.384 00:21:18 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@290 -- # format_key 1234567890abcdef1234567890abcdef 00:13:48.384 00:21:18 sma.sma_vfiouser_qemu -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62 00:13:48.384 00:21:18 sma.sma_vfiouser_qemu -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef 00:13:48.642 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:48.642 I0000 00:00:1728426079.027309 2099079 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:48.642 I0000 00:00:1728426079.028697 2099079 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:48.642 {} 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:vfiouser-0 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # jq -r '.[0].namespaces[0].name' 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@307 -- # ns_bdev=b54358ce-fd22-48a6-8353-8753a1a9174b 00:13:48.642 00:21:19 
sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # rpc_cmd bdev_get_bdevs -b b54358ce-fd22-48a6-8353-8753a1a9174b 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # jq -r '.[0].product_name' 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@308 -- # [[ crypto == \c\r\y\p\t\o ]] 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # rpc_cmd bdev_get_bdevs -b b54358ce-fd22-48a6-8353-8753a1a9174b 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # jq -r '.[] | select(.product_name == "crypto")' 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@309 -- # crypto_bdev='{ 00:13:48.642 "name": "b54358ce-fd22-48a6-8353-8753a1a9174b", 00:13:48.642 "aliases": [ 00:13:48.642 "63ddc178-6373-576f-93f9-3ef30b45188a" 00:13:48.642 ], 00:13:48.642 "product_name": "crypto", 00:13:48.642 "block_size": 4096, 00:13:48.642 "num_blocks": 25600, 00:13:48.642 "uuid": "63ddc178-6373-576f-93f9-3ef30b45188a", 00:13:48.642 "assigned_rate_limits": { 00:13:48.642 "rw_ios_per_sec": 0, 00:13:48.642 "rw_mbytes_per_sec": 0, 00:13:48.642 "r_mbytes_per_sec": 0, 00:13:48.642 "w_mbytes_per_sec": 0 00:13:48.642 }, 00:13:48.642 "claimed": true, 00:13:48.642 "claim_type": "exclusive_write", 00:13:48.642 "zoned": false, 00:13:48.642 "supported_io_types": { 00:13:48.642 "read": true, 00:13:48.642 "write": true, 00:13:48.642 "unmap": false, 00:13:48.642 "flush": false, 00:13:48.642 "reset": true, 00:13:48.642 "nvme_admin": false, 00:13:48.642 "nvme_io": false, 00:13:48.642 "nvme_io_md": false, 00:13:48.642 "write_zeroes": true, 00:13:48.642 "zcopy": false, 00:13:48.642 "get_zone_info": false, 00:13:48.642 "zone_management": false, 00:13:48.642 "zone_append": false, 00:13:48.642 "compare": false, 00:13:48.642 "compare_and_write": false, 00:13:48.642 "abort": false, 00:13:48.642 "seek_hole": false, 00:13:48.642 "seek_data": false, 00:13:48.642 "copy": false, 00:13:48.642 "nvme_iov_md": false 00:13:48.642 }, 00:13:48.642 "memory_domains": [ 00:13:48.642 { 00:13:48.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.642 "dma_device_type": 2 00:13:48.642 } 00:13:48.642 ], 00:13:48.642 "driver_specific": { 00:13:48.642 "crypto": { 00:13:48.642 "base_bdev_name": "null0", 00:13:48.642 "name": "b54358ce-fd22-48a6-8353-8753a1a9174b", 00:13:48.642 "key_name": "b54358ce-fd22-48a6-8353-8753a1a9174b_AES_CBC" 00:13:48.642 } 00:13:48.642 } 00:13:48.642 }' 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # rpc_cmd bdev_get_bdevs 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # jq -r '[.[] | select(.product_name == "crypto")] | length' 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:48.642 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.900 00:21:19 
sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@310 -- # [[ 1 -eq 1 ]] 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@312 -- # jq -r .driver_specific.crypto.key_name 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@312 -- # key_name=b54358ce-fd22-48a6-8353-8753a1a9174b_AES_CBC 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@313 -- # rpc_cmd accel_crypto_keys_get -k b54358ce-fd22-48a6-8353-8753a1a9174b_AES_CBC 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@313 -- # key_obj='[ 00:13:48.900 { 00:13:48.900 "name": "b54358ce-fd22-48a6-8353-8753a1a9174b_AES_CBC", 00:13:48.900 "cipher": "AES_CBC", 00:13:48.900 "key": "1234567890abcdef1234567890abcdef" 00:13:48.900 } 00:13:48.900 ]' 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@314 -- # jq -r '.[0].key' 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@314 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]] 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@315 -- # jq -r '.[0].cipher' 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@315 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]] 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@317 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:48.900 00:21:19 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:49.158 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:49.158 I0000 00:00:1728426079.650226 2099229 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:49.158 I0000 00:00:1728426079.651806 2099229 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:49.158 {} 00:13:49.158 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@318 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0 00:13:49.158 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:49.416 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:49.416 I0000 00:00:1728426079.928814 2099365 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:49.416 I0000 00:00:1728426079.930430 2099365 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:49.416 {} 00:13:49.416 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # rpc_cmd bdev_get_bdevs 00:13:49.416 00:21:19 sma.sma_vfiouser_qemu -- 
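Two idioms worth decoding in the assertions above: the right-hand sides rendered as \A\E\S\_\C\B\C are xtrace's escaping of a literal (pattern-free) [[ == ]] match, and the key check chains accel_crypto_keys_get through jq. Standalone, under the same socket assumption as before:

key_name=$(./scripts/rpc.py bdev_get_bdevs -b "$ns_bdev" |
  jq -r '.[0].driver_specific.crypto.key_name')          # <bdev-name>_AES_CBC

key_obj=$(./scripts/rpc.py accel_crypto_keys_get -k "$key_name")

[[ $(jq -r '.[0].cipher' <<< "$key_obj") == "AES_CBC" ]]
[[ $(jq -r '.[0].key' <<< "$key_obj") == "1234567890abcdef1234567890abcdef" ]]
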
sma/vfiouser_qemu.sh@319 -- # jq -r '.[] | select(.product_name == "crypto")' 00:13:49.416 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.416 00:21:19 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # jq -r length 00:13:49.416 00:21:19 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:49.416 00:21:20 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.416 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@319 -- # [[ '' -eq 0 ]] 00:13:49.416 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@322 -- # device_vfio_user=1 00:13:49.416 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # create_device 0 0 00:13:49.416 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # jq -r .handle 00:13:49.416 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@14 -- # local pfid=0 00:13:49.416 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@15 -- # local vfid=0 00:13:49.416 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@17 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:49.674 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:49.674 I0000 00:00:1728426080.213103 2099396 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:49.674 I0000 00:00:1728426080.214516 2099396 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:49.674 [2024-10-09 00:21:20.220821] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:vfiouser-0' does not exist 00:13:49.931 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@323 -- # device0=nvme:nqn.2016-06.io.spdk:vfiouser-0 00:13:49.931 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@324 -- # attach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:49.931 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:49.931 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@42 -- # uuid2base64 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:49.931 00:21:20 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:49.931 [2024-10-09 00:21:20.481269] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/tmp/vfiouser-0: enabling controller 00:13:50.189 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:50.190 I0000 00:00:1728426080.609936 2099421 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:50.190 I0000 00:00:1728426080.611700 2099421 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:50.190 {} 00:13:50.190 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # jq --sort-keys 00:13:50.190 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # diff /dev/fd/62 /dev/fd/61 00:13:50.190 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # get_qos_caps 1 00:13:50.190 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@327 -- # jq --sort-keys 00:13:50.190 00:21:20 sma.sma_vfiouser_qemu -- 
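create_device and attach_volume both drive scripts/sma-client.py, which reads a single JSON request from stdin and prints the gRPC response; the pfid/vfid locals above become the device identity, and the vfio_user NOTICE confirms the resulting controller socket under /var/tmp/vfiouser-0. A plausible request shape follows; the method and field names mirror the SMA protobufs and are assumptions here, reconstructed for illustration only:

create_device() {
  local pfid=$1 vfid=$2
  # field names assumed from the SMA proto; not verbatim vfiouser_qemu.sh
  ./scripts/sma-client.py <<EOF
{
  "method": "CreateDevice",
  "params": {
    "nvme": { "physical_id": "$pfid", "virtual_id": "$vfid" }
  }
}
EOF
}

device0=$(create_device 0 0 | jq -r '.handle')   # nvme:nqn.2016-06.io.spdk:vfiouser-0
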
sma/common.sh@45 -- # local rootdir 00:13:50.190 00:21:20 sma.sma_vfiouser_qemu -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh 00:13:50.190 00:21:20 sma.sma_vfiouser_qemu -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../.. 00:13:50.190 00:21:20 sma.sma_vfiouser_qemu -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py 00:13:50.448 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:50.448 I0000 00:00:1728426080.889680 2099488 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:50.448 I0000 00:00:1728426080.891099 2099488 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:50.448 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@340 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:50.448 00:21:20 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@340 -- # uuid2base64 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:50.448 00:21:20 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:50.705 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:50.705 I0000 00:00:1728426081.160648 2099658 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:50.705 I0000 00:00:1728426081.162113 2099658 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:50.705 {} 00:13:50.705 00:21:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # diff /dev/fd/62 /dev/fd/61 00:13:50.705 00:21:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # jq --sort-keys 00:13:50.705 00:21:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # jq --sort-keys '.[].assigned_rate_limits' 00:13:50.705 00:21:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@359 -- # rpc_cmd bdev_get_bdevs -b null0 00:13:50.705 00:21:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.705 00:21:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:50.705 00:21:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.705 00:21:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@370 -- # detach_volume nvme:nqn.2016-06.io.spdk:vfiouser-0 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:50.705 00:21:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:50.705 00:21:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@56 -- # uuid2base64 5a3b63ef-35cc-4ab4-bba7-d9c42ea0991e 00:13:50.705 00:21:21 sma.sma_vfiouser_qemu -- sma/common.sh@20 -- # python 00:13:50.963 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:50.963 I0000 00:00:1728426081.498145 2099720 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:50.963 I0000 00:00:1728426081.499635 2099720 
http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:50.963 {} 00:13:50.963 00:21:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@371 -- # delete_device nvme:nqn.2016-06.io.spdk:vfiouser-0 00:13:50.963 00:21:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@31 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:51.307 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:51.307 I0000 00:00:1728426081.761450 2099749 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:51.307 I0000 00:00:1728426081.762783 2099749 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:51.307 {} 00:13:51.307 00:21:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@373 -- # cleanup 00:13:51.307 00:21:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@98 -- # vm_kill_all 00:13:51.307 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@469 -- # local vm 00:13:51.307 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@470 -- # vm_list_all 00:13:51.307 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@459 -- # vms=() 00:13:51.307 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@459 -- # local vms 00:13:51.307 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@460 -- # vms=("$VM_DIR"/+([0-9])) 00:13:51.307 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@461 -- # (( 1 > 0 )) 00:13:51.307 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@462 -- # basename --multiple /root/vhost_test/vms/0 00:13:51.307 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@470 -- # for vm in $(vm_list_all) 00:13:51.307 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@471 -- # vm_kill 0 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@435 -- # vm_num_is_valid 0 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@302 -- # return 0 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@436 -- # local vm_dir=/root/vhost_test/vms/0 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@438 -- # [[ ! 
-r /root/vhost_test/vms/0/qemu.pid ]] 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@442 -- # local vm_pid 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@443 -- # cat /root/vhost_test/vms/0/qemu.pid 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@443 -- # vm_pid=2090368 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@445 -- # notice 'Killing virtual machine /root/vhost_test/vms/0 (pid=2090368)' 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'Killing virtual machine /root/vhost_test/vms/0 (pid=2090368)' 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out= 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=2090368)' 00:13:51.308 INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=2090368) 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@447 -- # /bin/kill 2090368 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@448 -- # notice 'process 2090368 killed' 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@94 -- # message INFO 'process 2090368 killed' 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@60 -- # local verbose_out 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@61 -- # false 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@62 -- # verbose_out= 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@69 -- # local msg_type=INFO 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@70 -- # shift 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@71 -- # echo -e 'INFO: process 2090368 killed' 00:13:51.308 INFO: process 2090368 killed 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@449 -- # rm -rf /root/vhost_test/vms/0 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- vhost/common.sh@474 -- # rm -rf /root/vhost_test/vms 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@99 -- # killprocess 2094233 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@950 -- # '[' -z 2094233 ']' 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@954 -- # kill -0 2094233 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@955 -- # uname 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:51.308 00:21:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2094233 00:13:51.601 00:21:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:51.601 00:21:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:51.601 00:21:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2094233' 00:13:51.601 killing process with pid 2094233 00:13:51.601 00:21:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@969 -- # kill 2094233 00:13:51.601 00:21:21 sma.sma_vfiouser_qemu -- common/autotest_common.sh@974 -- # wait 2094233 00:13:53.503 00:21:24 sma.sma_vfiouser_qemu -- 
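killprocess closes out every scenario in these tests; its traced internals condense to roughly the following (the real helper in autotest_common.sh also special-cases sudo-owned processes, which is what the '[' reactor_0 = sudo ']' comparison above is doing):

killprocess() {
  local pid=$1
  [[ -n $pid ]] || return 1
  kill -0 "$pid" 2> /dev/null || return 1      # still running?
  local pname
  pname=$(ps --no-headers -o comm= "$pid")     # reactor_0, python3, ...
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" || true                          # reap if it is a child of this shell
}
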
sma/vfiouser_qemu.sh@100 -- # killprocess 2094481 00:13:53.503 00:21:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@950 -- # '[' -z 2094481 ']' 00:13:53.503 00:21:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@954 -- # kill -0 2094481 00:13:53.503 00:21:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@955 -- # uname 00:13:53.503 00:21:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.503 00:21:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2094481 00:13:53.503 00:21:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@956 -- # process_name=python3 00:13:53.503 00:21:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:13:53.503 00:21:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2094481' 00:13:53.503 killing process with pid 2094481 00:13:53.503 00:21:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@969 -- # kill 2094481 00:13:53.503 00:21:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@974 -- # wait 2094481 00:13:53.761 00:21:24 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@101 -- # '[' -e /tmp/sma/vfio-user/qemu ']' 00:13:53.761 00:21:24 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@101 -- # rm -rf /tmp/sma/vfio-user/qemu 00:13:53.761 00:21:24 sma.sma_vfiouser_qemu -- sma/vfiouser_qemu.sh@374 -- # trap - SIGINT SIGTERM EXIT 00:13:53.761 00:13:53.761 real 0m50.909s 00:13:53.761 user 0m37.200s 00:13:53.761 sys 0m3.747s 00:13:53.761 00:21:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.761 00:21:24 sma.sma_vfiouser_qemu -- common/autotest_common.sh@10 -- # set +x 00:13:53.761 ************************************ 00:13:53.761 END TEST sma_vfiouser_qemu 00:13:53.761 ************************************ 00:13:53.761 00:21:24 sma -- sma/sma.sh@13 -- # run_test sma_plugins /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins.sh 00:13:53.761 00:21:24 sma -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:53.761 00:21:24 sma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:53.761 00:21:24 sma -- common/autotest_common.sh@10 -- # set +x 00:13:53.761 ************************************ 00:13:53.761 START TEST sma_plugins 00:13:53.761 ************************************ 00:13:53.761 00:21:24 sma.sma_plugins -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins.sh 00:13:53.761 * Looking for test storage... 
00:13:53.761 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma 00:13:53.761 00:21:24 sma.sma_plugins -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:53.761 00:21:24 sma.sma_plugins -- common/autotest_common.sh@1681 -- # lcov --version 00:13:53.761 00:21:24 sma.sma_plugins -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:53.761 00:21:24 sma.sma_plugins -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@336 -- # IFS=.-: 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@336 -- # read -ra ver1 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@337 -- # IFS=.-: 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@337 -- # read -ra ver2 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@338 -- # local 'op=<' 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@340 -- # ver1_l=2 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@341 -- # ver2_l=1 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@344 -- # case "$op" in 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@345 -- # : 1 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@365 -- # decimal 1 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@353 -- # local d=1 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@355 -- # echo 1 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@365 -- # ver1[v]=1 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@366 -- # decimal 2 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@353 -- # local d=2 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@355 -- # echo 2 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@366 -- # ver2[v]=2 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:53.762 00:21:24 sma.sma_plugins -- scripts/common.sh@368 -- # return 0 00:13:53.762 00:21:24 sma.sma_plugins -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:53.762 00:21:24 sma.sma_plugins -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:53.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.762 --rc genhtml_branch_coverage=1 00:13:53.762 --rc genhtml_function_coverage=1 00:13:53.762 --rc genhtml_legend=1 00:13:53.762 --rc geninfo_all_blocks=1 00:13:53.762 --rc geninfo_unexecuted_blocks=1 00:13:53.762 00:13:53.762 ' 00:13:53.762 00:21:24 sma.sma_plugins -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:53.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.762 --rc genhtml_branch_coverage=1 00:13:53.762 --rc 
genhtml_function_coverage=1 00:13:53.762 --rc genhtml_legend=1 00:13:53.762 --rc geninfo_all_blocks=1 00:13:53.762 --rc geninfo_unexecuted_blocks=1 00:13:53.762 00:13:53.762 ' 00:13:53.762 00:21:24 sma.sma_plugins -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:53.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.762 --rc genhtml_branch_coverage=1 00:13:53.762 --rc genhtml_function_coverage=1 00:13:53.762 --rc genhtml_legend=1 00:13:53.762 --rc geninfo_all_blocks=1 00:13:53.762 --rc geninfo_unexecuted_blocks=1 00:13:53.762 00:13:53.762 ' 00:13:53.762 00:21:24 sma.sma_plugins -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:53.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.762 --rc genhtml_branch_coverage=1 00:13:53.762 --rc genhtml_function_coverage=1 00:13:53.762 --rc genhtml_legend=1 00:13:53.762 --rc geninfo_all_blocks=1 00:13:53.762 --rc geninfo_unexecuted_blocks=1 00:13:53.762 00:13:53.762 ' 00:13:53.762 00:21:24 sma.sma_plugins -- sma/plugins.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh 00:13:53.762 00:21:24 sma.sma_plugins -- sma/plugins.sh@28 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:53.762 00:21:24 sma.sma_plugins -- sma/plugins.sh@31 -- # tgtpid=2100295 00:13:53.762 00:21:24 sma.sma_plugins -- sma/plugins.sh@30 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt 00:13:53.762 00:21:24 sma.sma_plugins -- sma/plugins.sh@43 -- # smapid=2100296 00:13:53.762 00:21:24 sma.sma_plugins -- sma/plugins.sh@45 -- # sma_waitforlisten 00:13:53.762 00:21:24 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:13:53.762 00:21:24 sma.sma_plugins -- sma/plugins.sh@34 -- # cat 00:13:53.762 00:21:24 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080 00:13:53.762 00:21:24 sma.sma_plugins -- sma/plugins.sh@34 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins 00:13:53.762 00:21:24 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 )) 00:13:53.762 00:21:24 sma.sma_plugins -- sma/plugins.sh@34 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:13:53.762 00:21:24 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:13:53.762 00:21:24 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:13:54.021 00:21:24 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s 00:13:54.021 [2024-10-09 00:21:24.472846] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
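sma_waitforlisten is the readiness gate between launching sma.py and issuing requests; the loop is fully visible in the trace and condenses to:

sma_waitforlisten() {
  local sma_addr=${1:-127.0.0.1} sma_port=${2:-8080} i
  for ((i = 0; i < 5; i++)); do
    nc -z "$sma_addr" "$sma_port" && return 0   # port open, SMA is up
    sleep 1s
  done
  return 1
}
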
00:13:54.021 [2024-10-09 00:21:24.472935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2100295 ] 00:13:54.021 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.021 [2024-10-09 00:21:24.577842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.280 [2024-10-09 00:21:24.761893] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.848 00:21:25 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ )) 00:13:54.848 00:21:25 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:13:54.848 00:21:25 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:13:54.848 00:21:25 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s 00:13:55.106 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:55.106 I0000 00:00:1728426085.586691 2100296 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:56.042 00:21:26 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ )) 00:13:56.042 00:21:26 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:13:56.042 00:21:26 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:13:56.042 00:21:26 sma.sma_plugins -- sma/common.sh@12 -- # return 0 00:13:56.042 00:21:26 sma.sma_plugins -- sma/plugins.sh@47 -- # create_device nvme 00:13:56.042 00:21:26 sma.sma_plugins -- sma/plugins.sh@47 -- # jq -r .handle 00:13:56.042 00:21:26 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:56.042 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:56.042 I0000 00:00:1728426086.673027 2100760 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:56.042 I0000 00:00:1728426086.674572 2100760 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:56.301 00:21:26 sma.sma_plugins -- sma/plugins.sh@47 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]] 00:13:56.301 00:21:26 sma.sma_plugins -- sma/plugins.sh@48 -- # create_device nvmf_tcp 00:13:56.301 00:21:26 sma.sma_plugins -- sma/plugins.sh@48 -- # jq -r .handle 00:13:56.301 00:21:26 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:56.301 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:56.301 I0000 00:00:1728426086.878712 2100806 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:56.301 I0000 00:00:1728426086.880205 2100806 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:56.301 00:21:26 sma.sma_plugins -- sma/plugins.sh@48 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]] 00:13:56.301 00:21:26 
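Each plugins.sh scenario pairs a fresh spdk_tgt with an SMA instance whose config arrives over /dev/fd/63 from the cat traced above. The config body itself never appears in the log, so the YAML below is purely illustrative (keys assumed from scripts/sma.py conventions):

./build/bin/spdk_tgt &
tgtpid=$!

PYTHONPATH=test/sma/plugins ./scripts/sma.py -c <(cat <<'EOF'
address: 127.0.0.1
port: 8080
EOF
) &
smapid=$!

sma_waitforlisten
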
sma.sma_plugins -- sma/plugins.sh@50 -- # killprocess 2100296 00:13:56.560 00:21:26 sma.sma_plugins -- common/autotest_common.sh@950 -- # '[' -z 2100296 ']' 00:13:56.560 00:21:26 sma.sma_plugins -- common/autotest_common.sh@954 -- # kill -0 2100296 00:13:56.560 00:21:26 sma.sma_plugins -- common/autotest_common.sh@955 -- # uname 00:13:56.560 00:21:26 sma.sma_plugins -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:56.560 00:21:26 sma.sma_plugins -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2100296 00:13:56.561 00:21:26 sma.sma_plugins -- common/autotest_common.sh@956 -- # process_name=python3 00:13:56.561 00:21:26 sma.sma_plugins -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:13:56.561 00:21:26 sma.sma_plugins -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2100296' 00:13:56.561 killing process with pid 2100296 00:13:56.561 00:21:26 sma.sma_plugins -- common/autotest_common.sh@969 -- # kill 2100296 00:13:56.561 00:21:26 sma.sma_plugins -- common/autotest_common.sh@974 -- # wait 2100296 00:13:56.561 00:21:27 sma.sma_plugins -- sma/plugins.sh@61 -- # smapid=2100832 00:13:56.561 00:21:27 sma.sma_plugins -- sma/plugins.sh@62 -- # sma_waitforlisten 00:13:56.561 00:21:27 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:13:56.561 00:21:27 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080 00:13:56.561 00:21:27 sma.sma_plugins -- sma/plugins.sh@53 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins 00:13:56.561 00:21:27 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 )) 00:13:56.561 00:21:27 sma.sma_plugins -- sma/plugins.sh@53 -- # cat 00:13:56.561 00:21:27 sma.sma_plugins -- sma/plugins.sh@53 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:13:56.561 00:21:27 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:13:56.561 00:21:27 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:13:56.561 00:21:27 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s 00:13:56.820 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:56.820 I0000 00:00:1728426087.218669 2100832 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:57.757 00:21:28 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ )) 00:13:57.757 00:21:28 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:13:57.757 00:21:28 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:13:57.757 00:21:28 sma.sma_plugins -- sma/common.sh@12 -- # return 0 00:13:57.757 00:21:28 sma.sma_plugins -- sma/plugins.sh@64 -- # create_device nvmf_tcp 00:13:57.757 00:21:28 sma.sma_plugins -- sma/plugins.sh@64 -- # jq -r .handle 00:13:57.757 00:21:28 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:57.757 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:57.757 I0000 00:00:1728426088.285972 2101087 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:57.757 I0000 00:00:1728426088.287482 2101087 http_proxy_mapper.cc:252] not using proxy for host in no_proxy 
list 'dns:///localhost:8080' 00:13:57.757 00:21:28 sma.sma_plugins -- sma/plugins.sh@64 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]] 00:13:57.757 00:21:28 sma.sma_plugins -- sma/plugins.sh@65 -- # NOT create_device nvme 00:13:57.757 00:21:28 sma.sma_plugins -- common/autotest_common.sh@650 -- # local es=0 00:13:57.757 00:21:28 sma.sma_plugins -- common/autotest_common.sh@652 -- # valid_exec_arg create_device nvme 00:13:57.757 00:21:28 sma.sma_plugins -- common/autotest_common.sh@638 -- # local arg=create_device 00:13:57.757 00:21:28 sma.sma_plugins -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.757 00:21:28 sma.sma_plugins -- common/autotest_common.sh@642 -- # type -t create_device 00:13:57.757 00:21:28 sma.sma_plugins -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.757 00:21:28 sma.sma_plugins -- common/autotest_common.sh@653 -- # create_device nvme 00:13:57.757 00:21:28 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:58.016 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:58.016 I0000 00:00:1728426088.478718 2101110 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:58.016 I0000 00:00:1728426088.480190 2101110 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:58.016 Traceback (most recent call last): 00:13:58.016 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module> 00:13:58.016 main(sys.argv[1:]) 00:13:58.016 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:13:58.016 result = client.call(request['method'], request.get('params', {})) 00:13:58.016 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:13:58.016 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:13:58.016 response = func(request=json_format.ParseDict(params, input())) 00:13:58.016 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:13:58.016 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:13:58.016 return _end_unary_response_blocking(state, call, False, None) 00:13:58.016 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:13:58.016 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:13:58.016 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:13:58.016 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:13:58.016 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:13:58.016 status = StatusCode.INVALID_ARGUMENT 00:13:58.016 details = "Unsupported device type" 00:13:58.016 debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-10-09T00:21:28.482310015+02:00", grpc_status:3, grpc_message:"Unsupported device type"}" 00:13:58.016 > 00:13:58.016 00:21:28 sma.sma_plugins -- common/autotest_common.sh@653 -- # es=1 00:13:58.016 00:21:28 sma.sma_plugins -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:58.016 00:21:28 sma.sma_plugins -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:58.016 00:21:28 sma.sma_plugins -- 
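The traceback above is expected: this SMA instance only loads plugin device managers, so a bare 'nvme' create is rejected with INVALID_ARGUMENT, and the NOT wrapper inverts that failure into a pass. Condensed from the traced autotest_common.sh logic (the real helper also distinguishes signal exits via the es > 128 check):

NOT() {
  # succeed only when the wrapped command fails
  local es=0
  "$@" || es=$?
  (( es != 0 ))
}

NOT create_device nvme   # passes here: "Unsupported device type"
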
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:58.016 00:21:28 sma.sma_plugins -- sma/plugins.sh@67 -- # killprocess 2100832 00:13:58.016 00:21:28 sma.sma_plugins -- common/autotest_common.sh@950 -- # '[' -z 2100832 ']' 00:13:58.016 00:21:28 sma.sma_plugins -- common/autotest_common.sh@954 -- # kill -0 2100832 00:13:58.016 00:21:28 sma.sma_plugins -- common/autotest_common.sh@955 -- # uname 00:13:58.016 00:21:28 sma.sma_plugins -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:58.016 00:21:28 sma.sma_plugins -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2100832 00:13:58.016 00:21:28 sma.sma_plugins -- common/autotest_common.sh@956 -- # process_name=python3 00:13:58.016 00:21:28 sma.sma_plugins -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:13:58.016 00:21:28 sma.sma_plugins -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2100832' 00:13:58.016 killing process with pid 2100832 00:13:58.016 00:21:28 sma.sma_plugins -- common/autotest_common.sh@969 -- # kill 2100832 00:13:58.016 00:21:28 sma.sma_plugins -- common/autotest_common.sh@974 -- # wait 2100832 00:13:58.016 00:21:28 sma.sma_plugins -- sma/plugins.sh@80 -- # smapid=2101139 00:13:58.016 00:21:28 sma.sma_plugins -- sma/plugins.sh@81 -- # sma_waitforlisten 00:13:58.016 00:21:28 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:13:58.016 00:21:28 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080 00:13:58.016 00:21:28 sma.sma_plugins -- sma/plugins.sh@70 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins 00:13:58.016 00:21:28 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 )) 00:13:58.016 00:21:28 sma.sma_plugins -- sma/plugins.sh@70 -- # cat 00:13:58.016 00:21:28 sma.sma_plugins -- sma/plugins.sh@70 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:13:58.016 00:21:28 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:13:58.016 00:21:28 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:13:58.016 00:21:28 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s 00:13:58.275 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:58.275 I0000 00:00:1728426088.787157 2101139 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:59.211 00:21:29 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ )) 00:13:59.211 00:21:29 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:13:59.211 00:21:29 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:13:59.211 00:21:29 sma.sma_plugins -- sma/common.sh@12 -- # return 0 00:13:59.211 00:21:29 sma.sma_plugins -- sma/plugins.sh@83 -- # create_device nvme 00:13:59.211 00:21:29 sma.sma_plugins -- sma/plugins.sh@83 -- # jq -r .handle 00:13:59.211 00:21:29 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:59.469 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:59.469 I0000 00:00:1728426089.846251 2101392 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:59.469 I0000 00:00:1728426089.847796 
2101392 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:59.469 00:21:29 sma.sma_plugins -- sma/plugins.sh@83 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]] 00:13:59.469 00:21:29 sma.sma_plugins -- sma/plugins.sh@84 -- # create_device nvmf_tcp 00:13:59.469 00:21:29 sma.sma_plugins -- sma/plugins.sh@84 -- # jq -r .handle 00:13:59.469 00:21:29 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:13:59.469 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:59.469 I0000 00:00:1728426090.056861 2101423 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:13:59.469 I0000 00:00:1728426090.058384 2101423 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:13:59.469 00:21:30 sma.sma_plugins -- sma/plugins.sh@84 -- # [[ nvmf_tcp:plugin1-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\2\:\n\o\p ]] 00:13:59.469 00:21:30 sma.sma_plugins -- sma/plugins.sh@86 -- # killprocess 2101139 00:13:59.469 00:21:30 sma.sma_plugins -- common/autotest_common.sh@950 -- # '[' -z 2101139 ']' 00:13:59.469 00:21:30 sma.sma_plugins -- common/autotest_common.sh@954 -- # kill -0 2101139 00:13:59.469 00:21:30 sma.sma_plugins -- common/autotest_common.sh@955 -- # uname 00:13:59.469 00:21:30 sma.sma_plugins -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.469 00:21:30 sma.sma_plugins -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2101139 00:13:59.727 00:21:30 sma.sma_plugins -- common/autotest_common.sh@956 -- # process_name=python3 00:13:59.727 00:21:30 sma.sma_plugins -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:13:59.727 00:21:30 sma.sma_plugins -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2101139' 00:13:59.727 killing process with pid 2101139 00:13:59.727 00:21:30 sma.sma_plugins -- common/autotest_common.sh@969 -- # kill 2101139 00:13:59.727 00:21:30 sma.sma_plugins -- common/autotest_common.sh@974 -- # wait 2101139 00:13:59.727 00:21:30 sma.sma_plugins -- sma/plugins.sh@99 -- # smapid=2101449 00:13:59.727 00:21:30 sma.sma_plugins -- sma/plugins.sh@100 -- # sma_waitforlisten 00:13:59.727 00:21:30 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:13:59.727 00:21:30 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080 00:13:59.727 00:21:30 sma.sma_plugins -- sma/plugins.sh@89 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins 00:13:59.727 00:21:30 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 )) 00:13:59.727 00:21:30 sma.sma_plugins -- sma/plugins.sh@89 -- # cat 00:13:59.727 00:21:30 sma.sma_plugins -- sma/plugins.sh@89 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:13:59.727 00:21:30 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:13:59.727 00:21:30 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:13:59.727 00:21:30 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s 00:13:59.985 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:13:59.985 I0000 00:00:1728426090.383645 2101449 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, 
event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:00.918 00:21:31 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ )) 00:14:00.918 00:21:31 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:14:00.918 00:21:31 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:14:00.918 00:21:31 sma.sma_plugins -- sma/common.sh@12 -- # return 0 00:14:00.918 00:21:31 sma.sma_plugins -- sma/plugins.sh@102 -- # create_device nvme 00:14:00.918 00:21:31 sma.sma_plugins -- sma/plugins.sh@102 -- # jq -r .handle 00:14:00.918 00:21:31 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:00.918 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:00.918 I0000 00:00:1728426091.432134 2101701 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:00.918 I0000 00:00:1728426091.433697 2101701 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:00.918 00:21:31 sma.sma_plugins -- sma/plugins.sh@102 -- # [[ nvme:plugin2-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\1\:\n\o\p ]] 00:14:00.918 00:21:31 sma.sma_plugins -- sma/plugins.sh@103 -- # create_device nvmf_tcp 00:14:00.918 00:21:31 sma.sma_plugins -- sma/plugins.sh@103 -- # jq -r .handle 00:14:00.918 00:21:31 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:01.176 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:01.176 I0000 00:00:1728426091.644158 2101729 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:01.176 I0000 00:00:1728426091.645637 2101729 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:01.176 00:21:31 sma.sma_plugins -- sma/plugins.sh@103 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]] 00:14:01.176 00:21:31 sma.sma_plugins -- sma/plugins.sh@105 -- # killprocess 2101449 00:14:01.176 00:21:31 sma.sma_plugins -- common/autotest_common.sh@950 -- # '[' -z 2101449 ']' 00:14:01.176 00:21:31 sma.sma_plugins -- common/autotest_common.sh@954 -- # kill -0 2101449 00:14:01.176 00:21:31 sma.sma_plugins -- common/autotest_common.sh@955 -- # uname 00:14:01.176 00:21:31 sma.sma_plugins -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:01.176 00:21:31 sma.sma_plugins -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2101449 00:14:01.176 00:21:31 sma.sma_plugins -- common/autotest_common.sh@956 -- # process_name=python3 00:14:01.176 00:21:31 sma.sma_plugins -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:14:01.176 00:21:31 sma.sma_plugins -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2101449' 00:14:01.176 killing process with pid 2101449 00:14:01.176 00:21:31 sma.sma_plugins -- common/autotest_common.sh@969 -- # kill 2101449 00:14:01.176 00:21:31 sma.sma_plugins -- common/autotest_common.sh@974 -- # wait 2101449 00:14:01.176 00:21:31 
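Every handle asserted in this test follows a colon-separated <protocol>:<device-name>:<crypto-engine> layout, with 'nop' standing in for the no-op crypto engine when none is configured (the crypto-plugin1/crypto-plugin2 scenarios later in the test swap that last field). Splitting one for illustration:

handle='nvme:plugin1-device1:nop'
IFS=: read -r proto name crypto <<< "$handle"
echo "$proto | $name | $crypto"   # nvme | plugin1-device1 | nop
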
sma.sma_plugins -- sma/plugins.sh@118 -- # smapid=2101812 00:14:01.176 00:21:31 sma.sma_plugins -- sma/plugins.sh@119 -- # sma_waitforlisten 00:14:01.176 00:21:31 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:14:01.176 00:21:31 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080 00:14:01.176 00:21:31 sma.sma_plugins -- sma/plugins.sh@108 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins 00:14:01.176 00:21:31 sma.sma_plugins -- sma/plugins.sh@108 -- # cat 00:14:01.176 00:21:31 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 )) 00:14:01.176 00:21:31 sma.sma_plugins -- sma/plugins.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:14:01.176 00:21:31 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:14:01.176 00:21:31 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:14:01.434 00:21:31 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s 00:14:01.434 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:01.434 I0000 00:00:1728426091.996160 2101812 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:02.367 00:21:32 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ )) 00:14:02.367 00:21:32 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:14:02.367 00:21:32 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:14:02.367 00:21:32 sma.sma_plugins -- sma/common.sh@12 -- # return 0 00:14:02.367 00:21:32 sma.sma_plugins -- sma/plugins.sh@121 -- # create_device nvme 00:14:02.367 00:21:32 sma.sma_plugins -- sma/plugins.sh@121 -- # jq -r .handle 00:14:02.367 00:21:32 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:02.626 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:02.626 I0000 00:00:1728426093.051281 2102014 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:02.626 I0000 00:00:1728426093.052765 2102014 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:02.626 00:21:33 sma.sma_plugins -- sma/plugins.sh@121 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]] 00:14:02.626 00:21:33 sma.sma_plugins -- sma/plugins.sh@122 -- # create_device nvmf_tcp 00:14:02.626 00:21:33 sma.sma_plugins -- sma/plugins.sh@122 -- # jq -r .handle 00:14:02.626 00:21:33 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:02.884 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:02.884 I0000 00:00:1728426093.261105 2102053 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:02.884 I0000 00:00:1728426093.262528 2102053 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:02.884 00:21:33 sma.sma_plugins -- sma/plugins.sh@122 -- # [[ 
nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]] 00:14:02.884 00:21:33 sma.sma_plugins -- sma/plugins.sh@124 -- # killprocess 2101812 00:14:02.884 00:21:33 sma.sma_plugins -- common/autotest_common.sh@950 -- # '[' -z 2101812 ']' 00:14:02.884 00:21:33 sma.sma_plugins -- common/autotest_common.sh@954 -- # kill -0 2101812 00:14:02.884 00:21:33 sma.sma_plugins -- common/autotest_common.sh@955 -- # uname 00:14:02.884 00:21:33 sma.sma_plugins -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:02.884 00:21:33 sma.sma_plugins -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2101812 00:14:02.884 00:21:33 sma.sma_plugins -- common/autotest_common.sh@956 -- # process_name=python3 00:14:02.884 00:21:33 sma.sma_plugins -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:14:02.884 00:21:33 sma.sma_plugins -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2101812' 00:14:02.884 killing process with pid 2101812 00:14:02.884 00:21:33 sma.sma_plugins -- common/autotest_common.sh@969 -- # kill 2101812 00:14:02.884 00:21:33 sma.sma_plugins -- common/autotest_common.sh@974 -- # wait 2101812 00:14:02.884 00:21:33 sma.sma_plugins -- sma/plugins.sh@134 -- # smapid=2102248 00:14:02.884 00:21:33 sma.sma_plugins -- sma/plugins.sh@135 -- # sma_waitforlisten 00:14:02.884 00:21:33 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:14:02.884 00:21:33 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080 00:14:02.884 00:21:33 sma.sma_plugins -- sma/plugins.sh@127 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins 00:14:02.884 00:21:33 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 )) 00:14:02.884 00:21:33 sma.sma_plugins -- sma/plugins.sh@127 -- # cat 00:14:02.885 00:21:33 sma.sma_plugins -- sma/plugins.sh@127 -- # SMA_PLUGINS=plugin1:plugin2 00:14:02.885 00:21:33 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:14:02.885 00:21:33 sma.sma_plugins -- sma/plugins.sh@127 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:14:02.885 00:21:33 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:14:02.885 00:21:33 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s 00:14:03.143 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:03.143 I0000 00:00:1728426093.581850 2102248 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:04.077 00:21:34 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ )) 00:14:04.077 00:21:34 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:14:04.077 00:21:34 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:14:04.077 00:21:34 sma.sma_plugins -- sma/common.sh@12 -- # return 0 00:14:04.077 00:21:34 sma.sma_plugins -- sma/plugins.sh@137 -- # create_device nvme 00:14:04.077 00:21:34 sma.sma_plugins -- sma/plugins.sh@137 -- # jq -r .handle 00:14:04.077 00:21:34 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:04.077 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:04.077 I0000 00:00:1728426094.648008 2102328 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, 
monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:04.077 I0000 00:00:1728426094.649500 2102328 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:04.077 00:21:34 sma.sma_plugins -- sma/plugins.sh@137 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]] 00:14:04.077 00:21:34 sma.sma_plugins -- sma/plugins.sh@138 -- # create_device nvmf_tcp 00:14:04.077 00:21:34 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:04.077 00:21:34 sma.sma_plugins -- sma/plugins.sh@138 -- # jq -r .handle 00:14:04.335 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:04.335 I0000 00:00:1728426094.862151 2102476 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:04.335 I0000 00:00:1728426094.863622 2102476 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:04.335 00:21:34 sma.sma_plugins -- sma/plugins.sh@138 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]] 00:14:04.335 00:21:34 sma.sma_plugins -- sma/plugins.sh@140 -- # killprocess 2102248 00:14:04.335 00:21:34 sma.sma_plugins -- common/autotest_common.sh@950 -- # '[' -z 2102248 ']' 00:14:04.335 00:21:34 sma.sma_plugins -- common/autotest_common.sh@954 -- # kill -0 2102248 00:14:04.335 00:21:34 sma.sma_plugins -- common/autotest_common.sh@955 -- # uname 00:14:04.335 00:21:34 sma.sma_plugins -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.335 00:21:34 sma.sma_plugins -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2102248 00:14:04.335 00:21:34 sma.sma_plugins -- common/autotest_common.sh@956 -- # process_name=python3 00:14:04.335 00:21:34 sma.sma_plugins -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:14:04.335 00:21:34 sma.sma_plugins -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2102248' 00:14:04.335 killing process with pid 2102248 00:14:04.335 00:21:34 sma.sma_plugins -- common/autotest_common.sh@969 -- # kill 2102248 00:14:04.335 00:21:34 sma.sma_plugins -- common/autotest_common.sh@974 -- # wait 2102248 00:14:04.595 00:21:34 sma.sma_plugins -- sma/plugins.sh@152 -- # smapid=2102588 00:14:04.595 00:21:34 sma.sma_plugins -- sma/plugins.sh@153 -- # sma_waitforlisten 00:14:04.595 00:21:34 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:14:04.595 00:21:34 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080 00:14:04.595 00:21:34 sma.sma_plugins -- sma/plugins.sh@143 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins 00:14:04.595 00:21:34 sma.sma_plugins -- sma/plugins.sh@143 -- # cat 00:14:04.595 00:21:34 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 )) 00:14:04.595 00:21:34 sma.sma_plugins -- sma/plugins.sh@143 -- # SMA_PLUGINS=plugin1 00:14:04.595 00:21:34 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:14:04.595 00:21:34 sma.sma_plugins -- sma/plugins.sh@143 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:14:04.595 00:21:34 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:14:04.595 00:21:35 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s 
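These scenarios select plugins through the environment rather than the config file: SMA_PLUGINS takes colon-separated module names that must be importable via PYTHONPATH, exactly as traced above (plugin1:plugin2 first, then plugin1 alone). Condensed invocation:

# $config: same style of config as before, just without a plugin list
PYTHONPATH=test/sma/plugins SMA_PLUGINS=plugin1 ./scripts/sma.py -c "$config" &
smapid=$!
sma_waitforlisten
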
00:14:04.595 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:04.595 I0000 00:00:1728426095.194662 2102588 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:05.544 00:21:36 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ )) 00:14:05.544 00:21:36 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:14:05.544 00:21:36 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:14:05.544 00:21:36 sma.sma_plugins -- sma/common.sh@12 -- # return 0 00:14:05.545 00:21:36 sma.sma_plugins -- sma/plugins.sh@155 -- # create_device nvme 00:14:05.545 00:21:36 sma.sma_plugins -- sma/plugins.sh@155 -- # jq -r .handle 00:14:05.545 00:21:36 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:05.808 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:05.808 I0000 00:00:1728426096.265747 2102767 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:05.808 I0000 00:00:1728426096.267207 2102767 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:05.808 00:21:36 sma.sma_plugins -- sma/plugins.sh@155 -- # [[ nvme:plugin1-device1:nop == \n\v\m\e\:\p\l\u\g\i\n\1\-\d\e\v\i\c\e\1\:\n\o\p ]] 00:14:05.808 00:21:36 sma.sma_plugins -- sma/plugins.sh@156 -- # create_device nvmf_tcp 00:14:05.808 00:21:36 sma.sma_plugins -- sma/plugins.sh@156 -- # jq -r .handle 00:14:05.808 00:21:36 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:06.067 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:06.067 I0000 00:00:1728426096.505802 2102869 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:06.067 I0000 00:00:1728426096.507184 2102869 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:06.067 00:21:36 sma.sma_plugins -- sma/plugins.sh@156 -- # [[ nvmf_tcp:plugin2-device2:nop == \n\v\m\f\_\t\c\p\:\p\l\u\g\i\n\2\-\d\e\v\i\c\e\2\:\n\o\p ]] 00:14:06.067 00:21:36 sma.sma_plugins -- sma/plugins.sh@158 -- # killprocess 2102588 00:14:06.067 00:21:36 sma.sma_plugins -- common/autotest_common.sh@950 -- # '[' -z 2102588 ']' 00:14:06.067 00:21:36 sma.sma_plugins -- common/autotest_common.sh@954 -- # kill -0 2102588 00:14:06.067 00:21:36 sma.sma_plugins -- common/autotest_common.sh@955 -- # uname 00:14:06.067 00:21:36 sma.sma_plugins -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.067 00:21:36 sma.sma_plugins -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2102588 00:14:06.067 00:21:36 sma.sma_plugins -- common/autotest_common.sh@956 -- # process_name=python3 00:14:06.067 00:21:36 sma.sma_plugins -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:14:06.067 00:21:36 sma.sma_plugins -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2102588' 00:14:06.067 killing process 
with pid 2102588 00:14:06.067 00:21:36 sma.sma_plugins -- common/autotest_common.sh@969 -- # kill 2102588 00:14:06.067 00:21:36 sma.sma_plugins -- common/autotest_common.sh@974 -- # wait 2102588 00:14:06.067 00:21:36 sma.sma_plugins -- sma/plugins.sh@161 -- # crypto_engines=(crypto-plugin1 crypto-plugin2) 00:14:06.067 00:21:36 sma.sma_plugins -- sma/plugins.sh@162 -- # for crypto in "${crypto_engines[@]}" 00:14:06.067 00:21:36 sma.sma_plugins -- sma/plugins.sh@175 -- # smapid=2102895 00:14:06.067 00:21:36 sma.sma_plugins -- sma/plugins.sh@176 -- # sma_waitforlisten 00:14:06.067 00:21:36 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:14:06.067 00:21:36 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080 00:14:06.067 00:21:36 sma.sma_plugins -- sma/plugins.sh@163 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins 00:14:06.067 00:21:36 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 )) 00:14:06.067 00:21:36 sma.sma_plugins -- sma/plugins.sh@163 -- # cat 00:14:06.067 00:21:36 sma.sma_plugins -- sma/plugins.sh@163 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:14:06.067 00:21:36 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:14:06.067 00:21:36 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:14:06.067 00:21:36 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s 00:14:06.326 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:06.326 I0000 00:00:1728426096.846967 2102895 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:07.261 00:21:37 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ )) 00:14:07.261 00:21:37 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:14:07.261 00:21:37 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:14:07.261 00:21:37 sma.sma_plugins -- sma/common.sh@12 -- # return 0 00:14:07.261 00:21:37 sma.sma_plugins -- sma/plugins.sh@178 -- # create_device nvme 00:14:07.261 00:21:37 sma.sma_plugins -- sma/plugins.sh@178 -- # jq -r .handle 00:14:07.261 00:21:37 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:07.520 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:07.520 I0000 00:00:1728426097.907407 2103150 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:07.520 I0000 00:00:1728426097.908928 2103150 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:07.520 00:21:37 sma.sma_plugins -- sma/plugins.sh@178 -- # [[ nvme:plugin1-device1:crypto-plugin1 == nvme:plugin1-device1:crypto-plugin1 ]] 00:14:07.520 00:21:37 sma.sma_plugins -- sma/plugins.sh@179 -- # create_device nvmf_tcp 00:14:07.520 00:21:37 sma.sma_plugins -- sma/plugins.sh@179 -- # jq -r .handle 00:14:07.520 00:21:37 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:07.520 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:07.520 I0000 00:00:1728426098.141715 2103181 config.cc:230] gRPC experiments 
enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:07.520 I0000 00:00:1728426098.143192 2103181 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:07.778 00:21:38 sma.sma_plugins -- sma/plugins.sh@179 -- # [[ nvmf_tcp:plugin2-device2:crypto-plugin1 == nvmf_tcp:plugin2-device2:crypto-plugin1 ]] 00:14:07.778 00:21:38 sma.sma_plugins -- sma/plugins.sh@181 -- # killprocess 2102895 00:14:07.778 00:21:38 sma.sma_plugins -- common/autotest_common.sh@950 -- # '[' -z 2102895 ']' 00:14:07.778 00:21:38 sma.sma_plugins -- common/autotest_common.sh@954 -- # kill -0 2102895 00:14:07.778 00:21:38 sma.sma_plugins -- common/autotest_common.sh@955 -- # uname 00:14:07.778 00:21:38 sma.sma_plugins -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:07.778 00:21:38 sma.sma_plugins -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2102895 00:14:07.779 00:21:38 sma.sma_plugins -- common/autotest_common.sh@956 -- # process_name=python3 00:14:07.779 00:21:38 sma.sma_plugins -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:14:07.779 00:21:38 sma.sma_plugins -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2102895' 00:14:07.779 killing process with pid 2102895 00:14:07.779 00:21:38 sma.sma_plugins -- common/autotest_common.sh@969 -- # kill 2102895 00:14:07.779 00:21:38 sma.sma_plugins -- common/autotest_common.sh@974 -- # wait 2102895 00:14:07.779 00:21:38 sma.sma_plugins -- sma/plugins.sh@162 -- # for crypto in "${crypto_engines[@]}" 00:14:07.779 00:21:38 sma.sma_plugins -- sma/plugins.sh@175 -- # smapid=2103208 00:14:07.779 00:21:38 sma.sma_plugins -- sma/plugins.sh@176 -- # sma_waitforlisten 00:14:07.779 00:21:38 sma.sma_plugins -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:14:07.779 00:21:38 sma.sma_plugins -- sma/common.sh@8 -- # local sma_port=8080 00:14:07.779 00:21:38 sma.sma_plugins -- sma/plugins.sh@163 -- # PYTHONPATH=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/plugins 00:14:07.779 00:21:38 sma.sma_plugins -- sma/common.sh@10 -- # (( i = 0 )) 00:14:07.779 00:21:38 sma.sma_plugins -- sma/plugins.sh@163 -- # cat 00:14:07.779 00:21:38 sma.sma_plugins -- sma/plugins.sh@163 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:14:07.779 00:21:38 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:14:07.779 00:21:38 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:14:07.779 00:21:38 sma.sma_plugins -- sma/common.sh@14 -- # sleep 1s 00:14:08.038 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:08.038 I0000 00:00:1728426098.479750 2103208 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:08.973 00:21:39 sma.sma_plugins -- sma/common.sh@10 -- # (( i++ )) 00:14:08.973 00:21:39 sma.sma_plugins -- sma/common.sh@10 -- # (( i < 5 )) 00:14:08.973 00:21:39 sma.sma_plugins -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:14:08.973 00:21:39 sma.sma_plugins -- sma/common.sh@12 -- # return 0 00:14:08.973 00:21:39 sma.sma_plugins -- sma/plugins.sh@178 -- # create_device nvme 00:14:08.973 00:21:39 sma.sma_plugins -- sma/plugins.sh@178 -- # jq -r .handle 00:14:08.973 
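From here the create_device assertions rerun once per crypto engine: the returned handle must carry the engine name as its suffix (nvme:plugin1-device1:crypto-plugin1 on this pass, then crypto-plugin2 on the next). A sketch of one iteration, assuming sma-client.py accepts a JSON {method, params} request on stdin (consistent with the client.call(request['method'], request.get('params', {})) frame in the traceback later in this log); the CreateDevice method name and params shape are assumptions, not copied from plugins.sh:

  # Hypothetical shape of the create_device wrapper from plugins.sh.
  create_device() {
      printf '{"method": "CreateDevice", "params": {"%s": {}}}' "$1" \
          | "$rootdir/scripts/sma-client.py"
  }
  handle=$(create_device nvme | jq -r .handle)
  [[ $handle == "nvme:plugin1-device1:$crypto" ]]   # engine name is the suffix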
00:21:39 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:08.973 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:08.973 I0000 00:00:1728426099.546559 2103460 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:08.973 I0000 00:00:1728426099.548134 2103460 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:08.973 00:21:39 sma.sma_plugins -- sma/plugins.sh@178 -- # [[ nvme:plugin1-device1:crypto-plugin2 == nvme:plugin1-device1:crypto-plugin2 ]] 00:14:08.973 00:21:39 sma.sma_plugins -- sma/plugins.sh@179 -- # create_device nvmf_tcp 00:14:08.973 00:21:39 sma.sma_plugins -- sma/plugins.sh@179 -- # jq -r .handle 00:14:08.973 00:21:39 sma.sma_plugins -- sma/plugins.sh@18 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:09.241 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:09.241 I0000 00:00:1728426099.777793 2103488 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:09.241 I0000 00:00:1728426099.779276 2103488 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:09.241 00:21:39 sma.sma_plugins -- sma/plugins.sh@179 -- # [[ nvmf_tcp:plugin2-device2:crypto-plugin2 == nvmf_tcp:plugin2-device2:crypto-plugin2 ]] 00:14:09.241 00:21:39 sma.sma_plugins -- sma/plugins.sh@181 -- # killprocess 2103208 00:14:09.241 00:21:39 sma.sma_plugins -- common/autotest_common.sh@950 -- # '[' -z 2103208 ']' 00:14:09.241 00:21:39 sma.sma_plugins -- common/autotest_common.sh@954 -- # kill -0 2103208 00:14:09.241 00:21:39 sma.sma_plugins -- common/autotest_common.sh@955 -- # uname 00:14:09.241 00:21:39 sma.sma_plugins -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:09.241 00:21:39 sma.sma_plugins -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2103208 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@956 -- # process_name=python3 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2103208' 00:14:09.499 killing process with pid 2103208 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@969 -- # kill 2103208 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@974 -- # wait 2103208 00:14:09.499 00:21:39 sma.sma_plugins -- sma/plugins.sh@184 -- # cleanup 00:14:09.499 00:21:39 sma.sma_plugins -- sma/plugins.sh@13 -- # killprocess 2100295 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@950 -- # '[' -z 2100295 ']' 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@954 -- # kill -0 2100295 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@955 -- # uname 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2100295 00:14:09.499 00:21:39 sma.sma_plugins -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2100295' 00:14:09.499 killing process with pid 2100295 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@969 -- # kill 2100295 00:14:09.499 00:21:39 sma.sma_plugins -- common/autotest_common.sh@974 -- # wait 2100295 00:14:12.030 00:21:42 sma.sma_plugins -- sma/plugins.sh@14 -- # killprocess 2103208 00:14:12.030 00:21:42 sma.sma_plugins -- common/autotest_common.sh@950 -- # '[' -z 2103208 ']' 00:14:12.030 00:21:42 sma.sma_plugins -- common/autotest_common.sh@954 -- # kill -0 2103208 00:14:12.030 /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2103208) - No such process 00:14:12.030 00:21:42 sma.sma_plugins -- common/autotest_common.sh@977 -- # echo 'Process with pid 2103208 is not found' 00:14:12.030 Process with pid 2103208 is not found 00:14:12.030 00:21:42 sma.sma_plugins -- sma/plugins.sh@185 -- # trap - SIGINT SIGTERM EXIT 00:14:12.030 00:14:12.030 real 0m18.251s 00:14:12.030 user 0m24.301s 00:14:12.030 sys 0m1.993s 00:14:12.030 00:21:42 sma.sma_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.030 00:21:42 sma.sma_plugins -- common/autotest_common.sh@10 -- # set +x 00:14:12.030 ************************************ 00:14:12.030 END TEST sma_plugins 00:14:12.030 ************************************ 00:14:12.030 00:21:42 sma -- sma/sma.sh@14 -- # run_test sma_discovery /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/discovery.sh 00:14:12.030 00:21:42 sma -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:12.030 00:21:42 sma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:12.030 00:21:42 sma -- common/autotest_common.sh@10 -- # set +x 00:14:12.030 ************************************ 00:14:12.030 START TEST sma_discovery 00:14:12.030 ************************************ 00:14:12.030 00:21:42 sma.sma_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/discovery.sh 00:14:12.030 * Looking for test storage... 
00:14:12.030 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma 00:14:12.030 00:21:42 sma.sma_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:12.030 00:21:42 sma.sma_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:12.030 00:21:42 sma.sma_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:12.290 00:21:42 sma.sma_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@345 -- # : 1 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@353 -- # local d=1 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@355 -- # echo 1 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@353 -- # local d=2 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@355 -- # echo 2 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.290 00:21:42 sma.sma_discovery -- scripts/common.sh@368 -- # return 0 00:14:12.290 00:21:42 sma.sma_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.290 00:21:42 sma.sma_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:12.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.290 --rc genhtml_branch_coverage=1 00:14:12.290 --rc genhtml_function_coverage=1 00:14:12.290 --rc genhtml_legend=1 00:14:12.290 --rc geninfo_all_blocks=1 00:14:12.290 --rc geninfo_unexecuted_blocks=1 00:14:12.290 00:14:12.290 ' 00:14:12.290 00:21:42 sma.sma_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:12.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:14:12.290 --rc genhtml_branch_coverage=1 00:14:12.290 --rc genhtml_function_coverage=1 00:14:12.290 --rc genhtml_legend=1 00:14:12.290 --rc geninfo_all_blocks=1 00:14:12.290 --rc geninfo_unexecuted_blocks=1 00:14:12.290 00:14:12.290 ' 00:14:12.290 00:21:42 sma.sma_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:12.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.290 --rc genhtml_branch_coverage=1 00:14:12.290 --rc genhtml_function_coverage=1 00:14:12.290 --rc genhtml_legend=1 00:14:12.290 --rc geninfo_all_blocks=1 00:14:12.290 --rc geninfo_unexecuted_blocks=1 00:14:12.290 00:14:12.290 ' 00:14:12.290 00:21:42 sma.sma_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:12.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.290 --rc genhtml_branch_coverage=1 00:14:12.290 --rc genhtml_function_coverage=1 00:14:12.290 --rc genhtml_legend=1 00:14:12.290 --rc geninfo_all_blocks=1 00:14:12.290 --rc geninfo_unexecuted_blocks=1 00:14:12.290 00:14:12.290 ' 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@12 -- # sma_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@13 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@15 -- # t1sock=/var/tmp/spdk.sock1 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@16 -- # t2sock=/var/tmp/spdk.sock2 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@17 -- # invalid_port=8008 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@18 -- # t1dscport=8009 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@19 -- # t2dscport1=8010 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@20 -- # t2dscport2=8011 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@21 -- # t1nqn=nqn.2016-06.io.spdk:node1 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@22 -- # t2nqn=nqn.2016-06.io.spdk:node2 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@24 -- # cleanup_period=1 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@132 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@136 -- # t1pid=2104026 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@135 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock1 -m 0x1 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@138 -- # t2pid=2104027 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@137 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock2 -m 0x2 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@142 -- # tgtpid=2104028 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@141 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x4 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@153 -- # smapid=2104029 00:14:12.290 00:21:42 sma.sma_discovery -- sma/discovery.sh@155 -- # waitforlisten 2104028 00:14:12.291 00:21:42 sma.sma_discovery -- common/autotest_common.sh@831 -- # '[' -z 2104028 ']' 00:14:12.291 00:21:42 sma.sma_discovery -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.291 00:21:42 sma.sma_discovery -- sma/discovery.sh@145 -- # cat 00:14:12.291 00:21:42 sma.sma_discovery -- sma/discovery.sh@145 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:14:12.291 00:21:42 sma.sma_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.291 00:21:42 sma.sma_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.291 00:21:42 sma.sma_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.291 00:21:42 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:12.291 [2024-10-09 00:21:42.782355] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:14:12.291 [2024-10-09 00:21:42.782447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104028 ] 00:14:12.291 [2024-10-09 00:21:42.786451] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:14:12.291 [2024-10-09 00:21:42.786457] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:14:12.291 [2024-10-09 00:21:42.786545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104026 ] 00:14:12.291 [2024-10-09 00:21:42.786546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104027 ] 00:14:12.291 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.291 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.291 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.291 [2024-10-09 00:21:42.906298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.291 [2024-10-09 00:21:42.916933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.291 [2024-10-09 00:21:42.916946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.548 [2024-10-09 00:21:43.117578] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.548 [2024-10-09 00:21:43.130243] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.548 [2024-10-09 00:21:43.141985] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.485 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:13.485 I0000 00:00:1728426103.962788 2104029 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:13.485 00:21:43 sma.sma_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.485 00:21:43 sma.sma_discovery -- common/autotest_common.sh@864 -- # return 0 00:14:13.485 00:21:43 sma.sma_discovery -- sma/discovery.sh@156 -- #
waitforlisten 2104026 /var/tmp/spdk.sock1 00:14:13.485 00:21:43 sma.sma_discovery -- common/autotest_common.sh@831 -- # '[' -z 2104026 ']' 00:14:13.485 00:21:43 sma.sma_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock1 00:14:13.485 00:21:43 sma.sma_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:13.485 00:21:43 sma.sma_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock1...' 00:14:13.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock1... 00:14:13.485 [2024-10-09 00:21:43.975663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.485 00:21:43 sma.sma_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:13.485 00:21:43 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.743 00:21:44 sma.sma_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.743 00:21:44 sma.sma_discovery -- common/autotest_common.sh@864 -- # return 0 00:14:13.743 00:21:44 sma.sma_discovery -- sma/discovery.sh@157 -- # waitforlisten 2104027 /var/tmp/spdk.sock2 00:14:13.743 00:21:44 sma.sma_discovery -- common/autotest_common.sh@831 -- # '[' -z 2104027 ']' 00:14:13.743 00:21:44 sma.sma_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock2 00:14:13.743 00:21:44 sma.sma_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:13.743 00:21:44 sma.sma_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...' 00:14:13.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2... 
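At this point discovery.sh has launched two spdk_tgt instances on /var/tmp/spdk.sock1 and /var/tmp/spdk.sock2, a third target on the default socket, and sma.py, and blocks on each RPC socket in turn. The waitforlisten helper is roughly this shape (a sketch, not the actual autotest_common.sh implementation; spdk_get_version is just a cheap liveness RPC):

  waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      local i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1    # target died while starting
          "$rootdir/scripts/rpc.py" -s "$sock" spdk_get_version &>/dev/null && return 0
          sleep 0.1
      done
      return 1                                      # never came up
  }
  waitforlisten "$t1pid" /var/tmp/spdk.sock1
  waitforlisten "$t2pid" /var/tmp/spdk.sock2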
00:14:13.743 00:21:44 sma.sma_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:13.743 00:21:44 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:13.743 00:21:44 sma.sma_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.743 00:21:44 sma.sma_discovery -- common/autotest_common.sh@864 -- # return 0 00:14:14.000 00:21:44 sma.sma_discovery -- sma/discovery.sh@162 -- # uuidgen 00:14:14.001 00:21:44 sma.sma_discovery -- sma/discovery.sh@162 -- # t1uuid=19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:14.001 00:21:44 sma.sma_discovery -- sma/discovery.sh@163 -- # uuidgen 00:14:14.001 00:21:44 sma.sma_discovery -- sma/discovery.sh@163 -- # t2uuid=a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:14.001 00:21:44 sma.sma_discovery -- sma/discovery.sh@164 -- # uuidgen 00:14:14.001 00:21:44 sma.sma_discovery -- sma/discovery.sh@164 -- # t2uuid2=39666654-2047-4fce-a292-09d39833e871 00:14:14.001 00:21:44 sma.sma_discovery -- sma/discovery.sh@166 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1 00:14:14.001 [2024-10-09 00:21:44.566989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.001 [2024-10-09 00:21:44.607354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:14:14.001 [2024-10-09 00:21:44.615204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8009 *** 00:14:14.001 null0 00:14:14.259 00:21:44 sma.sma_discovery -- sma/discovery.sh@176 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 00:14:14.259 [2024-10-09 00:21:44.809619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.259 [2024-10-09 00:21:44.865971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:14:14.259 [2024-10-09 00:21:44.873916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8010 *** 00:14:14.259 [2024-10-09 00:21:44.881896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8011 *** 00:14:14.259 null0 00:14:14.259 null1 00:14:14.532 00:21:44 sma.sma_discovery -- sma/discovery.sh@190 -- # sma_waitforlisten 00:14:14.532 00:21:44 sma.sma_discovery -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:14:14.532 00:21:44 sma.sma_discovery -- sma/common.sh@8 -- # local sma_port=8080 00:14:14.532 00:21:44 sma.sma_discovery -- sma/common.sh@10 -- # (( i = 0 )) 00:14:14.532 00:21:44 sma.sma_discovery -- sma/common.sh@10 -- # (( i < 5 )) 00:14:14.532 00:21:44 sma.sma_discovery -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:14:14.532 00:21:44 sma.sma_discovery -- sma/common.sh@12 -- # return 0 00:14:14.532 00:21:44 sma.sma_discovery -- sma/discovery.sh@192 -- # localnqn=nqn.2016-06.io.spdk:local0 00:14:14.532 00:21:44 sma.sma_discovery -- sma/discovery.sh@195 -- # create_device nqn.2016-06.io.spdk:local0 00:14:14.532 00:21:44 sma.sma_discovery -- sma/discovery.sh@195 -- # jq -r .handle 00:14:14.532 00:21:44 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0 00:14:14.532 00:21:44 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id= 00:14:14.532 00:21:44 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume= 00:14:14.532 00:21:44 sma.sma_discovery -- sma/discovery.sh@73 -- # shift 00:14:14.532 00:21:44 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n '' ]] 00:14:14.532 00:21:44 sma.sma_discovery -- 
sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:14.532 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:14.532 I0000 00:00:1728426105.133952 2104517 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:14.532 I0000 00:00:1728426105.135391 2104517 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:14.532 [2024-10-09 00:21:45.155275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:14:14.793 00:21:45 sma.sma_discovery -- sma/discovery.sh@195 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:14.793 00:21:45 sma.sma_discovery -- sma/discovery.sh@198 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:14.793 [ 00:14:14.793 { 00:14:14.793 "nqn": "nqn.2016-06.io.spdk:local0", 00:14:14.793 "subtype": "NVMe", 00:14:14.793 "listen_addresses": [ 00:14:14.793 { 00:14:14.793 "trtype": "TCP", 00:14:14.793 "adrfam": "IPv4", 00:14:14.793 "traddr": "127.0.0.1", 00:14:14.793 "trsvcid": "4419" 00:14:14.793 } 00:14:14.793 ], 00:14:14.793 "allow_any_host": false, 00:14:14.793 "hosts": [], 00:14:14.793 "serial_number": "00000000000000000000", 00:14:14.793 "model_number": "SPDK bdev Controller", 00:14:14.793 "max_namespaces": 32, 00:14:14.793 "min_cntlid": 1, 00:14:14.793 "max_cntlid": 65519, 00:14:14.793 "namespaces": [] 00:14:14.793 } 00:14:14.793 ] 00:14:14.793 00:21:45 sma.sma_discovery -- sma/discovery.sh@201 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 19c9068a-e4e4-4460-91b7-2110bcf6ed49 8009 8010 00:14:14.793 00:21:45 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:14.793 00:21:45 sma.sma_discovery -- sma/discovery.sh@108 -- # shift 00:14:14.793 00:21:45 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:14.793 00:21:45 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 19c9068a-e4e4-4460-91b7-2110bcf6ed49 8009 8010 00:14:14.793 00:21:45 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:14.793 00:21:45 sma.sma_discovery -- sma/discovery.sh@51 -- # shift 00:14:14.793 00:21:45 sma.sma_discovery -- sma/discovery.sh@53 -- # cat 00:14:14.793 00:21:45 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:14.793 00:21:45 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:15.051 00:21:45 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010 00:14:15.051 00:21:45 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010') 00:14:15.051 00:21:45 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps 00:14:15.051 00:21:45 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 )) 00:14:15.051 00:21:45 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 )) 00:14:15.051 00:21:45 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:15.051 00:21:45 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 )) 00:14:15.051 00:21:45 sma.sma_discovery -- sma/discovery.sh@44 -- # echo , 00:14:15.051 00:21:45 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:15.051 00:21:45 
sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 )) 00:14:15.051 00:21:45 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:15.051 00:21:45 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 )) 00:14:15.051 00:21:45 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:15.051 00:21:45 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 )) 00:14:15.051 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:15.051 I0000 00:00:1728426105.625588 2104546 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:15.051 I0000 00:00:1728426105.627164 2104546 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:17.579 {} 00:14:17.579 00:21:47 sma.sma_discovery -- sma/discovery.sh@204 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:17.579 00:21:47 sma.sma_discovery -- sma/discovery.sh@204 -- # jq -r '. | length' 00:14:17.579 00:21:48 sma.sma_discovery -- sma/discovery.sh@204 -- # [[ 2 -eq 2 ]] 00:14:17.579 00:21:48 sma.sma_discovery -- sma/discovery.sh@206 -- # grep 8009 00:14:17.579 00:21:48 sma.sma_discovery -- sma/discovery.sh@206 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:17.579 00:21:48 sma.sma_discovery -- sma/discovery.sh@206 -- # jq -r '.[].trid.trsvcid' 00:14:17.838 8009 00:14:17.838 00:21:48 sma.sma_discovery -- sma/discovery.sh@207 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:17.838 00:21:48 sma.sma_discovery -- sma/discovery.sh@207 -- # jq -r '.[].trid.trsvcid' 00:14:17.838 00:21:48 sma.sma_discovery -- sma/discovery.sh@207 -- # grep 8010 00:14:18.096 8010 00:14:18.096 00:21:48 sma.sma_discovery -- sma/discovery.sh@210 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:18.096 00:21:48 sma.sma_discovery -- sma/discovery.sh@210 -- # jq -r '.[].namespaces | length' 00:14:18.096 00:21:48 sma.sma_discovery -- sma/discovery.sh@210 -- # [[ 1 -eq 1 ]] 00:14:18.096 00:21:48 sma.sma_discovery -- sma/discovery.sh@211 -- # jq -r '.[].namespaces[0].uuid' 00:14:18.096 00:21:48 sma.sma_discovery -- sma/discovery.sh@211 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@211 -- # [[ 19c9068a-e4e4-4460-91b7-2110bcf6ed49 == \1\9\c\9\0\6\8\a\-\e\4\e\4\-\4\4\6\0\-\9\1\b\7\-\2\1\1\0\b\c\f\6\e\d\4\9 ]] 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@214 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 8010 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@108 -- # shift 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 8010 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:18.354 
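The assertions around this point all lean on two RPCs: bdev_nvme_get_discovery_info to confirm which discovery portals (8009 on the first target, 8010 on the second) are connected, and nvmf_get_subsystems to confirm which volume UUIDs landed in the local subsystem's namespace list. Condensed into one place (the -eq counts are this test's expectations; $t1uuid is the uuidgen value assigned earlier):

  rpc="$rootdir/scripts/rpc.py"
  # one discovery service per connected target
  [[ $($rpc bdev_nvme_get_discovery_info | jq -r '. | length') -eq 2 ]]
  $rpc bdev_nvme_get_discovery_info | jq -r '.[].trid.trsvcid' | grep -q 8009
  $rpc bdev_nvme_get_discovery_info | jq -r '.[].trid.trsvcid' | grep -q 8010
  # each attached volume shows up as a namespace of nqn.2016-06.io.spdk:local0
  $rpc nvmf_get_subsystems nqn.2016-06.io.spdk:local0 \
      | jq -r '.[].namespaces[].uuid' | grep -q "$t1uuid"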
00:21:48 sma.sma_discovery -- sma/discovery.sh@51 -- # shift 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@53 -- # cat 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:18.354 00:21:48 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010') 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 )) 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 )) 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:18.354 00:21:48 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:18.612 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:18.612 I0000 00:00:1728426109.101820 2105257 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:18.612 I0000 00:00:1728426109.103273 2105257 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:18.612 {} 00:14:18.612 00:21:49 sma.sma_discovery -- sma/discovery.sh@217 -- # jq -r '. | length' 00:14:18.612 00:21:49 sma.sma_discovery -- sma/discovery.sh@217 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:18.870 00:21:49 sma.sma_discovery -- sma/discovery.sh@217 -- # [[ 2 -eq 2 ]] 00:14:18.870 00:21:49 sma.sma_discovery -- sma/discovery.sh@218 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:18.870 00:21:49 sma.sma_discovery -- sma/discovery.sh@218 -- # jq -r '.[].namespaces | length' 00:14:19.128 00:21:49 sma.sma_discovery -- sma/discovery.sh@218 -- # [[ 2 -eq 2 ]] 00:14:19.128 00:21:49 sma.sma_discovery -- sma/discovery.sh@219 -- # grep 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:19.128 00:21:49 sma.sma_discovery -- sma/discovery.sh@219 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:19.128 00:21:49 sma.sma_discovery -- sma/discovery.sh@219 -- # jq -r '.[].namespaces[].uuid' 00:14:19.385 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:19.385 00:21:49 sma.sma_discovery -- sma/discovery.sh@220 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:19.385 00:21:49 sma.sma_discovery -- sma/discovery.sh@220 -- # jq -r '.[].namespaces[].uuid' 00:14:19.385 00:21:49 sma.sma_discovery -- sma/discovery.sh@220 -- # grep a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:19.385 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:19.385 00:21:49 sma.sma_discovery -- sma/discovery.sh@223 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:19.385 00:21:49 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:19.385 00:21:49 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 
19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:19.385 00:21:49 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:19.643 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:19.643 I0000 00:00:1728426110.218110 2105512 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:19.643 I0000 00:00:1728426110.219583 2105512 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:19.643 {} 00:14:19.903 00:21:50 sma.sma_discovery -- sma/discovery.sh@227 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:19.903 00:21:50 sma.sma_discovery -- sma/discovery.sh@227 -- # jq -r '. | length' 00:14:19.903 00:21:50 sma.sma_discovery -- sma/discovery.sh@227 -- # [[ 1 -eq 1 ]] 00:14:19.903 00:21:50 sma.sma_discovery -- sma/discovery.sh@228 -- # grep 8010 00:14:19.903 00:21:50 sma.sma_discovery -- sma/discovery.sh@228 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:19.903 00:21:50 sma.sma_discovery -- sma/discovery.sh@228 -- # jq -r '.[].trid.trsvcid' 00:14:20.166 8010 00:14:20.166 00:21:50 sma.sma_discovery -- sma/discovery.sh@230 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:20.166 00:21:50 sma.sma_discovery -- sma/discovery.sh@230 -- # jq -r '.[].namespaces | length' 00:14:20.424 00:21:50 sma.sma_discovery -- sma/discovery.sh@230 -- # [[ 1 -eq 1 ]] 00:14:20.424 00:21:50 sma.sma_discovery -- sma/discovery.sh@231 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:20.424 00:21:50 sma.sma_discovery -- sma/discovery.sh@231 -- # jq -r '.[].namespaces[0].uuid' 00:14:20.681 00:21:51 sma.sma_discovery -- sma/discovery.sh@231 -- # [[ a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 == \a\6\8\7\8\2\f\9\-\0\6\1\e\-\4\a\4\2\-\a\5\a\d\-\f\f\2\c\4\f\4\b\9\0\b\6 ]] 00:14:20.681 00:21:51 sma.sma_discovery -- sma/discovery.sh@234 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:20.681 00:21:51 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:20.681 00:21:51 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:20.681 00:21:51 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:20.681 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:20.681 I0000 00:00:1728426111.300628 2105588 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:20.681 I0000 00:00:1728426111.302113 2105588 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:20.939 {} 00:14:20.940 00:21:51 sma.sma_discovery -- sma/discovery.sh@237 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:20.940 00:21:51 sma.sma_discovery -- sma/discovery.sh@237 -- # jq -r '. 
| length' 00:14:20.940 00:21:51 sma.sma_discovery -- sma/discovery.sh@237 -- # [[ 0 -eq 0 ]] 00:14:20.940 00:21:51 sma.sma_discovery -- sma/discovery.sh@238 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:20.940 00:21:51 sma.sma_discovery -- sma/discovery.sh@238 -- # jq -r '.[].namespaces | length' 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@238 -- # [[ 0 -eq 0 ]] 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@241 -- # uuidgen 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@241 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 309fc233-c669-4073-a55f-4d2c911864ba 8009 00:14:21.198 00:21:51 sma.sma_discovery -- common/autotest_common.sh@650 -- # local es=0 00:14:21.198 00:21:51 sma.sma_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 309fc233-c669-4073-a55f-4d2c911864ba 8009 00:14:21.198 00:21:51 sma.sma_discovery -- common/autotest_common.sh@638 -- # local arg=attach_volume 00:14:21.198 00:21:51 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.198 00:21:51 sma.sma_discovery -- common/autotest_common.sh@642 -- # type -t attach_volume 00:14:21.198 00:21:51 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.198 00:21:51 sma.sma_discovery -- common/autotest_common.sh@653 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 309fc233-c669-4073-a55f-4d2c911864ba 8009 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@108 -- # shift 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 309fc233-c669-4073-a55f-4d2c911864ba 8009 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=309fc233-c669-4073-a55f-4d2c911864ba 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@51 -- # shift 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@53 -- # cat 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 309fc233-c669-4073-a55f-4d2c911864ba 00:14:21.198 00:21:51 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009') 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 )) 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 )) 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:21.198 00:21:51 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:21.456 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:21.456 I0000 00:00:1728426112.000768 2105814 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, 
work_serializer_clears_time_cache 00:14:21.456 I0000 00:00:1728426112.002259 2105814 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:22.828 [2024-10-09 00:21:53.090235] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 309fc233-c669-4073-a55f-4d2c911864ba 00:14:22.828 [2024-10-09 00:21:53.190480] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 309fc233-c669-4073-a55f-4d2c911864ba 00:14:22.828 [2024-10-09 00:21:53.290723] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 309fc233-c669-4073-a55f-4d2c911864ba 00:14:22.828 [2024-10-09 00:21:53.390967] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 309fc233-c669-4073-a55f-4d2c911864ba 00:14:23.085 [2024-10-09 00:21:53.491212] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 309fc233-c669-4073-a55f-4d2c911864ba 00:14:23.085 [2024-10-09 00:21:53.591458] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 309fc233-c669-4073-a55f-4d2c911864ba 00:14:23.085 [2024-10-09 00:21:53.691704] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 309fc233-c669-4073-a55f-4d2c911864ba 00:14:23.343 [2024-10-09 00:21:53.791948] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 309fc233-c669-4073-a55f-4d2c911864ba 00:14:23.343 [2024-10-09 00:21:53.892193] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 309fc233-c669-4073-a55f-4d2c911864ba 00:14:23.603 [2024-10-09 00:21:53.992439] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 309fc233-c669-4073-a55f-4d2c911864ba 00:14:23.603 [2024-10-09 00:21:54.092685] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 309fc233-c669-4073-a55f-4d2c911864ba 00:14:23.603 [2024-10-09 00:21:54.092708] bdev.c:8400:_bdev_open_async: *ERROR*: Timed out while waiting for bdev '309fc233-c669-4073-a55f-4d2c911864ba' to appear 00:14:23.603 Traceback (most recent call last): 00:14:23.603 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module> 00:14:23.603 main(sys.argv[1:]) 00:14:23.603 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:14:23.603 result = client.call(request['method'], request.get('params', {})) 00:14:23.603 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:23.603 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:14:23.603 response = func(request=json_format.ParseDict(params, input())) 00:14:23.603 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:23.603 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:14:23.603 return _end_unary_response_blocking(state, call, False, None) 00:14:23.603 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:23.603 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:14:23.603 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:14:23.603 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:23.603 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:14:23.603 status = StatusCode.NOT_FOUND 00:14:23.603 details = "Volume could not be found" 00:14:23.603 debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 
{created_time:"2024-10-09T00:21:54.109564679+02:00", grpc_status:5, grpc_message:"Volume could not be found"}" 00:14:23.603 > 00:14:23.603 00:21:54 sma.sma_discovery -- common/autotest_common.sh@653 -- # es=1 00:14:23.603 00:21:54 sma.sma_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:23.603 00:21:54 sma.sma_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:23.603 00:21:54 sma.sma_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:23.603 00:21:54 sma.sma_discovery -- sma/discovery.sh@242 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:23.603 00:21:54 sma.sma_discovery -- sma/discovery.sh@242 -- # jq -r '. | length' 00:14:23.862 00:21:54 sma.sma_discovery -- sma/discovery.sh@242 -- # [[ 0 -eq 0 ]] 00:14:23.862 00:21:54 sma.sma_discovery -- sma/discovery.sh@243 -- # jq -r '.[].namespaces | length' 00:14:23.862 00:21:54 sma.sma_discovery -- sma/discovery.sh@243 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@243 -- # [[ 0 -eq 0 ]] 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@246 -- # volumes=($t1uuid $t2uuid) 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@247 -- # for volume_id in "${volumes[@]}" 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@248 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 19c9068a-e4e4-4460-91b7-2110bcf6ed49 8009 8010 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@108 -- # shift 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 19c9068a-e4e4-4460-91b7-2110bcf6ed49 8009 8010 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@51 -- # shift 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@53 -- # cat 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:24.121 00:21:54 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010') 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 )) 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 )) 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 )) 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@44 -- # echo , 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 )) 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 )) 00:14:24.121 00:21:54 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:24.121 00:21:54 
sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 )) 00:14:24.378 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:24.378 I0000 00:00:1728426114.761951 2106302 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:24.378 I0000 00:00:1728426114.763588 2106302 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:26.907 {} 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@247 -- # for volume_id in "${volumes[@]}" 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@248 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 8009 8010 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@108 -- # shift 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 8009 8010 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@51 -- # shift 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@53 -- # cat 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:26.907 00:21:57 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 8010 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009' '8010') 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 )) 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 )) 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 )) 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@44 -- # echo , 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 )) 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 )) 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 )) 00:14:26.907 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:26.907 I0000 00:00:1728426117.246132 2106777 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:26.907 I0000 00:00:1728426117.247639 2106777 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:26.907 {} 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@251 -- # 
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@251 -- # jq -r '. | length' 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@251 -- # [[ 2 -eq 2 ]] 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@252 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@252 -- # jq -r '.[].trid.trsvcid' 00:14:26.907 00:21:57 sma.sma_discovery -- sma/discovery.sh@252 -- # grep 8009 00:14:27.166 8009 00:14:27.166 00:21:57 sma.sma_discovery -- sma/discovery.sh@253 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:27.166 00:21:57 sma.sma_discovery -- sma/discovery.sh@253 -- # jq -r '.[].trid.trsvcid' 00:14:27.166 00:21:57 sma.sma_discovery -- sma/discovery.sh@253 -- # grep 8010 00:14:27.424 8010 00:14:27.424 00:21:57 sma.sma_discovery -- sma/discovery.sh@254 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:27.424 00:21:57 sma.sma_discovery -- sma/discovery.sh@254 -- # jq -r '.[].namespaces[].uuid' 00:14:27.424 00:21:57 sma.sma_discovery -- sma/discovery.sh@254 -- # grep 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:27.682 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:27.683 00:21:58 sma.sma_discovery -- sma/discovery.sh@255 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:27.683 00:21:58 sma.sma_discovery -- sma/discovery.sh@255 -- # jq -r '.[].namespaces[].uuid' 00:14:27.683 00:21:58 sma.sma_discovery -- sma/discovery.sh@255 -- # grep a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:27.940 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:27.940 00:21:58 sma.sma_discovery -- sma/discovery.sh@258 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:27.940 00:21:58 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:27.940 00:21:58 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:27.940 00:21:58 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:27.940 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:27.940 I0000 00:00:1728426118.551689 2107041 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:27.940 I0000 00:00:1728426118.553253 2107041 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:28.198 {} 00:14:28.198 00:21:58 sma.sma_discovery -- sma/discovery.sh@260 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:28.198 00:21:58 sma.sma_discovery -- sma/discovery.sh@260 -- # jq -r '. 
| length' 00:14:28.198 00:21:58 sma.sma_discovery -- sma/discovery.sh@260 -- # [[ 2 -eq 2 ]] 00:14:28.198 00:21:58 sma.sma_discovery -- sma/discovery.sh@261 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:28.198 00:21:58 sma.sma_discovery -- sma/discovery.sh@261 -- # jq -r '.[].trid.trsvcid' 00:14:28.198 00:21:58 sma.sma_discovery -- sma/discovery.sh@261 -- # grep 8009 00:14:28.456 8009 00:14:28.456 00:21:58 sma.sma_discovery -- sma/discovery.sh@262 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:28.457 00:21:58 sma.sma_discovery -- sma/discovery.sh@262 -- # jq -r '.[].trid.trsvcid' 00:14:28.457 00:21:58 sma.sma_discovery -- sma/discovery.sh@262 -- # grep 8010 00:14:28.715 8010 00:14:28.715 00:21:59 sma.sma_discovery -- sma/discovery.sh@265 -- # NOT delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:28.715 00:21:59 sma.sma_discovery -- common/autotest_common.sh@650 -- # local es=0 00:14:28.715 00:21:59 sma.sma_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:28.715 00:21:59 sma.sma_discovery -- common/autotest_common.sh@638 -- # local arg=delete_device 00:14:28.715 00:21:59 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.715 00:21:59 sma.sma_discovery -- common/autotest_common.sh@642 -- # type -t delete_device 00:14:28.715 00:21:59 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.715 00:21:59 sma.sma_discovery -- common/autotest_common.sh@653 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:28.715 00:21:59 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:28.973 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:28.973 I0000 00:00:1728426119.384245 2107087 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:28.973 I0000 00:00:1728426119.385832 2107087 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:28.973 Traceback (most recent call last): 00:14:28.973 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module> 00:14:28.973 main(sys.argv[1:]) 00:14:28.973 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:14:28.973 result = client.call(request['method'], request.get('params', {})) 00:14:28.973 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:28.973 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:14:28.973 response = func(request=json_format.ParseDict(params, input())) 00:14:28.973 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:28.973 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:14:28.973 return _end_unary_response_blocking(state, call, False, None) 00:14:28.973 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:28.973 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:14:28.973 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:14:28.973
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:28.973 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:14:28.973 status = StatusCode.FAILED_PRECONDITION 00:14:28.973 details = "Device has attached volumes" 00:14:28.973 debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-10-09T00:21:59.387937873+02:00", grpc_status:9, grpc_message:"Device has attached volumes"}" 00:14:28.973 > 00:14:28.973 00:21:59 sma.sma_discovery -- common/autotest_common.sh@653 -- # es=1 00:14:28.973 00:21:59 sma.sma_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:28.973 00:21:59 sma.sma_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:28.973 00:21:59 sma.sma_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:28.973 00:21:59 sma.sma_discovery -- sma/discovery.sh@267 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:28.973 00:21:59 sma.sma_discovery -- sma/discovery.sh@267 -- # jq -r '. | length' 00:14:29.237 00:21:59 sma.sma_discovery -- sma/discovery.sh@267 -- # [[ 2 -eq 2 ]] 00:14:29.237 00:21:59 sma.sma_discovery -- sma/discovery.sh@268 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:29.237 00:21:59 sma.sma_discovery -- sma/discovery.sh@268 -- # jq -r '.[].trid.trsvcid' 00:14:29.237 00:21:59 sma.sma_discovery -- sma/discovery.sh@268 -- # grep 8009 00:14:29.237 8009 00:14:29.237 00:21:59 sma.sma_discovery -- sma/discovery.sh@269 -- # jq -r '.[].trid.trsvcid' 00:14:29.237 00:21:59 sma.sma_discovery -- sma/discovery.sh@269 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:29.237 00:21:59 sma.sma_discovery -- sma/discovery.sh@269 -- # grep 8010 00:14:29.502 8010 00:14:29.502 00:22:00 sma.sma_discovery -- sma/discovery.sh@272 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:29.502 00:22:00 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:29.502 00:22:00 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:29.502 00:22:00 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:29.760 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:29.760 I0000 00:00:1728426120.244539 2107338 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:29.760 I0000 00:00:1728426120.246283 2107338 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:29.760 {} 00:14:29.760 00:22:00 sma.sma_discovery -- sma/discovery.sh@273 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:29.760 00:22:00 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:30.017 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:30.017 I0000 00:00:1728426120.510345 2107361 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:30.017 I0000 
00:00:1728426120.511804 2107361 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:30.017 {} 00:14:30.017 00:22:00 sma.sma_discovery -- sma/discovery.sh@275 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:30.017 00:22:00 sma.sma_discovery -- sma/discovery.sh@275 -- # jq -r '. | length' 00:14:30.275 00:22:00 sma.sma_discovery -- sma/discovery.sh@275 -- # [[ 0 -eq 0 ]] 00:14:30.275 00:22:00 sma.sma_discovery -- sma/discovery.sh@276 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:30.275 00:22:00 sma.sma_discovery -- common/autotest_common.sh@650 -- # local es=0 00:14:30.275 00:22:00 sma.sma_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:30.275 00:22:00 sma.sma_discovery -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py 00:14:30.275 00:22:00 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.275 00:22:00 sma.sma_discovery -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py 00:14:30.275 00:22:00 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.275 00:22:00 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py 00:14:30.275 00:22:00 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.275 00:22:00 sma.sma_discovery -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py 00:14:30.275 00:22:00 sma.sma_discovery -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py ]] 00:14:30.275 00:22:00 sma.sma_discovery -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:30.532 [2024-10-09 00:22:00.935783] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:local0' does not exist 00:14:30.532 request: 00:14:30.532 { 00:14:30.532 "nqn": "nqn.2016-06.io.spdk:local0", 00:14:30.532 "method": "nvmf_get_subsystems", 00:14:30.532 "req_id": 1 00:14:30.532 } 00:14:30.532 Got JSON-RPC error response 00:14:30.532 response: 00:14:30.532 { 00:14:30.532 "code": -19, 00:14:30.532 "message": "No such device" 00:14:30.532 } 00:14:30.532 00:22:00 sma.sma_discovery -- common/autotest_common.sh@653 -- # es=1 00:14:30.532 00:22:00 sma.sma_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:30.532 00:22:00 sma.sma_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:30.532 00:22:00 sma.sma_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:30.532 00:22:00 sma.sma_discovery -- sma/discovery.sh@279 -- # create_device nqn.2016-06.io.spdk:local0 19c9068a-e4e4-4460-91b7-2110bcf6ed49 8009 00:14:30.532 00:22:00 sma.sma_discovery -- sma/discovery.sh@279 -- # jq -r .handle 00:14:30.532 00:22:00 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0 00:14:30.532 00:22:00 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id=19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:30.532 00:22:00 
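The NOT wrapper exercised above only asserts a non-zero exit status from the wrapped command; the specific gRPC statuses this suite provokes (NOT_FOUND for a missing volume, FAILED_PRECONDITION while volumes are still attached) travel inside the _InactiveRpcError seen in the tracebacks. A hedged Python sketch of the equivalent client-side check, where stub_method and request are placeholders rather than SPDK's API:

import grpc

def call_expecting(stub_method, request, expected: grpc.StatusCode) -> None:
    # On a unary call the raised RpcError also implements grpc.Call,
    # so code() and details() are available for the assertion.
    try:
        stub_method(request)
    except grpc.RpcError as err:
        assert err.code() == expected, f"{err.code()}: {err.details()}"
    else:
        raise AssertionError("RPC unexpectedly succeeded")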
sma.sma_discovery -- sma/discovery.sh@71 -- # local volume= 00:14:30.532 00:22:00 sma.sma_discovery -- sma/discovery.sh@73 -- # shift 00:14:30.532 00:22:00 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n 19c9068a-e4e4-4460-91b7-2110bcf6ed49 ]] 00:14:30.532 00:22:00 sma.sma_discovery -- sma/discovery.sh@75 -- # format_volume 19c9068a-e4e4-4460-91b7-2110bcf6ed49 8009 00:14:30.532 00:22:00 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:30.532 00:22:00 sma.sma_discovery -- sma/discovery.sh@51 -- # shift 00:14:30.532 00:22:00 sma.sma_discovery -- sma/discovery.sh@53 -- # cat 00:14:30.532 00:22:00 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:30.532 00:22:00 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:30.532 00:22:01 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8009 00:14:30.532 00:22:01 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8009') 00:14:30.532 00:22:01 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps 00:14:30.532 00:22:01 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 )) 00:14:30.532 00:22:01 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:30.532 00:22:01 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:30.532 00:22:01 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 )) 00:14:30.532 00:22:01 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:30.532 00:22:01 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:30.532 00:22:01 sma.sma_discovery -- sma/discovery.sh@75 -- # volume='"volume": { 00:14:30.532 "volume_id": "GckGiuTkRGCRtyEQvPbtSQ==", 00:14:30.532 "nvmf": { 00:14:30.532 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:30.532 "discovery": { 00:14:30.532 "discovery_endpoints": [ 00:14:30.532 { 00:14:30.532 "trtype": "tcp", 00:14:30.532 "traddr": "127.0.0.1", 00:14:30.532 "trsvcid": "8009" 00:14:30.532 } 00:14:30.532 ] 00:14:30.532 } 00:14:30.532 } 00:14:30.532 },' 00:14:30.532 00:22:01 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:30.792 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:30.792 I0000 00:00:1728426121.189992 2107611 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:30.792 I0000 00:00:1728426121.194583 2107611 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:31.783 [2024-10-09 00:22:02.309841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:14:31.783 00:22:02 sma.sma_discovery -- sma/discovery.sh@279 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:31.783 00:22:02 sma.sma_discovery -- sma/discovery.sh@282 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:31.783 00:22:02 sma.sma_discovery -- sma/discovery.sh@282 -- # jq -r '. 
| length' 00:14:32.040 00:22:02 sma.sma_discovery -- sma/discovery.sh@282 -- # [[ 1 -eq 1 ]] 00:14:32.040 00:22:02 sma.sma_discovery -- sma/discovery.sh@283 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:32.040 00:22:02 sma.sma_discovery -- sma/discovery.sh@283 -- # jq -r '.[].trid.trsvcid' 00:14:32.040 00:22:02 sma.sma_discovery -- sma/discovery.sh@283 -- # grep 8009 00:14:32.298 8009 00:14:32.298 00:22:02 sma.sma_discovery -- sma/discovery.sh@284 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:32.298 00:22:02 sma.sma_discovery -- sma/discovery.sh@284 -- # jq -r '.[].namespaces | length' 00:14:32.556 00:22:02 sma.sma_discovery -- sma/discovery.sh@284 -- # [[ 1 -eq 1 ]] 00:14:32.556 00:22:02 sma.sma_discovery -- sma/discovery.sh@285 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:32.556 00:22:02 sma.sma_discovery -- sma/discovery.sh@285 -- # jq -r '.[].namespaces[0].uuid' 00:14:32.556 00:22:03 sma.sma_discovery -- sma/discovery.sh@285 -- # [[ 19c9068a-e4e4-4460-91b7-2110bcf6ed49 == \1\9\c\9\0\6\8\a\-\e\4\e\4\-\4\4\6\0\-\9\1\b\7\-\2\1\1\0\b\c\f\6\e\d\4\9 ]] 00:14:32.556 00:22:03 sma.sma_discovery -- sma/discovery.sh@288 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:32.556 00:22:03 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:32.556 00:22:03 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:32.556 00:22:03 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:32.815 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:32.815 I0000 00:00:1728426123.343346 2107886 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:32.815 I0000 00:00:1728426123.345018 2107886 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:32.815 {} 00:14:32.815 00:22:03 sma.sma_discovery -- sma/discovery.sh@290 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:32.815 00:22:03 sma.sma_discovery -- sma/discovery.sh@290 -- # jq -r '. 
| length' 00:14:33.076 00:22:03 sma.sma_discovery -- sma/discovery.sh@290 -- # [[ 0 -eq 0 ]] 00:14:33.076 00:22:03 sma.sma_discovery -- sma/discovery.sh@291 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:33.076 00:22:03 sma.sma_discovery -- sma/discovery.sh@291 -- # jq -r '.[].namespaces | length' 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@291 -- # [[ 0 -eq 0 ]] 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@294 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 8010 8011 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@108 -- # shift 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 8010 8011 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@51 -- # shift 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@53 -- # cat 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:33.334 00:22:03 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010 8011 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010' '8011') 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 )) 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 )) 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 )) 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@44 -- # echo , 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:33.334 00:22:03 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 )) 00:14:33.335 00:22:03 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:33.335 00:22:03 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 2 )) 00:14:33.335 00:22:03 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:33.335 00:22:03 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 2 )) 00:14:33.592 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:33.592 I0000 00:00:1728426124.052016 2108136 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:33.592 I0000 00:00:1728426124.053547 2108136 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:34.965 {} 00:14:34.965 00:22:05 sma.sma_discovery -- sma/discovery.sh@297 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:34.965 00:22:05 sma.sma_discovery -- sma/discovery.sh@297 -- # jq -r '. 
| length' 00:14:34.965 00:22:05 sma.sma_discovery -- sma/discovery.sh@297 -- # [[ 1 -eq 1 ]] 00:14:34.965 00:22:05 sma.sma_discovery -- sma/discovery.sh@298 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:34.965 00:22:05 sma.sma_discovery -- sma/discovery.sh@298 -- # jq -r '.[].namespaces | length' 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@298 -- # [[ 1 -eq 1 ]] 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@299 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@299 -- # jq -r '.[].namespaces[0].uuid' 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@299 -- # [[ a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 == \a\6\8\7\8\2\f\9\-\0\6\1\e\-\4\a\4\2\-\a\5\a\d\-\f\f\2\c\4\f\4\b\9\0\b\6 ]] 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@302 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 39666654-2047-4fce-a292-09d39833e871 8011 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@108 -- # shift 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 39666654-2047-4fce-a292-09d39833e871 8011 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=39666654-2047-4fce-a292-09d39833e871 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@51 -- # shift 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@53 -- # cat 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 39666654-2047-4fce-a292-09d39833e871 00:14:35.224 00:22:05 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8011 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8011') 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 )) 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 )) 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:35.224 00:22:05 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:35.481 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:35.481 I0000 00:00:1728426126.009467 2108407 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:35.481 I0000 00:00:1728426126.010822 2108407 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:35.481 {} 00:14:35.481 00:22:06 sma.sma_discovery -- sma/discovery.sh@305 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:35.481 00:22:06 sma.sma_discovery -- sma/discovery.sh@305 -- # jq -r '. 
| length' 00:14:35.738 00:22:06 sma.sma_discovery -- sma/discovery.sh@305 -- # [[ 1 -eq 1 ]] 00:14:35.738 00:22:06 sma.sma_discovery -- sma/discovery.sh@306 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:35.738 00:22:06 sma.sma_discovery -- sma/discovery.sh@306 -- # jq -r '.[].namespaces | length' 00:14:35.996 00:22:06 sma.sma_discovery -- sma/discovery.sh@306 -- # [[ 2 -eq 2 ]] 00:14:35.996 00:22:06 sma.sma_discovery -- sma/discovery.sh@307 -- # jq -r '.[].namespaces[].uuid' 00:14:35.996 00:22:06 sma.sma_discovery -- sma/discovery.sh@307 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:35.996 00:22:06 sma.sma_discovery -- sma/discovery.sh@307 -- # grep a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:36.255 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:36.255 00:22:06 sma.sma_discovery -- sma/discovery.sh@308 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:36.255 00:22:06 sma.sma_discovery -- sma/discovery.sh@308 -- # jq -r '.[].namespaces[].uuid' 00:14:36.255 00:22:06 sma.sma_discovery -- sma/discovery.sh@308 -- # grep 39666654-2047-4fce-a292-09d39833e871 00:14:36.255 39666654-2047-4fce-a292-09d39833e871 00:14:36.255 00:22:06 sma.sma_discovery -- sma/discovery.sh@311 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:36.255 00:22:06 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:36.255 00:22:06 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:36.255 00:22:06 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:36.512 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:36.512 I0000 00:00:1728426127.096162 2108667 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:36.512 I0000 00:00:1728426127.097750 2108667 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:36.512 [2024-10-09 00:22:07.102182] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:36.512 {} 00:14:36.512 00:22:07 sma.sma_discovery -- sma/discovery.sh@312 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:36.512 00:22:07 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:36.512 00:22:07 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:36.512 00:22:07 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:36.770 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:36.770 I0000 00:00:1728426127.348456 2108691 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:36.770 I0000 00:00:1728426127.349993 2108691 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 
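Each detach_volume step above funnels the UUID through sma/common.sh's uuid2base64 helper (the `# python` trace line) before it goes into the protobuf request. The transformation is the 16 raw UUID bytes, base64-encoded; the sketch below reproduces it in Python, and the assert checks it against the volume_id value printed in the create_device fragment earlier in this log.

import base64
import uuid

def uuid2base64(volume_id: str) -> str:
    # 16 raw UUID bytes, base64-encoded: the wire form of volume_id
    # in the SMA protobuf messages.
    return base64.b64encode(uuid.UUID(volume_id).bytes).decode()

assert uuid2base64("19c9068a-e4e4-4460-91b7-2110bcf6ed49") == "GckGiuTkRGCRtyEQvPbtSQ=="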
00:14:36.770 {} 00:14:37.043 00:22:07 sma.sma_discovery -- sma/discovery.sh@313 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 39666654-2047-4fce-a292-09d39833e871 00:14:37.043 00:22:07 sma.sma_discovery -- sma/discovery.sh@121 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:37.043 00:22:07 sma.sma_discovery -- sma/discovery.sh@121 -- # uuid2base64 39666654-2047-4fce-a292-09d39833e871 00:14:37.043 00:22:07 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:37.043 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:37.043 I0000 00:00:1728426127.642138 2108716 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:37.043 I0000 00:00:1728426127.643663 2108716 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:37.305 {} 00:14:37.305 00:22:07 sma.sma_discovery -- sma/discovery.sh@314 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:37.305 00:22:07 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:37.305 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:37.305 I0000 00:00:1728426127.910404 2108846 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:37.305 I0000 00:00:1728426127.911936 2108846 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:37.305 {} 00:14:37.563 00:22:07 sma.sma_discovery -- sma/discovery.sh@315 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:37.563 00:22:07 sma.sma_discovery -- sma/discovery.sh@315 -- # jq -r '. 
| length' 00:14:37.563 00:22:08 sma.sma_discovery -- sma/discovery.sh@315 -- # [[ 0 -eq 0 ]] 00:14:37.563 00:22:08 sma.sma_discovery -- sma/discovery.sh@317 -- # create_device nqn.2016-06.io.spdk:local0 00:14:37.563 00:22:08 sma.sma_discovery -- sma/discovery.sh@69 -- # local nqn=nqn.2016-06.io.spdk:local0 00:14:37.563 00:22:08 sma.sma_discovery -- sma/discovery.sh@70 -- # local volume_id= 00:14:37.563 00:22:08 sma.sma_discovery -- sma/discovery.sh@71 -- # local volume= 00:14:37.563 00:22:08 sma.sma_discovery -- sma/discovery.sh@73 -- # shift 00:14:37.563 00:22:08 sma.sma_discovery -- sma/discovery.sh@74 -- # [[ -n '' ]] 00:14:37.563 00:22:08 sma.sma_discovery -- sma/discovery.sh@78 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:37.563 00:22:08 sma.sma_discovery -- sma/discovery.sh@317 -- # jq -r .handle 00:14:37.828 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:37.828 I0000 00:00:1728426128.332487 2108987 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:37.828 I0000 00:00:1728426128.334078 2108987 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:37.828 [2024-10-09 00:22:08.354107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:14:37.828 00:22:08 sma.sma_discovery -- sma/discovery.sh@317 -- # device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:37.829 00:22:08 sma.sma_discovery -- sma/discovery.sh@320 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:37.829 00:22:08 sma.sma_discovery -- sma/discovery.sh@320 -- # uuid2base64 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:37.829 00:22:08 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:38.089 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:38.089 I0000 00:00:1728426128.616430 2109008 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:38.089 I0000 00:00:1728426128.617864 2109008 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:39.463 {} 00:14:39.463 00:22:09 sma.sma_discovery -- sma/discovery.sh@345 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:39.463 00:22:09 sma.sma_discovery -- sma/discovery.sh@345 -- # jq -r '. 
| length' 00:14:39.463 00:22:09 sma.sma_discovery -- sma/discovery.sh@345 -- # [[ 1 -eq 1 ]] 00:14:39.463 00:22:09 sma.sma_discovery -- sma/discovery.sh@346 -- # grep 8009 00:14:39.463 00:22:09 sma.sma_discovery -- sma/discovery.sh@346 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:39.463 00:22:09 sma.sma_discovery -- sma/discovery.sh@346 -- # jq -r '.[].trid.trsvcid' 00:14:39.721 8009 00:14:39.721 00:22:10 sma.sma_discovery -- sma/discovery.sh@347 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:39.721 00:22:10 sma.sma_discovery -- sma/discovery.sh@347 -- # jq -r '.[].namespaces | length' 00:14:39.978 00:22:10 sma.sma_discovery -- sma/discovery.sh@347 -- # [[ 1 -eq 1 ]] 00:14:39.978 00:22:10 sma.sma_discovery -- sma/discovery.sh@348 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:39.978 00:22:10 sma.sma_discovery -- sma/discovery.sh@348 -- # jq -r '.[].namespaces[0].uuid' 00:14:39.978 00:22:10 sma.sma_discovery -- sma/discovery.sh@348 -- # [[ 19c9068a-e4e4-4460-91b7-2110bcf6ed49 == \1\9\c\9\0\6\8\a\-\e\4\e\4\-\4\4\6\0\-\9\1\b\7\-\2\1\1\0\b\c\f\6\e\d\4\9 ]] 00:14:39.978 00:22:10 sma.sma_discovery -- sma/discovery.sh@351 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:39.978 00:22:10 sma.sma_discovery -- sma/discovery.sh@351 -- # uuid2base64 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:39.978 00:22:10 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:39.978 00:22:10 sma.sma_discovery -- common/autotest_common.sh@650 -- # local es=0 00:14:39.978 00:22:10 sma.sma_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:40.236 00:22:10 sma.sma_discovery -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:40.236 00:22:10 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.236 00:22:10 sma.sma_discovery -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:40.236 00:22:10 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.236 00:22:10 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:40.236 00:22:10 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.236 00:22:10 sma.sma_discovery -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:40.236 00:22:10 sma.sma_discovery -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]] 00:14:40.236 00:22:10 sma.sma_discovery -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:40.236 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:40.236 I0000 00:00:1728426130.804204 2109462 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:40.236 I0000 
00:00:1728426130.805699 2109462 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:41.616 Traceback (most recent call last): 00:14:41.616 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module> 00:14:41.616 main(sys.argv[1:]) 00:14:41.616 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:14:41.616 result = client.call(request['method'], request.get('params', {})) 00:14:41.616 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:41.616 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:14:41.616 response = func(request=json_format.ParseDict(params, input())) 00:14:41.616 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:41.616 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:14:41.616 return _end_unary_response_blocking(state, call, False, None) 00:14:41.616 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:41.616 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:14:41.616 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:14:41.617 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:41.617 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:14:41.617 status = StatusCode.INVALID_ARGUMENT 00:14:41.617 details = "Unexpected subsystem NQN" 00:14:41.617 debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {created_time:"2024-10-09T00:22:11.917294556+02:00", grpc_status:3, grpc_message:"Unexpected subsystem NQN"}" 00:14:41.617 > 00:14:41.617 00:22:11 sma.sma_discovery -- common/autotest_common.sh@653 -- # es=1 00:14:41.617 00:22:11 sma.sma_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:41.617 00:22:11 sma.sma_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:41.617 00:22:11 sma.sma_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:41.617 00:22:11 sma.sma_discovery -- sma/discovery.sh@377 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:41.617 00:22:11 sma.sma_discovery -- sma/discovery.sh@377 -- # jq -r '.
| length' 00:14:41.617 00:22:12 sma.sma_discovery -- sma/discovery.sh@377 -- # [[ 1 -eq 1 ]] 00:14:41.617 00:22:12 sma.sma_discovery -- sma/discovery.sh@378 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:41.617 00:22:12 sma.sma_discovery -- sma/discovery.sh@378 -- # jq -r '.[].trid.trsvcid' 00:14:41.617 00:22:12 sma.sma_discovery -- sma/discovery.sh@378 -- # grep 8009 00:14:41.875 8009 00:14:41.875 00:22:12 sma.sma_discovery -- sma/discovery.sh@379 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:41.875 00:22:12 sma.sma_discovery -- sma/discovery.sh@379 -- # jq -r '.[].namespaces | length' 00:14:42.133 00:22:12 sma.sma_discovery -- sma/discovery.sh@379 -- # [[ 1 -eq 1 ]] 00:14:42.133 00:22:12 sma.sma_discovery -- sma/discovery.sh@380 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:42.133 00:22:12 sma.sma_discovery -- sma/discovery.sh@380 -- # jq -r '.[].namespaces[0].uuid' 00:14:42.133 00:22:12 sma.sma_discovery -- sma/discovery.sh@380 -- # [[ 19c9068a-e4e4-4460-91b7-2110bcf6ed49 == \1\9\c\9\0\6\8\a\-\e\4\e\4\-\4\4\6\0\-\9\1\b\7\-\2\1\1\0\b\c\f\6\e\d\4\9 ]] 00:14:42.133 00:22:12 sma.sma_discovery -- sma/discovery.sh@383 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:42.133 00:22:12 sma.sma_discovery -- sma/discovery.sh@383 -- # uuid2base64 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:42.133 00:22:12 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:42.133 00:22:12 sma.sma_discovery -- common/autotest_common.sh@650 -- # local es=0 00:14:42.133 00:22:12 sma.sma_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:42.133 00:22:12 sma.sma_discovery -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:42.133 00:22:12 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.133 00:22:12 sma.sma_discovery -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:42.133 00:22:12 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.133 00:22:12 sma.sma_discovery -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:42.392 00:22:12 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.392 00:22:12 sma.sma_discovery -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:42.392 00:22:12 sma.sma_discovery -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]] 00:14:42.392 00:22:12 sma.sma_discovery -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:42.392 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:42.392 I0000 00:00:1728426132.949381 2109759 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:42.392 I0000 
00:00:1728426132.950885 2109759 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:47.653 [2024-10-09 00:22:17.976419] bdev_nvme.c:7246:discovery_poller: *ERROR*: Discovery[127.0.0.1:8010] timed out while attaching NVM ctrlrs 00:14:47.653 Traceback (most recent call last): 00:14:47.653 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module> 00:14:47.653 main(sys.argv[1:]) 00:14:47.653 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:14:47.653 result = client.call(request['method'], request.get('params', {})) 00:14:47.653 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:47.653 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:14:47.653 response = func(request=json_format.ParseDict(params, input())) 00:14:47.653 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:47.653 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:14:47.653 return _end_unary_response_blocking(state, call, False, None) 00:14:47.653 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:47.653 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:14:47.653 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:14:47.653 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:47.653 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:14:47.653 status = StatusCode.INTERNAL 00:14:47.653 details = "Failed to start discovery" 00:14:47.653 debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Failed to start discovery", grpc_status:13, created_time:"2024-10-09T00:22:17.978401896+02:00"}" 00:14:47.653 > 00:14:47.653 00:22:18 sma.sma_discovery -- common/autotest_common.sh@653 -- # es=1 00:14:47.653 00:22:18 sma.sma_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:47.653 00:22:18 sma.sma_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:47.653 00:22:18 sma.sma_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:47.653 00:22:18 sma.sma_discovery -- sma/discovery.sh@408 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:47.653 00:22:18 sma.sma_discovery -- sma/discovery.sh@408 -- # jq -r '.
| length' 00:14:47.653 00:22:18 sma.sma_discovery -- sma/discovery.sh@408 -- # [[ 1 -eq 1 ]] 00:14:47.653 00:22:18 sma.sma_discovery -- sma/discovery.sh@409 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:47.653 00:22:18 sma.sma_discovery -- sma/discovery.sh@409 -- # grep 8009 00:14:47.653 00:22:18 sma.sma_discovery -- sma/discovery.sh@409 -- # jq -r '.[].trid.trsvcid' 00:14:47.912 8009 00:14:47.912 00:22:18 sma.sma_discovery -- sma/discovery.sh@410 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:47.912 00:22:18 sma.sma_discovery -- sma/discovery.sh@410 -- # jq -r '.[].namespaces | length' 00:14:48.170 00:22:18 sma.sma_discovery -- sma/discovery.sh@410 -- # [[ 1 -eq 1 ]] 00:14:48.170 00:22:18 sma.sma_discovery -- sma/discovery.sh@411 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:48.170 00:22:18 sma.sma_discovery -- sma/discovery.sh@411 -- # jq -r '.[].namespaces[0].uuid' 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@411 -- # [[ 19c9068a-e4e4-4460-91b7-2110bcf6ed49 == \1\9\c\9\0\6\8\a\-\e\4\e\4\-\4\4\6\0\-\9\1\b\7\-\2\1\1\0\b\c\f\6\e\d\4\9 ]] 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@414 -- # uuidgen 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@414 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 30dfdd38-888c-430a-996a-0dede7ea3723 8008 00:14:48.428 00:22:18 sma.sma_discovery -- common/autotest_common.sh@650 -- # local es=0 00:14:48.428 00:22:18 sma.sma_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 30dfdd38-888c-430a-996a-0dede7ea3723 8008 00:14:48.428 00:22:18 sma.sma_discovery -- common/autotest_common.sh@638 -- # local arg=attach_volume 00:14:48.428 00:22:18 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.428 00:22:18 sma.sma_discovery -- common/autotest_common.sh@642 -- # type -t attach_volume 00:14:48.428 00:22:18 sma.sma_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.428 00:22:18 sma.sma_discovery -- common/autotest_common.sh@653 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 30dfdd38-888c-430a-996a-0dede7ea3723 8008 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@108 -- # shift 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 30dfdd38-888c-430a-996a-0dede7ea3723 8008 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=30dfdd38-888c-430a-996a-0dede7ea3723 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@51 -- # shift 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@53 -- # cat 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 30dfdd38-888c-430a-996a-0dede7ea3723 00:14:48.428 00:22:18 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8008 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8008') 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps 
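The NOT attach_volume step being traced here points a freshly generated (uuidgen) volume id at port 8008, where no target listens, so the discovery attempts that follow fail once per second with errno 111, that is ECONNREFUSED, until SMA reports "Failed to start discovery". A small self-contained Python illustration of that errno (the host/port defaults mirror the test's dead endpoint; this is not SPDK code):

import socket

def connection_refused(host: str = "127.0.0.1", port: int = 8008) -> bool:
    # errno 111 means the kernel actively rejected the TCP connect
    # because nothing is listening on the port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        try:
            sock.connect((host, port))
        except ConnectionRefusedError:
            return True
        return False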
00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 )) 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 )) 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:48.428 00:22:18 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:48.428 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:48.428 I0000 00:00:1728426139.057191 2110846 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:48.428 I0000 00:00:1728426139.058593 2110846 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:49.800 [2024-10-09 00:22:20.071948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:49.800 [2024-10-09 00:22:20.072002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032da00 with addr=127.0.0.1, port=8008 00:14:49.800 [2024-10-09 00:22:20.072068] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:49.800 [2024-10-09 00:22:20.072081] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:49.800 [2024-10-09 00:22:20.072093] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect 00:14:50.734 [2024-10-09 00:22:21.074361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:50.734 [2024-10-09 00:22:21.074407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032dc80 with addr=127.0.0.1, port=8008 00:14:50.734 [2024-10-09 00:22:21.074471] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:50.734 [2024-10-09 00:22:21.074483] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:50.734 [2024-10-09 00:22:21.074493] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect 00:14:51.668 [2024-10-09 00:22:22.076772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:51.668 [2024-10-09 00:22:22.076806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032df00 with addr=127.0.0.1, port=8008 00:14:51.668 [2024-10-09 00:22:22.076871] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:51.668 [2024-10-09 00:22:22.076882] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:51.668 [2024-10-09 00:22:22.076891] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect 00:14:52.606 [2024-10-09 00:22:23.079241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:14:52.606 [2024-10-09 00:22:23.079276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500032e180 with addr=127.0.0.1, port=8008 00:14:52.606 [2024-10-09 00:22:23.079337] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:14:52.606 [2024-10-09 00:22:23.079347] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:14:52.606 [2024-10-09 00:22:23.079360] 
bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] could not start discovery connect 00:14:53.540 [2024-10-09 00:22:24.081507] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[127.0.0.1:8008] timed out while attaching discovery ctrlr 00:14:53.540 Traceback (most recent call last): 00:14:53.540 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module> 00:14:53.540 main(sys.argv[1:]) 00:14:53.540 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:14:53.540 result = client.call(request['method'], request.get('params', {})) 00:14:53.540 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:53.540 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:14:53.540 response = func(request=json_format.ParseDict(params, input())) 00:14:53.540 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:53.540 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:14:53.540 return _end_unary_response_blocking(state, call, False, None) 00:14:53.540 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:53.540 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:14:53.540 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:14:53.540 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:14:53.540 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:14:53.540 status = StatusCode.INTERNAL 00:14:53.540 details = "Failed to start discovery" 00:14:53.540 debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:8080 {grpc_message:"Failed to start discovery", grpc_status:13, created_time:"2024-10-09T00:22:24.085213544+02:00"}" 00:14:53.540 > 00:14:53.540 00:22:24 sma.sma_discovery -- common/autotest_common.sh@653 -- # es=1 00:14:53.540 00:22:24 sma.sma_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:53.540 00:22:24 sma.sma_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:53.540 00:22:24 sma.sma_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:53.540 00:22:24 sma.sma_discovery -- sma/discovery.sh@415 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:53.540 00:22:24 sma.sma_discovery -- sma/discovery.sh@415 -- # jq -r '. | length' 00:14:53.798 00:22:24 sma.sma_discovery -- sma/discovery.sh@415 -- # [[ 1 -eq 1 ]] 00:14:53.798 00:22:24 sma.sma_discovery -- sma/discovery.sh@416 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:53.798 00:22:24 sma.sma_discovery -- sma/discovery.sh@416 -- # jq -r '.[].trid.trsvcid' 00:14:53.798 00:22:24 sma.sma_discovery -- sma/discovery.sh@416 -- # grep 8009 00:14:54.057 8009 00:14:54.057 00:22:24 sma.sma_discovery -- sma/discovery.sh@420 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node1 1 00:14:54.315 00:22:24 sma.sma_discovery -- sma/discovery.sh@422 -- # sleep 2 00:14:54.573 WARNING:spdk.sma.volume.volume:Found disconnected volume: 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:56.471 00:22:26 sma.sma_discovery -- sma/discovery.sh@423 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:56.471 00:22:26 sma.sma_discovery -- sma/discovery.sh@423 -- # jq -r '.
| length' 00:14:56.472 00:22:26 sma.sma_discovery -- sma/discovery.sh@423 -- # [[ 0 -eq 0 ]] 00:14:56.472 00:22:26 sma.sma_discovery -- sma/discovery.sh@424 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock1 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node1 19c9068a-e4e4-4460-91b7-2110bcf6ed49 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@428 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 8010 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@108 -- # shift 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 8010 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@51 -- # shift 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@53 -- # cat 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:14:56.729 00:22:27 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010') 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 )) 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 )) 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:56.729 00:22:27 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:56.988 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:56.988 I0000 00:00:1728426147.380603 2112252 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:56.988 I0000 00:00:1728426147.382128 2112252 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:57.922 {} 00:14:57.922 00:22:28 sma.sma_discovery -- sma/discovery.sh@429 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:local0 39666654-2047-4fce-a292-09d39833e871 8010 00:14:57.922 00:22:28 sma.sma_discovery -- sma/discovery.sh@106 -- # local device_id=nvmf-tcp:nqn.2016-06.io.spdk:local0 00:14:57.922 00:22:28 sma.sma_discovery -- sma/discovery.sh@108 -- # shift 00:14:57.922 00:22:28 sma.sma_discovery -- sma/discovery.sh@109 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:14:57.922 00:22:28 sma.sma_discovery -- sma/discovery.sh@109 -- # format_volume 39666654-2047-4fce-a292-09d39833e871 8010 00:14:57.922 00:22:28 sma.sma_discovery -- sma/discovery.sh@50 -- # local volume_id=39666654-2047-4fce-a292-09d39833e871 00:14:57.922 00:22:28 sma.sma_discovery -- sma/discovery.sh@51 -- # shift 00:14:57.922 00:22:28 sma.sma_discovery -- 
sma/discovery.sh@53 -- # cat 00:14:57.922 00:22:28 sma.sma_discovery -- sma/discovery.sh@53 -- # uuid2base64 39666654-2047-4fce-a292-09d39833e871 00:14:58.180 00:22:28 sma.sma_discovery -- sma/common.sh@20 -- # python 00:14:58.180 00:22:28 sma.sma_discovery -- sma/discovery.sh@53 -- # format_endpoints 8010 00:14:58.180 00:22:28 sma.sma_discovery -- sma/discovery.sh@34 -- # eps=('8010') 00:14:58.180 00:22:28 sma.sma_discovery -- sma/discovery.sh@34 -- # local eps 00:14:58.180 00:22:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i = 0 )) 00:14:58.180 00:22:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:58.180 00:22:28 sma.sma_discovery -- sma/discovery.sh@36 -- # cat 00:14:58.180 00:22:28 sma.sma_discovery -- sma/discovery.sh@43 -- # (( i + 1 == 1 )) 00:14:58.180 00:22:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i++ )) 00:14:58.180 00:22:28 sma.sma_discovery -- sma/discovery.sh@35 -- # (( i < 1 )) 00:14:58.180 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:14:58.180 I0000 00:00:1728426148.784696 2112525 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:14:58.180 I0000 00:00:1728426148.786249 2112525 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:14:58.438 {} 00:14:58.438 00:22:28 sma.sma_discovery -- sma/discovery.sh@430 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:14:58.438 00:22:28 sma.sma_discovery -- sma/discovery.sh@430 -- # jq -r '.[].namespaces | length' 00:14:58.438 00:22:29 sma.sma_discovery -- sma/discovery.sh@430 -- # [[ 2 -eq 2 ]] 00:14:58.438 00:22:29 sma.sma_discovery -- sma/discovery.sh@431 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:14:58.438 00:22:29 sma.sma_discovery -- sma/discovery.sh@431 -- # jq -r '. | length' 00:14:58.695 00:22:29 sma.sma_discovery -- sma/discovery.sh@431 -- # [[ 1 -eq 1 ]] 00:14:58.695 00:22:29 sma.sma_discovery -- sma/discovery.sh@432 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node2 2 00:14:58.952 00:22:29 sma.sma_discovery -- sma/discovery.sh@434 -- # sleep 2 00:14:58.952 WARNING:spdk.sma.volume.volume:Found disconnected volume: 39666654-2047-4fce-a292-09d39833e871 00:15:00.866 00:22:31 sma.sma_discovery -- sma/discovery.sh@436 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:15:00.866 00:22:31 sma.sma_discovery -- sma/discovery.sh@436 -- # jq -r '.[].namespaces | length' 00:15:01.125 00:22:31 sma.sma_discovery -- sma/discovery.sh@436 -- # [[ 1 -eq 1 ]] 00:15:01.125 00:22:31 sma.sma_discovery -- sma/discovery.sh@437 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:15:01.125 00:22:31 sma.sma_discovery -- sma/discovery.sh@437 -- # jq -r '. 
| length' 00:15:01.383 00:22:31 sma.sma_discovery -- sma/discovery.sh@437 -- # [[ 1 -eq 1 ]] 00:15:01.383 00:22:31 sma.sma_discovery -- sma/discovery.sh@438 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:node2 1 00:15:01.383 00:22:32 sma.sma_discovery -- sma/discovery.sh@440 -- # sleep 2 00:15:01.949 WARNING:spdk.sma.volume.volume:Found disconnected volume: a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:15:03.845 00:22:34 sma.sma_discovery -- sma/discovery.sh@442 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems nqn.2016-06.io.spdk:local0 00:15:03.845 00:22:34 sma.sma_discovery -- sma/discovery.sh@442 -- # jq -r '.[].namespaces | length' 00:15:03.845 00:22:34 sma.sma_discovery -- sma/discovery.sh@442 -- # [[ 0 -eq 0 ]] 00:15:03.845 00:22:34 sma.sma_discovery -- sma/discovery.sh@443 -- # jq -r '. | length' 00:15:03.845 00:22:34 sma.sma_discovery -- sma/discovery.sh@443 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_discovery_info 00:15:03.845 00:22:34 sma.sma_discovery -- sma/discovery.sh@443 -- # [[ 0 -eq 0 ]] 00:15:03.845 00:22:34 sma.sma_discovery -- sma/discovery.sh@444 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node2 a68782f9-061e-4a42-a5ad-ff2c4f4b90b6 00:15:04.103 00:22:34 sma.sma_discovery -- sma/discovery.sh@445 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:node2 39666654-2047-4fce-a292-09d39833e871 00:15:04.361 00:22:34 sma.sma_discovery -- sma/discovery.sh@447 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:local0 00:15:04.361 00:22:34 sma.sma_discovery -- sma/discovery.sh@95 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:04.361 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:04.361 I0000 00:00:1728426154.975577 2113527 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:04.361 I0000 00:00:1728426154.977179 2113527 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:04.634 {} 00:15:04.634 00:22:35 sma.sma_discovery -- sma/discovery.sh@449 -- # cleanup 00:15:04.634 00:22:35 sma.sma_discovery -- sma/discovery.sh@27 -- # killprocess 2104029 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@950 -- # '[' -z 2104029 ']' 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@954 -- # kill -0 2104029 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@955 -- # uname 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2104029 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@956 -- # process_name=python3 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2104029' 00:15:04.634 killing process with pid 2104029 00:15:04.634 00:22:35 sma.sma_discovery -- 
common/autotest_common.sh@969 -- # kill 2104029 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@974 -- # wait 2104029 00:15:04.634 00:22:35 sma.sma_discovery -- sma/discovery.sh@28 -- # killprocess 2104028 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@950 -- # '[' -z 2104028 ']' 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@954 -- # kill -0 2104028 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@955 -- # uname 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2104028 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2104028' 00:15:04.634 killing process with pid 2104028 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@969 -- # kill 2104028 00:15:04.634 00:22:35 sma.sma_discovery -- common/autotest_common.sh@974 -- # wait 2104028 00:15:07.169 00:22:37 sma.sma_discovery -- sma/discovery.sh@29 -- # killprocess 2104026 00:15:07.169 00:22:37 sma.sma_discovery -- common/autotest_common.sh@950 -- # '[' -z 2104026 ']' 00:15:07.169 00:22:37 sma.sma_discovery -- common/autotest_common.sh@954 -- # kill -0 2104026 00:15:07.169 00:22:37 sma.sma_discovery -- common/autotest_common.sh@955 -- # uname 00:15:07.169 00:22:37 sma.sma_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.169 00:22:37 sma.sma_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2104026 00:15:07.169 00:22:37 sma.sma_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:07.169 00:22:37 sma.sma_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:07.169 00:22:37 sma.sma_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2104026' 00:15:07.169 killing process with pid 2104026 00:15:07.169 00:22:37 sma.sma_discovery -- common/autotest_common.sh@969 -- # kill 2104026 00:15:07.169 00:22:37 sma.sma_discovery -- common/autotest_common.sh@974 -- # wait 2104026 00:15:09.710 00:22:40 sma.sma_discovery -- sma/discovery.sh@30 -- # killprocess 2104027 00:15:09.710 00:22:40 sma.sma_discovery -- common/autotest_common.sh@950 -- # '[' -z 2104027 ']' 00:15:09.710 00:22:40 sma.sma_discovery -- common/autotest_common.sh@954 -- # kill -0 2104027 00:15:09.710 00:22:40 sma.sma_discovery -- common/autotest_common.sh@955 -- # uname 00:15:09.710 00:22:40 sma.sma_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.710 00:22:40 sma.sma_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2104027 00:15:09.710 00:22:40 sma.sma_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:09.710 00:22:40 sma.sma_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:09.710 00:22:40 sma.sma_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2104027' 00:15:09.710 killing process with pid 2104027 00:15:09.710 00:22:40 sma.sma_discovery -- common/autotest_common.sh@969 -- # kill 2104027 00:15:09.710 00:22:40 sma.sma_discovery -- common/autotest_common.sh@974 -- # wait 2104027 00:15:12.283 00:22:42 sma.sma_discovery -- 
sma/discovery.sh@450 -- # trap - SIGINT SIGTERM EXIT 00:15:12.283 00:15:12.283 real 1m0.194s 00:15:12.283 user 3m9.460s 00:15:12.283 sys 0m7.511s 00:15:12.283 00:22:42 sma.sma_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:12.283 00:22:42 sma.sma_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.283 ************************************ 00:15:12.283 END TEST sma_discovery 00:15:12.283 ************************************ 00:15:12.283 00:22:42 sma -- sma/sma.sh@15 -- # run_test sma_vhost /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh 00:15:12.283 00:22:42 sma -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:12.283 00:22:42 sma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:12.283 00:22:42 sma -- common/autotest_common.sh@10 -- # set +x 00:15:12.283 ************************************ 00:15:12.283 START TEST sma_vhost 00:15:12.283 ************************************ 00:15:12.283 00:22:42 sma.sma_vhost -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh 00:15:12.283 * Looking for test storage... 00:15:12.283 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma 00:15:12.283 00:22:42 sma.sma_vhost -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:12.283 00:22:42 sma.sma_vhost -- common/autotest_common.sh@1681 -- # lcov --version 00:15:12.283 00:22:42 sma.sma_vhost -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:12.542 00:22:42 sma.sma_vhost -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@344 -- # case "$op" in 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@345 -- # : 1 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@365 -- # decimal 1 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@353 -- # local d=1 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@355 -- # echo 1 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@366 -- # decimal 2 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@353 -- # local d=2 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@355 -- # echo 2 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.542 00:22:42 sma.sma_vhost -- scripts/common.sh@368 -- # return 0 00:15:12.542 00:22:42 sma.sma_vhost -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.542 00:22:42 sma.sma_vhost -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:12.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.542 --rc genhtml_branch_coverage=1 00:15:12.542 --rc genhtml_function_coverage=1 00:15:12.542 --rc genhtml_legend=1 00:15:12.542 --rc geninfo_all_blocks=1 00:15:12.542 --rc geninfo_unexecuted_blocks=1 00:15:12.542 00:15:12.542 ' 00:15:12.542 00:22:42 sma.sma_vhost -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:12.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.542 --rc genhtml_branch_coverage=1 00:15:12.542 --rc genhtml_function_coverage=1 00:15:12.542 --rc genhtml_legend=1 00:15:12.542 --rc geninfo_all_blocks=1 00:15:12.542 --rc geninfo_unexecuted_blocks=1 00:15:12.542 00:15:12.542 ' 00:15:12.542 00:22:42 sma.sma_vhost -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:12.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.542 --rc genhtml_branch_coverage=1 00:15:12.542 --rc genhtml_function_coverage=1 00:15:12.542 --rc genhtml_legend=1 00:15:12.542 --rc geninfo_all_blocks=1 00:15:12.542 --rc geninfo_unexecuted_blocks=1 00:15:12.542 00:15:12.542 ' 00:15:12.542 00:22:42 sma.sma_vhost -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:12.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.542 --rc genhtml_branch_coverage=1 00:15:12.542 --rc genhtml_function_coverage=1 00:15:12.542 --rc genhtml_legend=1 00:15:12.542 --rc geninfo_all_blocks=1 00:15:12.542 --rc geninfo_unexecuted_blocks=1 00:15:12.542 00:15:12.542 ' 00:15:12.542 00:22:42 sma.sma_vhost -- sma/vhost_blk.sh@10 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common.sh 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@6 -- # : false 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@7 -- # : /root/vhost_test 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@8 -- # : /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@9 -- # : qemu-img 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@11 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/.. 
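The xtrace above is scripts/common.sh picking lcov options: 'lt 1.15 2' splits both version strings on '.', '-' and ':' (IFS=.-:) and compares them field by field, and since 1 < 2 the legacy '--rc lcov_branch_coverage=1' option spelling is selected. A minimal bash re-creation of that field-wise compare follows; the function name and layout here are illustrative, not the literal common.sh source.

    # Field-wise "less than" for dotted version strings, as traced above.
    version_lt() {
        local IFS='.-:' v
        read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
        read -ra ver2 <<< "$2"    # "2"    -> (2)
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            ((a < b)) && return 0                   # strictly older: less-than holds
            ((a > b)) && return 1                   # strictly newer: it does not
        done
        return 1                                    # equal: not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc option names"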
00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@11 -- # TEST_DIR=/var/jenkins/workspace/vfio-user-phy-autotest 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@12 -- # VM_DIR=/root/vhost_test/vms 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@13 -- # TARGET_DIR=/root/vhost_test/vhost 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@14 -- # VM_PASSWORD=root 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@16 -- # VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@17 -- # FIO_BIN=/usr/src/fio-static/fio 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@19 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/vhost_blk.sh 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@19 -- # readlink -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@19 -- # WORKDIR=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@21 -- # hash qemu-img /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@26 -- # mkdir -p /root/vhost_test 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@27 -- # mkdir -p /root/vhost_test/vms 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@28 -- # mkdir -p /root/vhost_test/vhost 00:15:12.542 00:22:42 sma.sma_vhost -- vhost/common.sh@33 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/vhost/common/autotest.config 00:15:12.542 00:22:42 sma.sma_vhost -- common/autotest.config@1 -- # vhost_0_reactor_mask='[0]' 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@2 -- # vhost_0_main_core=0 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@4 -- # VM_0_qemu_mask=1-2 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@5 -- # VM_0_qemu_numa_node=0 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@7 -- # VM_1_qemu_mask=3-4 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@8 -- # VM_1_qemu_numa_node=0 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@10 -- # VM_2_qemu_mask=5-6 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@11 -- # VM_2_qemu_numa_node=0 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@13 -- # VM_3_qemu_mask=7-8 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@14 -- # VM_3_qemu_numa_node=0 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@16 -- # VM_4_qemu_mask=9-10 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@17 -- # VM_4_qemu_numa_node=0 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@19 -- # VM_5_qemu_mask=11-12 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@20 -- # VM_5_qemu_numa_node=0 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@22 -- # VM_6_qemu_mask=13-14 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@23 -- # VM_6_qemu_numa_node=1 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@25 -- # VM_7_qemu_mask=15-16 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@26 -- # VM_7_qemu_numa_node=1 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@28 -- # VM_8_qemu_mask=17-18 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@29 -- # VM_8_qemu_numa_node=1 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@31 -- # VM_9_qemu_mask=19-20 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@32 -- 
# VM_9_qemu_numa_node=1 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@34 -- # VM_10_qemu_mask=21-22 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@35 -- # VM_10_qemu_numa_node=1 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@37 -- # VM_11_qemu_mask=23-24 00:15:12.543 00:22:42 sma.sma_vhost -- common/autotest.config@38 -- # VM_11_qemu_numa_node=1 00:15:12.543 00:22:42 sma.sma_vhost -- vhost/common.sh@34 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/common.sh 00:15:12.543 00:22:42 sma.sma_vhost -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:15:12.543 00:22:42 sma.sma_vhost -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:15:12.543 00:22:42 sma.sma_vhost -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:15:12.543 00:22:42 sma.sma_vhost -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/event/scheduler/scheduler 00:15:12.543 00:22:42 sma.sma_vhost -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:15:12.543 00:22:42 sma.sma_vhost -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/scheduler/cgroups.sh 00:15:12.543 00:22:42 sma.sma_vhost -- scheduler/cgroups.sh@243 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:15:12.543 00:22:42 sma.sma_vhost -- scheduler/cgroups.sh@244 -- # check_cgroup 00:15:12.543 00:22:43 sma.sma_vhost -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:15:12.543 00:22:43 sma.sma_vhost -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:15:12.543 00:22:43 sma.sma_vhost -- scheduler/cgroups.sh@10 -- # echo 2 00:15:12.543 00:22:43 sma.sma_vhost -- scheduler/cgroups.sh@244 -- # cgroup_version=2 00:15:12.543 00:22:43 sma.sma_vhost -- sma/vhost_blk.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh 00:15:12.543 00:22:43 sma.sma_vhost -- sma/vhost_blk.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:12.543 00:22:43 sma.sma_vhost -- sma/vhost_blk.sh@49 -- # vm_no=0 00:15:12.543 00:22:43 sma.sma_vhost -- sma/vhost_blk.sh@50 -- # bus_size=32 00:15:12.543 00:22:43 sma.sma_vhost -- sma/vhost_blk.sh@52 -- # timing_enter setup_vm 00:15:12.543 00:22:43 sma.sma_vhost -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:12.543 00:22:43 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:12.543 00:22:43 sma.sma_vhost -- sma/vhost_blk.sh@54 -- # vm_setup --force=0 --disk-type=virtio '--qemu-args=-qmp tcp:localhost:9090,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1' --os=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@511 -- # xtrace_disable 00:15:12.543 00:22:43 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:12.543 INFO: Creating new VM in /root/vhost_test/vms/0 00:15:12.543 INFO: No '--os-mode' parameter provided - using 'snapshot' 00:15:12.543 INFO: TASK MASK: 1-2 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@664 -- # local node_num=0 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@665 -- # local boot_disk_present=false 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@666 -- # notice 'NUMA NODE: 0' 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'NUMA NODE: 0' 00:15:12.543 00:22:43 sma.sma_vhost -- 
vhost/common.sh@60 -- # local verbose_out 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@61 -- # false 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out= 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@70 -- # shift 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: NUMA NODE: 0' 00:15:12.543 INFO: NUMA NODE: 0 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@667 -- # cmd+=(-m "$guest_memory" --enable-kvm -cpu host -smp "$cpu_num" -vga std -vnc ":$vnc_socket" -daemonize) 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@668 -- # cmd+=(-object "memory-backend-file,id=mem,size=${guest_memory}M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=$node_num,policy=bind") 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@669 -- # [[ snapshot == snapshot ]] 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@669 -- # cmd+=(-snapshot) 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@670 -- # [[ -n '' ]] 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@671 -- # cmd+=(-monitor "telnet:127.0.0.1:$monitor_port,server,nowait") 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@672 -- # cmd+=(-numa "node,memdev=mem") 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@673 -- # cmd+=(-pidfile "$qemu_pid_file") 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@674 -- # cmd+=(-serial "file:$vm_dir/serial.log") 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@675 -- # cmd+=(-D "$vm_dir/qemu.log") 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@676 -- # cmd+=(-chardev "file,path=$vm_dir/seabios.log,id=seabios" -device "isa-debugcon,iobase=0x402,chardev=seabios") 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@677 -- # cmd+=(-net "user,hostfwd=tcp::$ssh_socket-:22,hostfwd=tcp::$fio_socket-:8765") 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@678 -- # cmd+=(-net nic) 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@679 -- # [[ -z '' ]] 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@680 -- # cmd+=(-drive "file=$os,if=none,id=os_disk") 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@681 -- # cmd+=(-device "ide-hd,drive=os_disk,bootindex=0") 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@684 -- # (( 0 == 0 )) 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@684 -- # [[ virtio == virtio* ]] 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@685 -- # disks=("default_virtio.img") 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@691 -- # for disk in "${disks[@]}" 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@694 -- # IFS=, 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@694 -- # read -r disk disk_type _ 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@695 -- # [[ -z '' ]] 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@695 -- # disk_type=virtio 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@697 -- # case $disk_type in 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@699 -- # local raw_name=RAWSCSI 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@700 -- # local raw_disk=/root/vhost_test/vms/0/test.img 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@703 -- # [[ -f default_virtio.img ]] 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@707 -- # notice 'Creating Virtio disc /root/vhost_test/vms/0/test.img' 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Creating 
Virtio disc /root/vhost_test/vms/0/test.img' 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@61 -- # false 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out= 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@70 -- # shift 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img' 00:15:12.543 INFO: Creating Virtio disc /root/vhost_test/vms/0/test.img 00:15:12.543 00:22:43 sma.sma_vhost -- vhost/common.sh@708 -- # dd if=/dev/zero of=/root/vhost_test/vms/0/test.img bs=1024k count=1024 00:15:13.126 1024+0 records in 00:15:13.126 1024+0 records out 00:15:13.126 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.404576 s, 2.7 GB/s 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@711 -- # cmd+=(-device "virtio-scsi-pci,num_queues=$queue_number") 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@712 -- # cmd+=(-device "scsi-hd,drive=hd$i,vendor=$raw_name") 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@713 -- # cmd+=(-drive "if=none,id=hd$i,file=$raw_disk,format=raw$raw_cache") 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@773 -- # [[ -n '' ]] 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@778 -- # (( 1 )) 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@778 -- # cmd+=("${qemu_args[@]}") 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@779 -- # notice 'Saving to /root/vhost_test/vms/0/run.sh' 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Saving to /root/vhost_test/vms/0/run.sh' 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@61 -- # false 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out= 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@70 -- # shift 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Saving to /root/vhost_test/vms/0/run.sh' 00:15:13.126 INFO: Saving to /root/vhost_test/vms/0/run.sh 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@780 -- # cat 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@780 -- # printf '%s\n' taskset -a -c 1-2 /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 -m 1024 --enable-kvm -cpu host -smp 2 -vga std -vnc :100 -daemonize -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on,prealloc=yes,host-nodes=0,policy=bind -snapshot -monitor telnet:127.0.0.1:10002,server,nowait -numa node,memdev=mem -pidfile /root/vhost_test/vms/0/qemu.pid -serial file:/root/vhost_test/vms/0/serial.log -D /root/vhost_test/vms/0/qemu.log -chardev file,path=/root/vhost_test/vms/0/seabios.log,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios -net user,hostfwd=tcp::10000-:22,hostfwd=tcp::10001-:8765 -net nic -drive file=/var/spdk/dependencies/vhost/spdk_test_image.qcow2,if=none,id=os_disk -device ide-hd,drive=os_disk,bootindex=0 -device virtio-scsi-pci,num_queues=2 -device scsi-hd,drive=hd,vendor=RAWSCSI -drive if=none,id=hd,file=/root/vhost_test/vms/0/test.img,format=raw '-qmp tcp:localhost:9090,server,nowait -device pci-bridge,chassis_nr=1,id=pci.spdk.0 -device pci-bridge,chassis_nr=2,id=pci.spdk.1' 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@817 -- # chmod 
+x /root/vhost_test/vms/0/run.sh 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@820 -- # echo 10000 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@821 -- # echo 10001 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@822 -- # echo 10002 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@824 -- # rm -f /root/vhost_test/vms/0/migration_port 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@825 -- # [[ -z '' ]] 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@827 -- # echo 10004 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@828 -- # echo 100 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@830 -- # [[ -z '' ]] 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@831 -- # [[ -z '' ]] 00:15:13.126 00:22:43 sma.sma_vhost -- sma/vhost_blk.sh@59 -- # vm_run 0 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@835 -- # local OPTIND optchar vm 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@836 -- # local run_all=false 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@837 -- # local vms_to_run= 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@839 -- # getopts a-: optchar 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@849 -- # false 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@852 -- # shift 0 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@853 -- # for vm in "$@" 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@854 -- # vm_num_is_valid 0 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@302 -- # return 0 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@855 -- # [[ ! -x /root/vhost_test/vms/0/run.sh ]] 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@859 -- # vms_to_run+=' 0' 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@863 -- # for vm in $vms_to_run 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@864 -- # vm_is_running 0 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@362 -- # vm_num_is_valid 0 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@302 -- # return 0 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@363 -- # local vm_dir=/root/vhost_test/vms/0 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@365 -- # [[ ! 
-r /root/vhost_test/vms/0/qemu.pid ]] 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@366 -- # return 1 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@869 -- # notice 'running /root/vhost_test/vms/0/run.sh' 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'running /root/vhost_test/vms/0/run.sh' 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@61 -- # false 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out= 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@70 -- # shift 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: running /root/vhost_test/vms/0/run.sh' 00:15:13.126 INFO: running /root/vhost_test/vms/0/run.sh 00:15:13.126 00:22:43 sma.sma_vhost -- vhost/common.sh@870 -- # /root/vhost_test/vms/0/run.sh 00:15:13.126 Running VM in /root/vhost_test/vms/0 00:15:13.385 Waiting for QEMU pid file 00:15:14.318 === qemu.log === 00:15:14.318 === qemu.log === 00:15:14.318 00:22:44 sma.sma_vhost -- sma/vhost_blk.sh@60 -- # vm_wait_for_boot 300 0 00:15:14.318 00:22:44 sma.sma_vhost -- vhost/common.sh@906 -- # assert_number 300 00:15:14.318 00:22:44 sma.sma_vhost -- vhost/common.sh@274 -- # [[ 300 =~ [0-9]+ ]] 00:15:14.318 00:22:44 sma.sma_vhost -- vhost/common.sh@274 -- # return 0 00:15:14.318 00:22:44 sma.sma_vhost -- vhost/common.sh@908 -- # xtrace_disable 00:15:14.318 00:22:44 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:14.318 INFO: Waiting for VMs to boot 00:15:14.318 INFO: waiting for VM0 (/root/vhost_test/vms/0) 00:15:36.239 00:15:36.239 INFO: VM0 ready 00:15:36.239 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:15:36.239 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:15:36.239 INFO: all VMs ready 00:15:36.239 00:23:06 sma.sma_vhost -- vhost/common.sh@966 -- # return 0 00:15:36.239 00:23:06 sma.sma_vhost -- sma/vhost_blk.sh@61 -- # timing_exit setup_vm 00:15:36.239 00:23:06 sma.sma_vhost -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:36.239 00:23:06 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:36.239 00:23:06 sma.sma_vhost -- sma/vhost_blk.sh@64 -- # vhostpid=2118897 00:15:36.239 00:23:06 sma.sma_vhost -- sma/vhost_blk.sh@66 -- # waitforlisten 2118897 00:15:36.239 00:23:06 sma.sma_vhost -- sma/vhost_blk.sh@63 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/vhost -S /var/tmp -m 0x3 --wait-for-rpc 00:15:36.239 00:23:06 sma.sma_vhost -- common/autotest_common.sh@831 -- # '[' -z 2118897 ']' 00:15:36.239 00:23:06 sma.sma_vhost -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.239 00:23:06 sma.sma_vhost -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:36.239 00:23:06 sma.sma_vhost -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.239 00:23:06 sma.sma_vhost -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:36.239 00:23:06 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:36.239 [2024-10-09 00:23:06.676334] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 
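At this point the test has forked the SPDK vhost target (pid 2118897) with cpumask 0x3 and --wait-for-rpc, and waitforlisten blocks until the RPC socket answers. Below is a simplified stand-in for that launch-and-wait step, with paths relative to the spdk checkout; the polling loop is an approximation of the waitforlisten helper, not its exact code.

    # Start the vhost target in RPC-wait mode, then poll its default RPC
    # socket (/var/tmp/spdk.sock) until the app responds.
    build/bin/vhost -S /var/tmp -m 0x3 --wait-for-rpc &
    vhostpid=$!
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
    # Accel/cryptodev modules are configured between these two calls (see the
    # RPCs traced below); framework_start_init then completes subsystem init.
    scripts/rpc.py framework_start_init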
00:15:36.239 [2024-10-09 00:23:06.676422] [ DPDK EAL parameters: vhost --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118897 ] 00:15:36.239 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.239 [2024-10-09 00:23:06.782985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:36.498 [2024-10-09 00:23:06.988713] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.498 [2024-10-09 00:23:06.988722] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@864 -- # return 0 00:15:37.064 00:23:07 sma.sma_vhost -- sma/vhost_blk.sh@69 -- # rpc_cmd dpdk_cryptodev_scan_accel_module 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.064 00:23:07 sma.sma_vhost -- sma/vhost_blk.sh@70 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:37.064 [2024-10-09 00:23:07.482675] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.064 00:23:07 sma.sma_vhost -- sma/vhost_blk.sh@71 -- # rpc_cmd accel_assign_opc -o encrypt -m dpdk_cryptodev 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:37.064 [2024-10-09 00:23:07.490690] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.064 00:23:07 sma.sma_vhost -- sma/vhost_blk.sh@72 -- # rpc_cmd accel_assign_opc -o decrypt -m dpdk_cryptodev 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:37.064 [2024-10-09 00:23:07.498711] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.064 00:23:07 sma.sma_vhost -- sma/vhost_blk.sh@73 -- # rpc_cmd framework_start_init 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.064 00:23:07 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:37.322 [2024-10-09 00:23:07.704053] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1 00:15:37.322 00:23:07 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.322 00:23:07 sma.sma_vhost -- sma/vhost_blk.sh@93 -- # smapid=2119126 00:15:37.322 00:23:07 sma.sma_vhost -- sma/vhost_blk.sh@96 -- # sma_waitforlisten 00:15:37.322 00:23:07 sma.sma_vhost -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:15:37.322 00:23:07 sma.sma_vhost -- sma/common.sh@8 -- 
# local sma_port=8080 00:15:37.322 00:23:07 sma.sma_vhost -- sma/common.sh@10 -- # (( i = 0 )) 00:15:37.322 00:23:07 sma.sma_vhost -- sma/vhost_blk.sh@75 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:15:37.322 00:23:07 sma.sma_vhost -- sma/vhost_blk.sh@75 -- # cat 00:15:37.322 00:23:07 sma.sma_vhost -- sma/common.sh@10 -- # (( i < 5 )) 00:15:37.322 00:23:07 sma.sma_vhost -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:15:37.322 00:23:07 sma.sma_vhost -- sma/common.sh@14 -- # sleep 1s 00:15:37.580 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:37.580 I0000 00:00:1728426188.071663 2119126 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:38.511 00:23:08 sma.sma_vhost -- sma/common.sh@10 -- # (( i++ )) 00:15:38.511 00:23:08 sma.sma_vhost -- sma/common.sh@10 -- # (( i < 5 )) 00:15:38.511 00:23:08 sma.sma_vhost -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:15:38.512 00:23:08 sma.sma_vhost -- sma/common.sh@12 -- # return 0 00:15:38.512 00:23:08 sma.sma_vhost -- sma/vhost_blk.sh@99 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l' 00:15:38.512 00:23:08 sma.sma_vhost -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:15:38.512 00:23:08 sma.sma_vhost -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:38.512 00:23:08 sma.sma_vhost -- vhost/common.sh@302 -- # return 0 00:15:38.512 00:23:08 sma.sma_vhost -- vhost/common.sh@331 -- # local vm_num=0 00:15:38.512 00:23:08 sma.sma_vhost -- vhost/common.sh@332 -- # shift 00:15:38.512 00:23:08 sma.sma_vhost -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:15:38.512 00:23:08 sma.sma_vhost -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:15:38.512 00:23:08 sma.sma_vhost -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:38.512 00:23:08 sma.sma_vhost -- vhost/common.sh@302 -- # return 0 00:15:38.512 00:23:08 sma.sma_vhost -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:15:38.512 00:23:08 sma.sma_vhost -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:15:38.512 00:23:08 sma.sma_vhost -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l' 00:15:38.512 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
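The ssh exchange above is vm_exec counting virtio-blk devices inside the guest; the [[ 0 -eq 0 ]] check just below asserts that none exist before any volume is attached. A self-contained sketch of the same probe, using the port and credentials the log prints (the helper name is ours):

    # Count virtio-blk devices (vda, vdb, ...) through the hostfwd'ed
    # ssh port 10000 set up in the QEMU command line earlier.
    count_guest_vd() {
        sshpass -p root ssh -o UserKnownHostsFile=/dev/null \
            -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 \
            'lsblk | grep -E "^vd." | wc -l'
    }
    [[ $(count_guest_vd) -eq 0 ]] && echo "no virtio-blk disks yet, as expected"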
00:15:38.512 00:23:09 sma.sma_vhost -- sma/vhost_blk.sh@99 -- # [[ 0 -eq 0 ]] 00:15:38.512 00:23:09 sma.sma_vhost -- sma/vhost_blk.sh@102 -- # rpc_cmd bdev_null_create null0 100 4096 00:15:38.512 00:23:09 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.512 00:23:09 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:38.512 null0 00:15:38.512 00:23:09 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.512 00:23:09 sma.sma_vhost -- sma/vhost_blk.sh@103 -- # rpc_cmd bdev_null_create null1 100 4096 00:15:38.512 00:23:09 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.512 00:23:09 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:38.769 null1 00:15:38.769 00:23:09 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.769 00:23:09 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # rpc_cmd bdev_get_bdevs -b null0 00:15:38.769 00:23:09 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # jq -r '.[].uuid' 00:15:38.769 00:23:09 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.769 00:23:09 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:38.769 00:23:09 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.769 00:23:09 sma.sma_vhost -- sma/vhost_blk.sh@104 -- # uuid=0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:38.769 00:23:09 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # rpc_cmd bdev_get_bdevs -b null1 00:15:38.769 00:23:09 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.769 00:23:09 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:38.769 00:23:09 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # jq -r '.[].uuid' 00:15:38.769 00:23:09 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.769 00:23:09 sma.sma_vhost -- sma/vhost_blk.sh@105 -- # uuid2=76425f2d-0443-4100-9bfa-e74e69fa648b 00:15:38.769 00:23:09 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # create_device 0 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:38.769 00:23:09 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # jq -r .handle 00:15:38.769 00:23:09 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:38.769 00:23:09 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:38.769 00:23:09 sma.sma_vhost -- sma/common.sh@20 -- # python 00:15:39.027 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:39.027 I0000 00:00:1728426189.461976 2119395 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:39.027 I0000 00:00:1728426189.463452 2119395 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:39.027 I0000 00:00:1728426189.465182 2119400 subchannel.cc:806] subchannel 0x565149001220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x565148f12670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x565148fa0cc0, grpc.internal.client_channel_call_destination=0x7fd40f5ae390, grpc.internal.event_engine=0x565148eb0190, grpc.internal.security_connector=0x565148eba6e0, grpc.internal.subchannel_pool=0x565149028cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x565148df75c0, grpc.server_uri=dns:///localhost:8080}}: 
connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:09.464156512+02:00"}), backing off for 999 ms 00:15:39.027 VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 252 00:15:39.027 VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 83 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:256 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:257 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG 00:15:39.961 00:23:10 sma.sma_vhost -- sma/vhost_blk.sh@108 -- # devid0=virtio_blk:sma-0 00:15:39.961 00:23:10 sma.sma_vhost -- sma/vhost_blk.sh@109 -- # rpc_cmd vhost_get_controllers -n sma-0 00:15:39.961 00:23:10 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.961 00:23:10 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:39.961 [ 00:15:39.961 { 00:15:39.961 "ctrlr": "sma-0", 00:15:39.961 "cpumask": "0x3", 00:15:39.961 "delay_base_us": 0, 00:15:39.961 "iops_threshold": 60000, 00:15:39.961 "socket": "/var/tmp/sma-0", 00:15:39.961 "sessions": [ 00:15:39.961 { 00:15:39.961 "vid": 0, 00:15:39.961 "id": 0, 00:15:39.961 "name": "sma-0s0", 00:15:39.961 "started": false, 00:15:39.961 "max_queues": 0, 00:15:39.961 "inflight_task_cnt": 0 00:15:39.961 } 00:15:39.961 ], 00:15:39.961 "backend_specific": { 00:15:39.961 "block": { 00:15:39.961 "readonly": false, 00:15:39.961 "bdev": "null0", 00:15:39.961 "transport": "vhost_user_blk" 00:15:39.961 } 00:15:39.961 } 00:15:39.961 } 00:15:39.961 ] 00:15:39.961 00:23:10 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.961 00:23:10 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # create_device 1 76425f2d-0443-4100-9bfa-e74e69fa648b 00:15:39.961 00:23:10 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # jq -r .handle 00:15:39.961 00:23:10 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:39.961 00:23:10 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 76425f2d-0443-4100-9bfa-e74e69fa648b 00:15:39.961 00:23:10 sma.sma_vhost -- sma/common.sh@20 -- # python 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message 
VHOST_USER_GET_STATUS 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008): 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -RESET: 0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -ACKNOWLEDGE: 0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER: 0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -FEATURES_OK: 1 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER_OK: 0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -DEVICE_NEED_RESET: 0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -FAILED: 0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 82 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 258 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:82 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:256 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) guest physical addr: 0x0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) guest virtual addr: 0x7f3337e00000 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) host virtual addr: 0x7f6f15400000 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) mmap addr : 0x7f6f15400000 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) mmap size : 0x40000000 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) mmap align: 0x200000 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) mmap off : 0x0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0. 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:259 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0. 
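The VHOST_CONFIG handshake above (continuing below) is QEMU's vhost-user-blk driver negotiating features, memory regions, and vrings with the sma-0 controller that the CreateDevice call set up over bdev null0. For reference, an RPC-level equivalent of that device setup, bypassing the SMA gRPC layer; the controller and bdev names match the log, and the SMA service issues comparable calls internally, though details may differ.

    # Expose a null bdev through a vhost-user-blk socket at /var/tmp/sma-0
    # (the target was started with -S /var/tmp, so the name maps to that path).
    scripts/rpc.py bdev_null_create null0 100 4096               # 100 MiB, 4 KiB blocks
    scripts/rpc.py vhost_create_blk_controller --cpumask 0x3 sma-0 null0
    scripts/rpc.py vhost_get_controllers -n sma-0                # session state as listed below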
00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:260 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f): 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -RESET: 0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -ACKNOWLEDGE: 1 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER: 1 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -FEATURES_OK: 1 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER_OK: 1 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -DEVICE_NEED_RESET: 0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) -FAILED: 0 00:15:39.961 VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing. 00:15:40.220 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:40.220 I0000 00:00:1728426190.787681 2119650 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:40.220 I0000 00:00:1728426190.789196 2119650 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:40.220 I0000 00:00:1728426190.790912 2119653 subchannel.cc:806] subchannel 0x56035870d220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56035861e670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5603586accc0, grpc.internal.client_channel_call_destination=0x7f2959a1d390, grpc.internal.event_engine=0x5603585bc190, grpc.internal.security_connector=0x5603585c66e0, grpc.internal.subchannel_pool=0x560358734cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5603585035c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:10.789898144+02:00"}), backing off for 1000 ms 00:15:40.220 VHOST_CONFIG: (/var/tmp/sma-1) vhost-user server: socket created, fd: 263 00:15:40.220 VHOST_CONFIG: (/var/tmp/sma-1) binding succeeded 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) new vhost user connection is 261 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) new device, handle is 1 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_FEATURES 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_PROTOCOL_FEATURES 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_PROTOCOL_FEATURES 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) negotiated Vhost-user protocol features: 0x11ebf 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_QUEUE_NUM 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_BACKEND_REQ_FD 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_OWNER 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_FEATURES 00:15:41.157 
VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:0 file:265 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ERR 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:1 file:266 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ERR 00:15:41.157 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_CONFIG 00:15:41.157 00:23:11 sma.sma_vhost -- sma/vhost_blk.sh@111 -- # devid1=virtio_blk:sma-1 00:15:41.157 00:23:11 sma.sma_vhost -- sma/vhost_blk.sh@112 -- # rpc_cmd vhost_get_controllers -n sma-0 00:15:41.157 00:23:11 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.157 00:23:11 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:41.157 [ 00:15:41.157 { 00:15:41.157 "ctrlr": "sma-0", 00:15:41.157 "cpumask": "0x3", 00:15:41.157 "delay_base_us": 0, 00:15:41.158 "iops_threshold": 60000, 00:15:41.158 "socket": "/var/tmp/sma-0", 00:15:41.158 "sessions": [ 00:15:41.158 { 00:15:41.158 "vid": 0, 00:15:41.158 "id": 0, 00:15:41.158 "name": "sma-0s0", 00:15:41.158 "started": true, 00:15:41.158 "max_queues": 2, 00:15:41.158 "inflight_task_cnt": 0 00:15:41.158 } 00:15:41.158 ], 00:15:41.158 "backend_specific": { 00:15:41.158 "block": { 00:15:41.158 "readonly": false, 00:15:41.158 "bdev": "null0", 00:15:41.158 "transport": "vhost_user_blk" 00:15:41.158 } 00:15:41.158 } 00:15:41.158 } 00:15:41.158 ] 00:15:41.158 00:23:11 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.158 00:23:11 sma.sma_vhost -- sma/vhost_blk.sh@113 -- # rpc_cmd vhost_get_controllers -n sma-1 00:15:41.158 00:23:11 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.158 00:23:11 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:41.158 [ 00:15:41.158 { 00:15:41.158 "ctrlr": "sma-1", 00:15:41.158 "cpumask": "0x3", 00:15:41.158 "delay_base_us": 0, 00:15:41.158 "iops_threshold": 60000, 00:15:41.158 "socket": "/var/tmp/sma-1", 00:15:41.158 "sessions": [ 00:15:41.158 { 00:15:41.158 "vid": 1, 00:15:41.158 "id": 0, 00:15:41.158 "name": "sma-1s1", 00:15:41.158 "started": false, 00:15:41.158 "max_queues": 0, 00:15:41.158 "inflight_task_cnt": 0 00:15:41.158 } 00:15:41.158 ], 00:15:41.158 "backend_specific": { 00:15:41.158 "block": { 00:15:41.158 "readonly": false, 00:15:41.158 "bdev": "null1", 00:15:41.158 "transport": "vhost_user_blk" 00:15:41.158 } 00:15:41.158 } 00:15:41.158 } 00:15:41.158 ] 00:15:41.158 00:23:11 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.158 00:23:11 sma.sma_vhost -- sma/vhost_blk.sh@114 -- # [[ virtio_blk:sma-0 != \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\1 ]] 00:15:41.158 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_FEATURES 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) negotiated Virtio features: 0x150005446 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x00000008): 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) -RESET: 0 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) -ACKNOWLEDGE: 0 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) -DRIVER: 0 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) -FEATURES_OK: 1 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) -DRIVER_OK: 0 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) 
-DEVICE_NEED_RESET: 0 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) -FAILED: 0 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_INFLIGHT_FD 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) get_inflight_fd num_queues: 2 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) get_inflight_fd queue_size: 128 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) send inflight mmap_size: 4224 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) send inflight mmap_offset: 0 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) send inflight fd: 262 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_INFLIGHT_FD 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd mmap_size: 4224 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd mmap_offset: 0 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd num_queues: 2 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd queue_size: 128 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd fd: 267 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) set_inflight_fd pervq_inflight_size: 2112 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:0 file:262 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_CALL 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) vring call idx:1 file:265 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_FEATURES 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) negotiated Virtio features: 0x150005446 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_MEM_TABLE 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) guest memory region size: 0x40000000 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) guest physical addr: 0x0 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) guest virtual addr: 0x7f3337e00000 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) host virtual addr: 0x7f6ed5400000 00:15:41.159 VHOST_CONFIG: (/var/tmp/sma-1) mmap addr : 0x7f6ed5400000 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) mmap size : 0x40000000 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) mmap align: 0x200000 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) mmap off : 0x0 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_NUM 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_BASE 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:0 last_used_idx:0 last_avail_idx:0. 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ADDR 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_KICK 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) vring kick idx:0 file:268 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_NUM 00:15:41.160 00:23:11 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # rpc_cmd vhost_get_controllers 00:15:41.160 00:23:11 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # jq -r '. | length' 00:15:41.160 00:23:11 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.160 00:23:11 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_BASE 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:1 last_used_idx:0 last_avail_idx:0. 
00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ADDR 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_KICK 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) vring kick idx:1 file:269 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 1 to qp idx: 0 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 1 to qp idx: 1 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_STATUS 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x0000000f): 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) -RESET: 0 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) -ACKNOWLEDGE: 1 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) -DRIVER: 1 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) -FEATURES_OK: 1 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) -DRIVER_OK: 1 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) -DEVICE_NEED_RESET: 0 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) -FAILED: 0 00:15:41.160 VHOST_CONFIG: (/var/tmp/sma-1) virtio is now ready for processing. 00:15:41.160 00:23:11 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.160 00:23:11 sma.sma_vhost -- sma/vhost_blk.sh@117 -- # [[ 2 -eq 2 ]] 00:15:41.160 00:23:11 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # create_device 0 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:41.160 00:23:11 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # jq -r .handle 00:15:41.160 00:23:11 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:41.160 00:23:11 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:41.161 00:23:11 sma.sma_vhost -- sma/common.sh@20 -- # python 00:15:41.423 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:41.423 I0000 00:00:1728426191.913937 2119901 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:41.423 I0000 00:00:1728426191.915462 2119901 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:41.423 I0000 00:00:1728426191.917188 2119904 subchannel.cc:806] subchannel 0x559282624220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x559282535670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5592825c3cc0, grpc.internal.client_channel_call_destination=0x7fe81a610390, grpc.internal.event_engine=0x5592824d3190, grpc.internal.security_connector=0x5592824dd6e0, grpc.internal.subchannel_pool=0x55928264bcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55928241a5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:11.916168475+02:00"}), backing off for 1000 ms 00:15:41.423 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@121 -- # tmp0=virtio_blk:sma-0 00:15:41.423 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # create_device 1 76425f2d-0443-4100-9bfa-e74e69fa648b 00:15:41.423 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # jq -r .handle 00:15:41.423 00:23:12 
sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:41.423 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 76425f2d-0443-4100-9bfa-e74e69fa648b 00:15:41.423 00:23:12 sma.sma_vhost -- sma/common.sh@20 -- # python 00:15:41.681 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:41.681 I0000 00:00:1728426192.229885 2119927 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:41.681 I0000 00:00:1728426192.231723 2119927 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:41.681 I0000 00:00:1728426192.233530 2119933 subchannel.cc:806] subchannel 0x55bc5faf7220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55bc5fa08670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55bc5fa96cc0, grpc.internal.client_channel_call_destination=0x7f6cc8ac1390, grpc.internal.event_engine=0x55bc5f9a6190, grpc.internal.security_connector=0x55bc5f9b06e0, grpc.internal.subchannel_pool=0x55bc5fb1ecc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55bc5f8ed5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:12.232514879+02:00"}), backing off for 1000 ms 00:15:41.939 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@122 -- # tmp1=virtio_blk:sma-1 00:15:41.939 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@125 -- # NOT create_device 1 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:41.939 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@125 -- # jq -r .handle 00:15:41.939 00:23:12 sma.sma_vhost -- common/autotest_common.sh@650 -- # local es=0 00:15:41.939 00:23:12 sma.sma_vhost -- common/autotest_common.sh@652 -- # valid_exec_arg create_device 1 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:41.939 00:23:12 sma.sma_vhost -- common/autotest_common.sh@638 -- # local arg=create_device 00:15:41.939 00:23:12 sma.sma_vhost -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.939 00:23:12 sma.sma_vhost -- common/autotest_common.sh@642 -- # type -t create_device 00:15:41.939 00:23:12 sma.sma_vhost -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:41.939 00:23:12 sma.sma_vhost -- common/autotest_common.sh@653 -- # create_device 1 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:41.939 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:41.939 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:41.939 00:23:12 sma.sma_vhost -- sma/common.sh@20 -- # python 00:15:41.939 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:41.939 I0000 00:00:1728426192.553701 2119962 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:41.939 I0000 00:00:1728426192.555040 2119962 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:41.939 I0000 00:00:1728426192.556820 2120009 subchannel.cc:806] subchannel 
0x55d0caec8220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55d0cadd9670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55d0cae67cc0, grpc.internal.client_channel_call_destination=0x7f5135705390, grpc.internal.event_engine=0x55d0cad77190, grpc.internal.security_connector=0x55d0cad816e0, grpc.internal.subchannel_pool=0x55d0caeefcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55d0cacbe5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:12.555805899+02:00"}), backing off for 1000 ms 00:15:42.197 Traceback (most recent call last): 00:15:42.197 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module> 00:15:42.197 main(sys.argv[1:]) 00:15:42.197 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:15:42.198 result = client.call(request['method'], request.get('params', {})) 00:15:42.198 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:15:42.198 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:15:42.198 response = func(request=json_format.ParseDict(params, input())) 00:15:42.198 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:15:42.198 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:15:42.198 return _end_unary_response_blocking(state, call, False, None) 00:15:42.198 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:15:42.198 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:15:42.198 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:15:42.198 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:15:42.198 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:15:42.198 status = StatusCode.INTERNAL 00:15:42.198 details = "Failed to create vhost device" 00:15:42.198 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-10-09T00:23:12.605995204+02:00", grpc_status:13, grpc_message:"Failed to create vhost device"}" 00:15:42.198 > 00:15:42.198 00:23:12 sma.sma_vhost -- common/autotest_common.sh@653 -- # es=1 00:15:42.198 00:23:12 sma.sma_vhost -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:42.198 00:23:12 sma.sma_vhost -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:42.198 00:23:12 sma.sma_vhost -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:42.198 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@128 -- # vm_exec 0 'lsblk | grep -E "^vd." 
| wc -l' 00:15:42.198 00:23:12 sma.sma_vhost -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:15:42.198 00:23:12 sma.sma_vhost -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:42.198 00:23:12 sma.sma_vhost -- vhost/common.sh@302 -- # return 0 00:15:42.198 00:23:12 sma.sma_vhost -- vhost/common.sh@331 -- # local vm_num=0 00:15:42.198 00:23:12 sma.sma_vhost -- vhost/common.sh@332 -- # shift 00:15:42.198 00:23:12 sma.sma_vhost -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:15:42.198 00:23:12 sma.sma_vhost -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:15:42.198 00:23:12 sma.sma_vhost -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:42.198 00:23:12 sma.sma_vhost -- vhost/common.sh@302 -- # return 0 00:15:42.198 00:23:12 sma.sma_vhost -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:15:42.198 00:23:12 sma.sma_vhost -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:15:42.198 00:23:12 sma.sma_vhost -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l' 00:15:42.198 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:15:42.457 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@128 -- # [[ 2 -eq 2 ]] 00:15:42.457 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # rpc_cmd vhost_get_controllers 00:15:42.457 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # jq -r '. | length' 00:15:42.457 00:23:12 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.457 00:23:12 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:42.457 00:23:12 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.457 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@130 -- # [[ 2 -eq 2 ]] 00:15:42.457 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@131 -- # [[ virtio_blk:sma-0 == \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\0 ]] 00:15:42.457 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@132 -- # [[ virtio_blk:sma-1 == \v\i\r\t\i\o\_\b\l\k\:\s\m\a\-\1 ]] 00:15:42.457 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@135 -- # delete_device virtio_blk:sma-0 00:15:42.457 00:23:12 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:42.457 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:42.457 I0000 00:00:1728426193.049637 2120202 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:42.457 I0000 00:00:1728426193.050952 2120202 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:42.457 I0000 00:00:1728426193.052615 2120203 subchannel.cc:806] subchannel 0x55747a3c7220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55747a2d8670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55747a366cc0, grpc.internal.client_channel_call_destination=0x7fa0c3cfb390, grpc.internal.event_engine=0x55747a276190, grpc.internal.security_connector=0x55747a2806e0, grpc.internal.subchannel_pool=0x55747a3eecc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55747a1bd5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) 
{created_time:"2024-10-09T00:23:13.051602354+02:00"}), backing off for 1000 ms 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000): 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) -RESET: 1 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) -ACKNOWLEDGE: 0 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER: 0 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) -FEATURES_OK: 0 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER_OK: 0 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) -DEVICE_NEED_RESET: 0 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) -FAILED: 0 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:1 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE 00:15:42.457 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:49 00:15:42.716 VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed 00:15:42.716 {} 00:15:42.716 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@136 -- # NOT rpc_cmd vhost_get_controllers -n sma-0 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@650 -- # local es=0 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd vhost_get_controllers -n sma-0 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@653 -- # rpc_cmd vhost_get_controllers -n sma-0 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:42.716 request: 00:15:42.716 { 00:15:42.716 "name": "sma-0", 00:15:42.716 "method": "vhost_get_controllers", 00:15:42.716 "req_id": 1 00:15:42.716 } 00:15:42.716 Got JSON-RPC error response 00:15:42.716 response: 00:15:42.716 { 00:15:42.716 "code": -32603, 00:15:42.716 "message": "No such device" 00:15:42.716 } 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@653 -- # es=1 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:42.716 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # rpc_cmd vhost_get_controllers 00:15:42.716 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # jq -r '. 
| length' 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:42.716 00:23:13 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.716 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@137 -- # [[ 1 -eq 1 ]] 00:15:42.716 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@139 -- # delete_device virtio_blk:sma-1 00:15:42.716 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:42.975 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:42.975 I0000 00:00:1728426193.444772 2120233 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:42.975 I0000 00:00:1728426193.446230 2120233 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:42.975 I0000 00:00:1728426193.447878 2120243 subchannel.cc:806] subchannel 0x56048d115220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56048d026670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56048d0b4cc0, grpc.internal.client_channel_call_destination=0x7feb3b860390, grpc.internal.event_engine=0x56048cfc4190, grpc.internal.security_connector=0x56048cfce6e0, grpc.internal.subchannel_pool=0x56048d13ccc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56048cf0b5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:13.446869959+02:00"}), backing off for 1000 ms 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_STATUS 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) new device status(0x00000000): 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) -RESET: 1 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) -ACKNOWLEDGE: 0 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) -DRIVER: 0 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) -FEATURES_OK: 0 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) -DRIVER_OK: 0 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) -DEVICE_NEED_RESET: 0 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) -FAILED: 0 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 0 to qp idx: 0 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_SET_VRING_ENABLE 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) set queue enable: 0 to qp idx: 1 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_VRING_BASE 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:0 file:0 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) read message VHOST_USER_GET_VRING_BASE 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) vring base idx:1 file:50 00:15:42.975 VHOST_CONFIG: (/var/tmp/sma-1) vhost peer closed 00:15:42.975 {} 00:15:42.975 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@140 -- # NOT rpc_cmd vhost_get_controllers -n sma-1 00:15:42.975 00:23:13 sma.sma_vhost -- common/autotest_common.sh@650 -- # local es=0 00:15:42.975 00:23:13 sma.sma_vhost -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd vhost_get_controllers -n sma-1 00:15:42.975 00:23:13 sma.sma_vhost -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:42.975 
00:23:13 sma.sma_vhost -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.975 00:23:13 sma.sma_vhost -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:42.975 00:23:13 sma.sma_vhost -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.975 00:23:13 sma.sma_vhost -- common/autotest_common.sh@653 -- # rpc_cmd vhost_get_controllers -n sma-1 00:15:42.975 00:23:13 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.975 00:23:13 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:43.234 request: 00:15:43.234 { 00:15:43.234 "name": "sma-1", 00:15:43.234 "method": "vhost_get_controllers", 00:15:43.234 "req_id": 1 00:15:43.234 } 00:15:43.234 Got JSON-RPC error response 00:15:43.234 response: 00:15:43.234 { 00:15:43.234 "code": -32603, 00:15:43.234 "message": "No such device" 00:15:43.234 } 00:15:43.234 00:23:13 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:43.234 00:23:13 sma.sma_vhost -- common/autotest_common.sh@653 -- # es=1 00:15:43.234 00:23:13 sma.sma_vhost -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:43.234 00:23:13 sma.sma_vhost -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:43.234 00:23:13 sma.sma_vhost -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:43.234 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # rpc_cmd vhost_get_controllers 00:15:43.234 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # jq -r '. | length' 00:15:43.234 00:23:13 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.234 00:23:13 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:43.234 00:23:13 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.234 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@141 -- # [[ 0 -eq 0 ]] 00:15:43.234 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@144 -- # delete_device virtio_blk:sma-0 00:15:43.234 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:43.234 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:43.234 I0000 00:00:1728426193.839639 2120267 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:43.234 I0000 00:00:1728426193.841281 2120267 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:43.234 I0000 00:00:1728426193.842991 2120272 subchannel.cc:806] subchannel 0x55b2b4f6b220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b2b4e7c670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b2b4f0acc0, grpc.internal.client_channel_call_destination=0x7f726a7c8390, grpc.internal.event_engine=0x55b2b4e1a190, grpc.internal.security_connector=0x55b2b4e246e0, grpc.internal.subchannel_pool=0x55b2b4f92cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b2b4d615c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:13.841969034+02:00"}), backing off for 1000 ms 00:15:43.234 {} 00:15:43.492 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@145 -- # delete_device virtio_blk:sma-1 00:15:43.492 00:23:13 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # 
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:43.492 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:43.492 I0000 00:00:1728426194.055611 2120301 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:43.492 I0000 00:00:1728426194.056918 2120301 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:43.492 I0000 00:00:1728426194.058605 2120402 subchannel.cc:806] subchannel 0x562466a80220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x562466991670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x562466a1fcc0, grpc.internal.client_channel_call_destination=0x7fa67d748390, grpc.internal.event_engine=0x56246692f190, grpc.internal.security_connector=0x5624669396e0, grpc.internal.subchannel_pool=0x562466aa7cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5624668765c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:14.057596521+02:00"}), backing off for 1000 ms 00:15:43.492 {} 00:15:43.492 00:23:14 sma.sma_vhost -- sma/vhost_blk.sh@148 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l' 00:15:43.492 00:23:14 sma.sma_vhost -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:15:43.492 00:23:14 sma.sma_vhost -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:43.492 00:23:14 sma.sma_vhost -- vhost/common.sh@302 -- # return 0 00:15:43.492 00:23:14 sma.sma_vhost -- vhost/common.sh@331 -- # local vm_num=0 00:15:43.492 00:23:14 sma.sma_vhost -- vhost/common.sh@332 -- # shift 00:15:43.492 00:23:14 sma.sma_vhost -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:15:43.492 00:23:14 sma.sma_vhost -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:15:43.492 00:23:14 sma.sma_vhost -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:43.492 00:23:14 sma.sma_vhost -- vhost/common.sh@302 -- # return 0 00:15:43.492 00:23:14 sma.sma_vhost -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:15:43.492 00:23:14 sma.sma_vhost -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:15:43.492 00:23:14 sma.sma_vhost -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l' 00:15:43.750 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
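Both controllers are gone at this point: the two bare {} replies above came from delete_device calls against handles that had already been torn down (no VHOST_CONFIG teardown accompanies them), so deletion is effectively idempotent here, and the [[ 0 -eq 0 ]] check just below confirms the guest again sees no vd* disks. For reference, a minimal sketch of how such a request could be issued by hand, mirroring the delete_device helper used throughout this run; the method name and the params layout with a "handle" field are inferred from this log and should be treated as assumptions, not a documented API:

```python
#!/usr/bin/env python3
"""Hedged sketch: issue a DeleteDevice request by piping JSON into
scripts/sma-client.py, the way the test's delete_device helper appears
to. The request shape ("method" plus a params "handle") is inferred
from this log, not from a documented contract."""
import json
import subprocess

SMA_CLIENT = ("/var/jenkins/workspace/vfio-user-phy-autotest/"
              "spdk/scripts/sma-client.py")

request = {
    "method": "DeleteDevice",
    # "virtio_blk:sma-0" is the handle the earlier create_device
    # calls returned (see the `jq -r .handle` lines in this log).
    "params": {"handle": "virtio_blk:sma-0"},
}

# sma-client.py reads the request from stdin and prints the reply;
# in this run a delete (even a repeated one) prints "{}".
result = subprocess.run([SMA_CLIENT], input=json.dumps(request),
                        capture_output=True, text=True)
print(result.stdout.strip())
```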
00:15:43.750 00:23:14 sma.sma_vhost -- sma/vhost_blk.sh@148 -- # [[ 0 -eq 0 ]] 00:15:43.750 00:23:14 sma.sma_vhost -- sma/vhost_blk.sh@150 -- # devids=() 00:15:43.750 00:23:14 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # rpc_cmd bdev_get_bdevs -b null0 00:15:43.750 00:23:14 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # jq -r '.[].uuid' 00:15:43.750 00:23:14 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.750 00:23:14 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:43.750 00:23:14 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.750 00:23:14 sma.sma_vhost -- sma/vhost_blk.sh@153 -- # uuid=0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:43.750 00:23:14 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # create_device 0 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:43.750 00:23:14 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # jq -r .handle 00:15:43.750 00:23:14 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:43.750 00:23:14 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:43.750 00:23:14 sma.sma_vhost -- sma/common.sh@20 -- # python 00:15:44.007 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:44.007 I0000 00:00:1728426194.578291 2120532 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:44.007 I0000 00:00:1728426194.579850 2120532 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:44.007 I0000 00:00:1728426194.581604 2120541 subchannel.cc:806] subchannel 0x560c019bc220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560c018cd670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x560c0195bcc0, grpc.internal.client_channel_call_destination=0x7fdcd4c06390, grpc.internal.event_engine=0x560c0186b190, grpc.internal.security_connector=0x560c018756e0, grpc.internal.subchannel_pool=0x560c019e3cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x560c017b25c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:14.580584092+02:00"}), backing off for 1000 ms 00:15:44.007 VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 252 00:15:44.007 VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 83 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 
00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:256 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:257 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR 00:15:44.941 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008): 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) -RESET: 0 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) -ACKNOWLEDGE: 0 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER: 0 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) -FEATURES_OK: 1 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER_OK: 0 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) -DEVICE_NEED_RESET: 0 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) -FAILED: 0 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 84 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 258 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:84 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:256 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) guest physical addr: 0x0 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) guest virtual addr: 0x7f3337e00000 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) host virtual addr: 0x7f6f15400000 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) mmap addr : 0x7f6f15400000 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) mmap size : 0x40000000 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) mmap align: 0x200000 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) mmap off : 0x0 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM 00:15:45.199 
VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0. 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:259 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0. 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:260 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:45.199 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0 00:15:45.200 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:45.200 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1 00:15:45.200 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS 00:15:45.200 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS 00:15:45.200 VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f): 00:15:45.200 VHOST_CONFIG: (/var/tmp/sma-0) -RESET: 0 00:15:45.200 VHOST_CONFIG: (/var/tmp/sma-0) -ACKNOWLEDGE: 1 00:15:45.200 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER: 1 00:15:45.200 VHOST_CONFIG: (/var/tmp/sma-0) -FEATURES_OK: 1 00:15:45.200 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER_OK: 1 00:15:45.200 VHOST_CONFIG: (/var/tmp/sma-0) -DEVICE_NEED_RESET: 0 00:15:45.200 VHOST_CONFIG: (/var/tmp/sma-0) -FAILED: 0 00:15:45.200 VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing. 
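sma-0 has now completed the full vhost-user bring-up for the second time: protocol features 0x11ebf, Virtio features 0x150005446, the guest memory table, both vrings, and a final device status of DRIVER_OK. For readers tracing those VHOST_CONFIG lines, below is a minimal standalone sketch of the very first exchange in that handshake, VHOST_USER_GET_FEATURES. The 12-byte header of three little-endian u32s (request, flags, size) follows the vhost-user spec; the socket path is taken from this log, and it should not be pointed at a target that already has a live QEMU session.

```python
#!/usr/bin/env python3
"""Minimal vhost-user client sketch (illustration only).

Connects to a vhost-user socket such as /var/tmp/sma-0 and performs
the first step of the handshake logged above: VHOST_USER_GET_FEATURES.
The reply carries the 64-bit feature mask the target offers, e.g. the
0x150005446 negotiated in this run."""
import socket
import struct

VHOST_USER_GET_FEATURES = 1  # request code from the vhost-user spec
VHOST_USER_VERSION = 0x1     # protocol version, low bits of the flags field


def recv_exact(sock, n):
    """Read exactly n bytes or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("vhost-user peer closed the socket")
        buf += chunk
    return buf


def get_features(path="/var/tmp/sma-0"):
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(path)
    # Header: u32 request, u32 flags, u32 payload size, little-endian.
    sock.sendall(struct.pack("<III", VHOST_USER_GET_FEATURES,
                             VHOST_USER_VERSION, 0))
    request, flags, size = struct.unpack("<III", recv_exact(sock, 12))
    (features,) = struct.unpack("<Q", recv_exact(sock, size))
    sock.close()
    return features


if __name__ == "__main__":
    print(hex(get_features()))  # prints the offered feature mask
```

The SET_FEATURES, SET_MEM_TABLE and SET_VRING_* messages that follow in the log reuse the same header layout, with the size field covering each message-specific payload (and file descriptors, where needed, passed as socket ancillary data).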
00:15:45.200 00:23:15 sma.sma_vhost -- sma/vhost_blk.sh@154 -- # devids[0]=virtio_blk:sma-0 00:15:45.200 00:23:15 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # rpc_cmd bdev_get_bdevs -b null1 00:15:45.200 00:23:15 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # jq -r '.[].uuid' 00:15:45.200 00:23:15 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.200 00:23:15 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:45.200 00:23:15 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.200 00:23:15 sma.sma_vhost -- sma/vhost_blk.sh@155 -- # uuid=76425f2d-0443-4100-9bfa-e74e69fa648b 00:15:45.200 00:23:15 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # create_device 32 76425f2d-0443-4100-9bfa-e74e69fa648b 00:15:45.200 00:23:15 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # jq -r .handle 00:15:45.200 00:23:15 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:45.200 00:23:15 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 76425f2d-0443-4100-9bfa-e74e69fa648b 00:15:45.200 00:23:15 sma.sma_vhost -- sma/common.sh@20 -- # python 00:15:45.456 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:45.456 I0000 00:00:1728426195.855481 2120791 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:45.456 I0000 00:00:1728426195.856949 2120791 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:45.456 I0000 00:00:1728426195.858685 2120794 subchannel.cc:806] subchannel 0x5626836e0220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5626835f1670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56268367fcc0, grpc.internal.client_channel_call_destination=0x7f69961c4390, grpc.internal.event_engine=0x56268358f190, grpc.internal.security_connector=0x5626835996e0, grpc.internal.subchannel_pool=0x562683707cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5626834d65c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:15.857662639+02:00"}), backing off for 1000 ms 00:15:45.456 VHOST_CONFIG: (/var/tmp/sma-32) vhost-user server: socket created, fd: 263 00:15:45.456 VHOST_CONFIG: (/var/tmp/sma-32) binding succeeded 00:15:46.019 VHOST_CONFIG: (/var/tmp/sma-32) new vhost user connection is 82 00:15:46.019 VHOST_CONFIG: (/var/tmp/sma-32) new device, handle is 1 00:15:46.019 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_FEATURES 00:15:46.019 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_PROTOCOL_FEATURES 00:15:46.019 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_PROTOCOL_FEATURES 00:15:46.019 VHOST_CONFIG: (/var/tmp/sma-32) negotiated Vhost-user protocol features: 0x11ebf 00:15:46.019 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_QUEUE_NUM 00:15:46.019 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_BACKEND_REQ_FD 00:15:46.019 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_OWNER 00:15:46.020 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_FEATURES 00:15:46.020 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL 00:15:46.020 VHOST_CONFIG: (/var/tmp/sma-32) vring 
call idx:0 file:265 00:15:46.020 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ERR 00:15:46.020 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL 00:15:46.020 VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:1 file:266 00:15:46.020 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ERR 00:15:46.020 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_CONFIG 00:15:46.020 00:23:16 sma.sma_vhost -- sma/vhost_blk.sh@156 -- # devids[1]=virtio_blk:sma-32 00:15:46.020 00:23:16 sma.sma_vhost -- sma/vhost_blk.sh@158 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l' 00:15:46.020 00:23:16 sma.sma_vhost -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:15:46.020 00:23:16 sma.sma_vhost -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:46.020 00:23:16 sma.sma_vhost -- vhost/common.sh@302 -- # return 0 00:15:46.020 00:23:16 sma.sma_vhost -- vhost/common.sh@331 -- # local vm_num=0 00:15:46.020 00:23:16 sma.sma_vhost -- vhost/common.sh@332 -- # shift 00:15:46.020 00:23:16 sma.sma_vhost -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:15:46.020 00:23:16 sma.sma_vhost -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:15:46.020 00:23:16 sma.sma_vhost -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:46.020 00:23:16 sma.sma_vhost -- vhost/common.sh@302 -- # return 0 00:15:46.020 00:23:16 sma.sma_vhost -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:15:46.020 00:23:16 sma.sma_vhost -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:15:46.020 00:23:16 sma.sma_vhost -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." | wc -l' 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_FEATURES 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) negotiated Virtio features: 0x150005446 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x00000008): 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -RESET: 0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -ACKNOWLEDGE: 0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -DRIVER: 0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -FEATURES_OK: 1 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -DRIVER_OK: 0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -DEVICE_NEED_RESET: 0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -FAILED: 0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_INFLIGHT_FD 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) get_inflight_fd num_queues: 2 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) get_inflight_fd queue_size: 128 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) send inflight mmap_size: 4224 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) send inflight mmap_offset: 0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) send inflight fd: 262 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_INFLIGHT_FD 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd mmap_size: 4224 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd mmap_offset: 0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd num_queues: 2 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd queue_size: 128 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd fd: 267 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) set_inflight_fd pervq_inflight_size: 
2112 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:0 file:262 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_CALL 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) vring call idx:1 file:265 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_FEATURES 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) negotiated Virtio features: 0x150005446 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_MEM_TABLE 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) guest memory region size: 0x40000000 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) guest physical addr: 0x0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) guest virtual addr: 0x7f3337e00000 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) host virtual addr: 0x7f6ed5400000 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) mmap addr : 0x7f6ed5400000 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) mmap size : 0x40000000 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) mmap align: 0x200000 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) mmap off : 0x0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_NUM 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_BASE 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:0 last_used_idx:0 last_avail_idx:0. 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ADDR 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_KICK 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) vring kick idx:0 file:268 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_NUM 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_BASE 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:1 last_used_idx:0 last_avail_idx:0. 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ADDR 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_KICK 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) vring kick idx:1 file:269 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 1 to qp idx: 0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 1 to qp idx: 1 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_STATUS 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x0000000f): 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -RESET: 0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -ACKNOWLEDGE: 1 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -DRIVER: 1 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -FEATURES_OK: 1 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -DRIVER_OK: 1 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -DEVICE_NEED_RESET: 0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) -FAILED: 0 00:15:46.278 VHOST_CONFIG: (/var/tmp/sma-32) virtio is now ready for processing. 00:15:46.278 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 
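The `lsblk | grep -E "^vd." | wc -l` probe above counts virtio-blk disks inside the VM; with sma-0 and sma-32 both exported it should print 2, which the [[ 2 -eq 2 ]] check just below confirms. A tiny guest-side equivalent for reproducing the check without lsblk, reading /sys/block instead (same result for whole disks, since partitions do not appear there):

```python
#!/usr/bin/env python3
"""Guest-side sketch: count virtio-blk disks the way the test's
`lsblk | grep -E "^vd." | wc -l` probe does, but via /sys/block.
Only whole-disk nodes (vda, vdb, ...) are counted, matching what
the `^vd.` regex picks out of lsblk's first column."""
from pathlib import Path


def count_virtio_blk() -> int:
    return sum(1 for p in Path("/sys/block").iterdir()
               if p.name.startswith("vd"))


if __name__ == "__main__":
    # Expected: 2 while sma-0 and sma-32 are attached, 0 after both
    # delete_device calls in the loop that follows.
    print(count_virtio_blk())
```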
00:15:46.278 00:23:16 sma.sma_vhost -- sma/vhost_blk.sh@158 -- # [[ 2 -eq 2 ]] 00:15:46.278 00:23:16 sma.sma_vhost -- sma/vhost_blk.sh@161 -- # for id in "${devids[@]}" 00:15:46.278 00:23:16 sma.sma_vhost -- sma/vhost_blk.sh@162 -- # delete_device virtio_blk:sma-0 00:15:46.278 00:23:16 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:46.536 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:46.536 I0000 00:00:1728426197.085960 2120993 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:46.536 I0000 00:00:1728426197.087473 2120993 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:46.536 I0000 00:00:1728426197.089125 2121044 subchannel.cc:806] subchannel 0x56496e769220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56496e67a670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56496e708cc0, grpc.internal.client_channel_call_destination=0x7f0f6563f390, grpc.internal.event_engine=0x56496e618190, grpc.internal.security_connector=0x56496e6226e0, grpc.internal.subchannel_pool=0x56496e790cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56496e55f5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:17.088094738+02:00"}), backing off for 999 ms 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000): 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) -RESET: 1 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) -ACKNOWLEDGE: 0 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER: 0 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) -FEATURES_OK: 0 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER_OK: 0 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) -DEVICE_NEED_RESET: 0 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) -FAILED: 0 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:0 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:50 00:15:47.470 VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed 00:15:47.470 {} 00:15:47.470 00:23:17 sma.sma_vhost -- sma/vhost_blk.sh@161 -- # for id in "${devids[@]}" 00:15:47.470 00:23:17 sma.sma_vhost -- sma/vhost_blk.sh@162 -- # delete_device virtio_blk:sma-32 00:15:47.470 00:23:17 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:47.734 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:47.734 I0000 00:00:1728426198.146937 2121074 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, 
http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:47.734 I0000 00:00:1728426198.148526 2121074 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:47.734 I0000 00:00:1728426198.150245 2121155 subchannel.cc:806] subchannel 0x556641274220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x556641185670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x556641213cc0, grpc.internal.client_channel_call_destination=0x7f088cbdd390, grpc.internal.event_engine=0x556641123190, grpc.internal.security_connector=0x55664112d6e0, grpc.internal.subchannel_pool=0x55664129bcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55664106a5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:18.149237477+02:00"}), backing off for 1000 ms 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_STATUS 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) new device status(0x00000000): 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) -RESET: 1 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) -ACKNOWLEDGE: 0 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) -DRIVER: 0 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) -FEATURES_OK: 0 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) -DRIVER_OK: 0 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) -DEVICE_NEED_RESET: 0 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) -FAILED: 0 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 0 to qp idx: 0 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_SET_VRING_ENABLE 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) set queue enable: 0 to qp idx: 1 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_VRING_BASE 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:0 file:6 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) read message VHOST_USER_GET_VRING_BASE 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) vring base idx:1 file:44 00:15:47.734 VHOST_CONFIG: (/var/tmp/sma-32) vhost peer closed 00:15:47.734 {} 00:15:47.734 00:23:18 sma.sma_vhost -- sma/vhost_blk.sh@166 -- # vm_exec 0 'lsblk | grep -E "^vd." | wc -l' 00:15:47.734 00:23:18 sma.sma_vhost -- vhost/common.sh@329 -- # vm_num_is_valid 0 00:15:47.734 00:23:18 sma.sma_vhost -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:47.734 00:23:18 sma.sma_vhost -- vhost/common.sh@302 -- # return 0 00:15:47.734 00:23:18 sma.sma_vhost -- vhost/common.sh@331 -- # local vm_num=0 00:15:47.734 00:23:18 sma.sma_vhost -- vhost/common.sh@332 -- # shift 00:15:47.734 00:23:18 sma.sma_vhost -- vhost/common.sh@334 -- # vm_ssh_socket 0 00:15:47.734 00:23:18 sma.sma_vhost -- vhost/common.sh@312 -- # vm_num_is_valid 0 00:15:47.734 00:23:18 sma.sma_vhost -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:47.734 00:23:18 sma.sma_vhost -- vhost/common.sh@302 -- # return 0 00:15:47.734 00:23:18 sma.sma_vhost -- vhost/common.sh@313 -- # local vm_dir=/root/vhost_test/vms/0 00:15:47.734 00:23:18 sma.sma_vhost -- vhost/common.sh@315 -- # cat /root/vhost_test/vms/0/ssh_socket 00:15:47.734 00:23:18 sma.sma_vhost -- vhost/common.sh@334 -- # sshpass -p root ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o User=root -p 10000 127.0.0.1 'lsblk | grep -E "^vd." 
| wc -l' 00:15:47.734 Warning: Permanently added '[127.0.0.1]:10000' (ED25519) to the list of known hosts. 00:15:48.666 00:23:19 sma.sma_vhost -- sma/vhost_blk.sh@166 -- # [[ 0 -eq 0 ]] 00:15:48.666 00:23:19 sma.sma_vhost -- sma/vhost_blk.sh@168 -- # key0=1234567890abcdef1234567890abcdef 00:15:48.666 00:23:19 sma.sma_vhost -- sma/vhost_blk.sh@169 -- # rpc_cmd bdev_malloc_create -b malloc0 32 4096 00:15:48.666 00:23:19 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.666 00:23:19 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:48.666 malloc0 00:15:48.666 00:23:19 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.666 00:23:19 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # rpc_cmd bdev_get_bdevs -b malloc0 00:15:48.666 00:23:19 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # jq -r '.[].uuid' 00:15:48.666 00:23:19 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.666 00:23:19 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:48.666 00:23:19 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.666 00:23:19 sma.sma_vhost -- sma/vhost_blk.sh@170 -- # uuid=24dd5eee-6bdf-44ae-8454-daf4eb8f32e6 00:15:48.666 00:23:19 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:48.666 00:23:19 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # jq -r .handle 00:15:48.666 00:23:19 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # uuid2base64 24dd5eee-6bdf-44ae-8454-daf4eb8f32e6 00:15:48.666 00:23:19 sma.sma_vhost -- sma/common.sh@20 -- # python 00:15:48.666 00:23:19 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # get_cipher AES_CBC 00:15:48.666 00:23:19 sma.sma_vhost -- sma/common.sh@27 -- # case "$1" in 00:15:48.666 00:23:19 sma.sma_vhost -- sma/common.sh@28 -- # echo 0 00:15:48.666 00:23:19 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # format_key 1234567890abcdef1234567890abcdef 00:15:48.666 00:23:19 sma.sma_vhost -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/63 00:15:48.666 00:23:19 sma.sma_vhost -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef 00:15:48.924 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:48.924 I0000 00:00:1728426199.353201 2121326 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:48.924 I0000 00:00:1728426199.354701 2121326 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:48.924 I0000 00:00:1728426199.356457 2121336 subchannel.cc:806] subchannel 0x55dba319e220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55dba30af670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55dba313dcc0, grpc.internal.client_channel_call_destination=0x7f66bed7f390, grpc.internal.event_engine=0x55dba304d190, grpc.internal.security_connector=0x55dba30576e0, grpc.internal.subchannel_pool=0x55dba31c5cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55dba2f945c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:19.355437227+02:00"}), backing off for 1000 ms 00:15:48.924 VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 272 00:15:48.924 VHOST_CONFIG: (/var/tmp/sma-0) 
binding succeeded 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 271 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:274 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:275 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG 00:15:49.857 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@192 -- # devid0=virtio_blk:sma-0 00:15:49.857 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # rpc_cmd vhost_get_controllers 00:15:49.857 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # jq -r '. | length' 00:15:49.857 00:23:20 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.857 00:23:20 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:49.857 00:23:20 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150007646 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008): 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -RESET: 0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -ACKNOWLEDGE: 0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER: 0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -FEATURES_OK: 1 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER_OK: 0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -DEVICE_NEED_RESET: 0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -FAILED: 0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 83 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128 
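A few records back (sma/vhost_blk.sh@192, 00:15:48.666), the trace shows uuid2base64 and format_key turning the volume UUID and the raw AES key into the base64 strings the SMA request carries. The "sma/common.sh@20 -- # python" and "base64 -w 0 /dev/fd/63" records suggest the behavior sketched below; this is an assumption drawn from the helper names and the trace, in particular that uuid2base64 encodes the UUID's 16 raw bytes rather than its textual form:

import base64
import uuid

def uuid2base64(u: str) -> str:
    # Base64 of the UUID's 16 raw bytes, not of its string representation.
    return base64.b64encode(uuid.UUID(u).bytes).decode()

def format_key(key: str) -> str:
    # Mirrors `base64 -w 0 <(echo -n "$key")`: no trailing newline, no wrapping.
    return base64.b64encode(key.encode()).decode()

print(uuid2base64("24dd5eee-6bdf-44ae-8454-daf4eb8f32e6"))
print(format_key("1234567890abcdef1234567890abcdef"))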
00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 276 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:83 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:274 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150007646 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) guest physical addr: 0x0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) guest virtual addr: 0x7f3337e00000 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) host virtual addr: 0x7f6f15200000 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) mmap addr : 0x7f6f15200000 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) mmap size : 0x40000000 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) mmap align: 0x200000 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) mmap off : 0x0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0. 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:277 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0. 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:278 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f): 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -RESET: 0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -ACKNOWLEDGE: 1 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER: 1 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -FEATURES_OK: 1 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER_OK: 1 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -DEVICE_NEED_RESET: 0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) -FAILED: 0 00:15:49.857 VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing. 
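The VHOST_USER_SET_MEM_TABLE block above is what makes zero-copy I/O possible: the guest memory region is mmap'ed into the backend's address space, and every guest-physical descriptor address is then translated through it. A minimal sketch of that translation, hard-coding the single 1 GiB region from this handshake (a real backend keeps a table of such regions and selects the one containing the address):

# Single region taken from the log: gpa 0x0 -> hva 0x7f6f15200000, 1 GiB.
REGION = {"guest_phys": 0x0, "size": 0x40000000, "host_virt": 0x7f6f15200000}

def gpa_to_hva(gpa: int, region: dict = REGION) -> int:
    # A guest-physical address is valid only if it falls inside a
    # registered region; the backend then rebases it onto the mmap.
    off = gpa - region["guest_phys"]
    if not 0 <= off < region["size"]:
        raise ValueError(f"gpa 0x{gpa:x} outside registered region")
    return region["host_virt"] + off

assert gpa_to_hva(0x1000) == 0x7f6f15201000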
00:15:49.857 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@194 -- # [[ 1 -eq 1 ]] 00:15:49.857 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # rpc_cmd vhost_get_controllers 00:15:49.857 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # jq -r '.[].backend_specific.block.bdev' 00:15:49.857 00:23:20 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.857 00:23:20 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:49.857 00:23:20 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.857 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@195 -- # bdev=db0dca2f-e729-40f0-a811-5afbe9da8206 00:15:49.857 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # rpc_cmd bdev_get_bdevs 00:15:49.857 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # jq -r '.[] | select(.product_name == "crypto")' 00:15:49.857 00:23:20 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.857 00:23:20 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:49.857 00:23:20 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.857 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@197 -- # crypto_bdev='{ 00:15:49.857 "name": "db0dca2f-e729-40f0-a811-5afbe9da8206", 00:15:49.857 "aliases": [ 00:15:49.857 "c7fafd08-20eb-5952-8582-bc0e8fa6be58" 00:15:49.857 ], 00:15:49.857 "product_name": "crypto", 00:15:49.857 "block_size": 4096, 00:15:49.857 "num_blocks": 8192, 00:15:49.857 "uuid": "c7fafd08-20eb-5952-8582-bc0e8fa6be58", 00:15:49.857 "assigned_rate_limits": { 00:15:49.857 "rw_ios_per_sec": 0, 00:15:49.857 "rw_mbytes_per_sec": 0, 00:15:49.857 "r_mbytes_per_sec": 0, 00:15:49.857 "w_mbytes_per_sec": 0 00:15:49.857 }, 00:15:49.857 "claimed": false, 00:15:49.857 "zoned": false, 00:15:49.857 "supported_io_types": { 00:15:49.857 "read": true, 00:15:49.857 "write": true, 00:15:49.857 "unmap": true, 00:15:49.857 "flush": true, 00:15:49.857 "reset": true, 00:15:49.857 "nvme_admin": false, 00:15:49.857 "nvme_io": false, 00:15:49.857 "nvme_io_md": false, 00:15:49.857 "write_zeroes": true, 00:15:49.857 "zcopy": false, 00:15:49.857 "get_zone_info": false, 00:15:49.857 "zone_management": false, 00:15:49.857 "zone_append": false, 00:15:49.857 "compare": false, 00:15:49.857 "compare_and_write": false, 00:15:49.857 "abort": false, 00:15:49.857 "seek_hole": false, 00:15:49.857 "seek_data": false, 00:15:49.857 "copy": false, 00:15:49.857 "nvme_iov_md": false 00:15:49.857 }, 00:15:49.857 "memory_domains": [ 00:15:49.857 { 00:15:49.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.858 "dma_device_type": 2 00:15:49.858 } 00:15:49.858 ], 00:15:49.858 "driver_specific": { 00:15:49.858 "crypto": { 00:15:49.858 "base_bdev_name": "malloc0", 00:15:49.858 "name": "db0dca2f-e729-40f0-a811-5afbe9da8206", 00:15:49.858 "key_name": "db0dca2f-e729-40f0-a811-5afbe9da8206_AES_CBC" 00:15:49.858 } 00:15:49.858 } 00:15:49.858 }' 00:15:49.858 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@198 -- # jq -r .driver_specific.crypto.name 00:15:49.858 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@198 -- # [[ db0dca2f-e729-40f0-a811-5afbe9da8206 == \d\b\0\d\c\a\2\f\-\e\7\2\9\-\4\0\f\0\-\a\8\1\1\-\5\a\f\b\e\9\d\a\8\2\0\6 ]] 00:15:49.858 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@199 -- # jq -r .driver_specific.crypto.key_name 00:15:50.115 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@199 -- # key_name=db0dca2f-e729-40f0-a811-5afbe9da8206_AES_CBC 00:15:50.115 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@200 -- # rpc_cmd accel_crypto_keys_get -k db0dca2f-e729-40f0-a811-5afbe9da8206_AES_CBC 
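The accel_crypto_keys_get -k call issued just above returns the key object that the jq checks further down (vhost_blk.sh@201-202) assert on. The same verification can be made directly from Python; this sketch reuses the rpc.py path and key name verbatim from the trace and assumes the SPDK target from this run is still listening on its default RPC socket:

import json
import subprocess

# Paths and names taken verbatim from the trace above.
RPC = "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py"
KEY_NAME = "db0dca2f-e729-40f0-a811-5afbe9da8206_AES_CBC"

out = subprocess.check_output([RPC, "accel_crypto_keys_get", "-k", KEY_NAME])
key_obj = json.loads(out)  # same shape as the key_obj captured by the script
# Same assertions the script makes with jq at vhost_blk.sh@201-202.
assert key_obj[0]["cipher"] == "AES_CBC"
assert key_obj[0]["key"] == "1234567890abcdef1234567890abcdef"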
00:15:50.115 00:23:20 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.115 00:23:20 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:50.115 00:23:20 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.115 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@200 -- # key_obj='[ 00:15:50.115 { 00:15:50.115 "name": "db0dca2f-e729-40f0-a811-5afbe9da8206_AES_CBC", 00:15:50.115 "cipher": "AES_CBC", 00:15:50.115 "key": "1234567890abcdef1234567890abcdef" 00:15:50.115 } 00:15:50.115 ]' 00:15:50.115 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@201 -- # jq -r '.[0].key' 00:15:50.115 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@201 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]] 00:15:50.115 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@202 -- # jq -r '.[0].cipher' 00:15:50.115 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@202 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]] 00:15:50.115 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@205 -- # delete_device virtio_blk:sma-0 00:15:50.115 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:50.374 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:50.374 I0000 00:00:1728426200.779155 2121602 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:50.374 I0000 00:00:1728426200.780765 2121602 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:50.374 I0000 00:00:1728426200.782460 2121610 subchannel.cc:806] subchannel 0x55c996abb220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55c9969cc670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55c996a5acc0, grpc.internal.client_channel_call_destination=0x7fe059145390, grpc.internal.event_engine=0x55c99696a190, grpc.internal.security_connector=0x55c9969746e0, grpc.internal.subchannel_pool=0x55c996ae2cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55c9968b15c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:20.781447298+02:00"}), backing off for 1000 ms 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000): 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) -RESET: 1 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) -ACKNOWLEDGE: 0 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER: 0 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) -FEATURES_OK: 0 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER_OK: 0 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) -DEVICE_NEED_RESET: 0 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) -FAILED: 0 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:35 00:15:50.374 VHOST_CONFIG: 
(/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:1 00:15:50.374 VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed 00:15:50.374 {} 00:15:50.374 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # rpc_cmd bdev_get_bdevs 00:15:50.374 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # jq -r '.[] | select(.product_name == "crypto")' 00:15:50.374 00:23:20 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.374 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # jq -r length 00:15:50.374 00:23:20 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:50.374 00:23:20 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.374 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@206 -- # [[ '' -eq 0 ]] 00:15:50.374 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@209 -- # device_vhost=2 00:15:50.374 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # rpc_cmd bdev_get_bdevs -b null0 00:15:50.374 00:23:20 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # jq -r '.[].uuid' 00:15:50.374 00:23:20 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.374 00:23:20 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:50.374 00:23:21 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.632 00:23:21 sma.sma_vhost -- sma/vhost_blk.sh@210 -- # uuid=0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:50.632 00:23:21 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # jq -r .handle 00:15:50.632 00:23:21 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # create_device 0 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:50.632 00:23:21 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:50.632 00:23:21 sma.sma_vhost -- sma/vhost_blk.sh@20 -- # uuid2base64 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:50.632 00:23:21 sma.sma_vhost -- sma/common.sh@20 -- # python 00:15:50.632 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:50.632 I0000 00:00:1728426201.263899 2121790 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:50.632 I0000 00:00:1728426201.265393 2121790 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:50.890 I0000 00:00:1728426201.269678 2121847 subchannel.cc:806] subchannel 0x558c47631220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x558c47542670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558c475d0cc0, grpc.internal.client_channel_call_destination=0x7f947d474390, grpc.internal.event_engine=0x558c474e0190, grpc.internal.security_connector=0x558c474ea6e0, grpc.internal.subchannel_pool=0x558c47658cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558c474275c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:21.268660315+02:00"}), backing off for 1000 ms 00:15:50.890 VHOST_CONFIG: (/var/tmp/sma-0) vhost-user server: socket created, fd: 272 00:15:50.890 VHOST_CONFIG: (/var/tmp/sma-0) binding succeeded 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) new vhost user connection is 83 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) new device, handle is 0 00:15:51.456 
VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_PROTOCOL_FEATURES 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_PROTOCOL_FEATURES 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) negotiated Vhost-user protocol features: 0x11ebf 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_QUEUE_NUM 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_BACKEND_REQ_FD 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_OWNER 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_FEATURES 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:274 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:275 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ERR 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_CONFIG 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000008): 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -RESET: 0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -ACKNOWLEDGE: 0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER: 0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -FEATURES_OK: 1 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER_OK: 0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -DEVICE_NEED_RESET: 0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -FAILED: 0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_INFLIGHT_FD 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd num_queues: 2 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) get_inflight_fd queue_size: 128 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_size: 4224 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) send inflight mmap_offset: 0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) send inflight fd: 271 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_INFLIGHT_FD 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_size: 4224 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd mmap_offset: 0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd num_queues: 2 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd queue_size: 128 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd fd: 276 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) set_inflight_fd pervq_inflight_size: 2112 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:0 file:271 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_CALL 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) vring call idx:1 file:274 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_FEATURES 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) negotiated Virtio features: 0x150005446 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message 
VHOST_USER_GET_STATUS 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_MEM_TABLE 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) guest memory region size: 0x40000000 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) guest physical addr: 0x0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) guest virtual addr: 0x7f3337e00000 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) host virtual addr: 0x7f6ed5000000 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) mmap addr : 0x7f6ed5000000 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) mmap size : 0x40000000 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) mmap align: 0x200000 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) mmap off : 0x0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 last_used_idx:0 last_avail_idx:0. 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:0 file:277 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_NUM 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_BASE 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 last_used_idx:0 last_avail_idx:0. 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ADDR 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_KICK 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) vring kick idx:1 file:278 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 1 to qp idx: 1 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_STATUS 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x0000000f): 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -RESET: 0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -ACKNOWLEDGE: 1 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER: 1 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -FEATURES_OK: 1 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER_OK: 1 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -DEVICE_NEED_RESET: 0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) -FAILED: 0 00:15:51.456 VHOST_CONFIG: (/var/tmp/sma-0) virtio is now ready for processing. 00:15:51.456 00:23:21 sma.sma_vhost -- sma/vhost_blk.sh@211 -- # device=virtio_blk:sma-0 00:15:51.456 00:23:21 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # jq --sort-keys 00:15:51.456 00:23:21 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # diff /dev/fd/62 /dev/fd/61 00:15:51.456 00:23:21 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # jq --sort-keys 00:15:51.456 00:23:21 sma.sma_vhost -- sma/vhost_blk.sh@214 -- # get_qos_caps 2 00:15:51.456 00:23:21 sma.sma_vhost -- sma/common.sh@45 -- # local rootdir 00:15:51.456 00:23:21 sma.sma_vhost -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh 00:15:51.456 00:23:21 sma.sma_vhost -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../.. 
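The diff /dev/fd/62 /dev/fd/61 at sma/vhost_blk.sh@214 compares two process substitutions, each one a JSON document passed through jq --sort-keys so that key ordering cannot produce a spurious mismatch. An equivalent order-insensitive comparison, sketched in Python:

import json

def canonical(doc: str) -> str:
    # Equivalent of piping through `jq --sort-keys .`: stable key order
    # and stable whitespace, so textual diff equals structural diff.
    return json.dumps(json.loads(doc), sort_keys=True, indent=2)

# diff(1) over the two /dev/fd streams passes only when the sorted
# serializations match; comparing canonical strings is the same check.
assert canonical('{"b": 1, "a": 2}') == canonical('{"a": 2, "b": 1}')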
00:15:51.457 00:23:21 sma.sma_vhost -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py 00:15:51.713 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:51.714 I0000 00:00:1728426202.122648 2121887 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:51.714 I0000 00:00:1728426202.124189 2121887 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:51.714 I0000 00:00:1728426202.125890 2121889 subchannel.cc:806] subchannel 0x55b4fc68ff70 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55b4fc55e070, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b4fc462320, grpc.internal.client_channel_call_destination=0x7f030b36a390, grpc.internal.event_engine=0x55b4fc4e0ea0, grpc.internal.security_connector=0x55b4fc4794b0, grpc.internal.subchannel_pool=0x55b4fc517620, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b4fc34adf0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:22.124857839+02:00"}), backing off for 1000 ms 00:15:51.714 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@233 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:51.714 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@233 -- # uuid2base64 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:51.714 00:23:22 sma.sma_vhost -- sma/common.sh@20 -- # python 00:15:51.971 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:51.971 I0000 00:00:1728426202.390549 2121915 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:51.971 I0000 00:00:1728426202.392081 2121915 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:51.971 I0000 00:00:1728426202.393850 2122048 subchannel.cc:806] subchannel 0x55f7960d3220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f795fe4670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f796072cc0, grpc.internal.client_channel_call_destination=0x7f4bedaf6390, grpc.internal.event_engine=0x55f796000360, grpc.internal.security_connector=0x55f795f8c6e0, grpc.internal.subchannel_pool=0x55f7960facc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f795ec95c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:22.392840829+02:00"}), backing off for 1000 ms 00:15:51.971 {} 00:15:51.971 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # diff /dev/fd/62 /dev/fd/61 00:15:51.971 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # jq --sort-keys 00:15:51.971 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # rpc_cmd bdev_get_bdevs -b 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:51.971 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@252 -- # jq --sort-keys '.[].assigned_rate_limits' 00:15:51.971 00:23:22 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 
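The "connect failed ... Connection refused (111) ... backing off" subchannel record above accompanies nearly every sma-client.py invocation in this run: the first dial attempt is refused (apparently the IPv6 localhost address, since later peer errors arrive from ipv4:127.0.0.1:8080) before gRPC retries with backoff and the call goes through. Below is a sketch of how a client can wait for channel readiness instead of treating that first refusal as fatal; it is illustrative only, not necessarily what sma-client.py itself does:

import grpc

# localhost:8080 is the SMA endpoint used throughout this run.
channel = grpc.insecure_channel("localhost:8080")
try:
    # Blocks until the channel reaches READY; between attempts gRPC backs
    # off internally, which is what the "backing off for ~1000 ms"
    # subchannel lines record when the socket is not yet bound.
    grpc.channel_ready_future(channel).result(timeout=5)
    print("SMA endpoint is up")
except grpc.FutureTimeoutError:
    print("SMA endpoint never became ready")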
00:15:51.971 00:23:22 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:51.971 00:23:22 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.971 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@264 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.229 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:52.229 I0000 00:00:1728426202.694803 2122149 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:52.229 I0000 00:00:1728426202.696399 2122149 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:52.230 I0000 00:00:1728426202.698110 2122156 subchannel.cc:806] subchannel 0x55895af26220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55895ae37670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55895aec5cc0, grpc.internal.client_channel_call_destination=0x7f056ab93390, grpc.internal.event_engine=0x55895add5190, grpc.internal.security_connector=0x55895af3beb0, grpc.internal.subchannel_pool=0x55895af4dcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55895ad1c5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:22.697106698+02:00"}), backing off for 999 ms 00:15:52.230 {} 00:15:52.230 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # diff /dev/fd/62 /dev/fd/61 00:15:52.230 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # jq --sort-keys 00:15:52.230 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # rpc_cmd bdev_get_bdevs -b 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:52.230 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@283 -- # jq --sort-keys '.[].assigned_rate_limits' 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.230 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.230 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # uuidgen 00:15:52.230 00:23:22 sma.sma_vhost -- sma/vhost_blk.sh@295 -- # uuid2base64 2a699e33-bb06-4bd3-a839-3eac60a56951 00:15:52.230 00:23:22 sma.sma_vhost -- sma/common.sh@20 -- # python 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@650 -- # local es=0 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]] 00:15:52.230 00:23:22 sma.sma_vhost -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.487 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:52.487 I0000 00:00:1728426203.035776 2122188 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:52.487 I0000 00:00:1728426203.037419 2122188 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:52.487 I0000 00:00:1728426203.039189 2122192 subchannel.cc:806] subchannel 0x55ae3d9e1220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ae3d8f2670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ae3d980cc0, grpc.internal.client_channel_call_destination=0x7f211fadb390, grpc.internal.event_engine=0x55ae3d90e360, grpc.internal.security_connector=0x55ae3d89a6e0, grpc.internal.subchannel_pool=0x55ae3da08cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ae3d7d75c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:23.038176557+02:00"}), backing off for 1000 ms 00:15:52.487 [2024-10-09 00:23:23.073538] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 2a699e33-bb06-4bd3-a839-3eac60a56951 00:15:52.487 Traceback (most recent call last): 00:15:52.487 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module> 00:15:52.487 main(sys.argv[1:]) 00:15:52.487 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:15:52.487 result = client.call(request['method'], request.get('params', {})) 00:15:52.487 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:15:52.487 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:15:52.487 response = func(request=json_format.ParseDict(params, input())) 00:15:52.487 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:15:52.487 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:15:52.487 return _end_unary_response_blocking(state, call, False, None) 00:15:52.487 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:15:52.487 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:15:52.487 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:15:52.487 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:15:52.487 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:15:52.487 status = StatusCode.INVALID_ARGUMENT 00:15:52.487 details = "Specified volume is not attached to the device" 00:15:52.487 debug_error_string = "UNKNOWN:Error received from peer
ipv4:127.0.0.1:8080 {created_time:"2024-10-09T00:23:23.077852929+02:00", grpc_status:3, grpc_message:"Specified volume is not attached to the device"}" 00:15:52.487 > 00:15:52.487 00:23:23 sma.sma_vhost -- common/autotest_common.sh@653 -- # es=1 00:15:52.487 00:23:23 sma.sma_vhost -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:52.487 00:23:23 sma.sma_vhost -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:52.487 00:23:23 sma.sma_vhost -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:52.487 00:23:23 sma.sma_vhost -- sma/vhost_blk.sh@314 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.487 00:23:23 sma.sma_vhost -- sma/vhost_blk.sh@314 -- # base64 00:15:52.745 00:23:23 sma.sma_vhost -- common/autotest_common.sh@650 -- # local es=0 00:15:52.745 00:23:23 sma.sma_vhost -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.745 00:23:23 sma.sma_vhost -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.745 00:23:23 sma.sma_vhost -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:52.745 00:23:23 sma.sma_vhost -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.745 00:23:23 sma.sma_vhost -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:52.745 00:23:23 sma.sma_vhost -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.745 00:23:23 sma.sma_vhost -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:52.745 00:23:23 sma.sma_vhost -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.745 00:23:23 sma.sma_vhost -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]] 00:15:52.745 00:23:23 sma.sma_vhost -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:52.745 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:52.745 I0000 00:00:1728426203.307848 2122216 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:52.745 I0000 00:00:1728426203.309353 2122216 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:52.746 I0000 00:00:1728426203.311088 2122221 subchannel.cc:806] subchannel 0x558046654220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x558046565670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5580465f3cc0, grpc.internal.client_channel_call_destination=0x7fe788e94390, grpc.internal.event_engine=0x558046503190, grpc.internal.security_connector=0x558046669eb0, grpc.internal.subchannel_pool=0x55804667bcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55804644a5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:23.310080011+02:00"}), backing off for 999 ms 00:15:52.746 Traceback (most recent call last): 00:15:52.746 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module> 00:15:52.746 main(sys.argv[1:]) 00:15:52.746 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:15:52.746 result = client.call(request['method'], request.get('params', {})) 00:15:52.746 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:15:52.746 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:15:52.746 response = func(request=json_format.ParseDict(params, input())) 00:15:52.746 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:15:52.746 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:15:52.746 return _end_unary_response_blocking(state, call, False, None) 00:15:52.746 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:15:52.746 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:15:52.746 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:15:52.746 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:15:52.746 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:15:52.746 status = StatusCode.INVALID_ARGUMENT 00:15:52.746 details = "Invalid volume uuid" 00:15:52.746 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume uuid", grpc_status:3, created_time:"2024-10-09T00:23:23.318473973+02:00"}" 00:15:52.746 > 00:15:52.746 00:23:23 sma.sma_vhost -- common/autotest_common.sh@653 -- # es=1 00:15:52.746 00:23:23 sma.sma_vhost -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:52.746 00:23:23 sma.sma_vhost -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:52.746 00:23:23 sma.sma_vhost -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:52.746 00:23:23 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # diff /dev/fd/62 /dev/fd/61 00:15:52.746 00:23:23 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # rpc_cmd bdev_get_bdevs -b 0121bf5b-65b6-4e07-ba6c-c4919e470530 00:15:52.746 00:23:23 sma.sma_vhost -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.746 00:23:23 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:52.746 00:23:23 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # jq --sort-keys '.[].assigned_rate_limits' 00:15:52.746 00:23:23 sma.sma_vhost -- sma/vhost_blk.sh@333 -- # jq --sort-keys 00:15:52.746 00:23:23 sma.sma_vhost -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.002 00:23:23 sma.sma_vhost -- sma/vhost_blk.sh@344 -- # delete_device virtio_blk:sma-0 00:15:53.002 00:23:23 sma.sma_vhost -- sma/vhost_blk.sh@37 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:53.002 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:53.002 I0000 00:00:1728426203.563097 2122247 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:53.002 I0000 00:00:1728426203.564696 2122247 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:53.002 I0000 00:00:1728426203.566350 2122260 subchannel.cc:806] subchannel 0x5635bd9ad220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5635bd8be670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5635bd94ccc0,
grpc.internal.client_channel_call_destination=0x7efd36fa3390, grpc.internal.event_engine=0x5635bd85c190, grpc.internal.security_connector=0x5635bd8666e0, grpc.internal.subchannel_pool=0x5635bd9d4cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5635bd7a35c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:23.565342689+02:00"}), backing off for 1000 ms 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_STATUS 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) new device status(0x00000000): 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) -RESET: 1 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) -ACKNOWLEDGE: 0 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER: 0 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) -FEATURES_OK: 0 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) -DRIVER_OK: 0 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) -DEVICE_NEED_RESET: 0 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) -FAILED: 0 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 0 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_SET_VRING_ENABLE 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) set queue enable: 0 to qp idx: 1 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:0 file:4 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) read message VHOST_USER_GET_VRING_BASE 00:15:53.002 VHOST_CONFIG: (/var/tmp/sma-0) vring base idx:1 file:46 00:15:53.260 VHOST_CONFIG: (/var/tmp/sma-0) vhost peer closed 00:15:53.260 {} 00:15:53.260 00:23:23 sma.sma_vhost -- sma/vhost_blk.sh@346 -- # cleanup 00:15:53.260 00:23:23 sma.sma_vhost -- sma/vhost_blk.sh@14 -- # killprocess 2118897 00:15:53.260 00:23:23 sma.sma_vhost -- common/autotest_common.sh@950 -- # '[' -z 2118897 ']' 00:15:53.260 00:23:23 sma.sma_vhost -- common/autotest_common.sh@954 -- # kill -0 2118897 00:15:53.260 00:23:23 sma.sma_vhost -- common/autotest_common.sh@955 -- # uname 00:15:53.260 00:23:23 sma.sma_vhost -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.260 00:23:23 sma.sma_vhost -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2118897 00:15:53.518 00:23:23 sma.sma_vhost -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:53.518 00:23:23 sma.sma_vhost -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:53.518 00:23:23 sma.sma_vhost -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2118897' 00:15:53.518 killing process with pid 2118897 00:15:53.518 00:23:23 sma.sma_vhost -- common/autotest_common.sh@969 -- # kill 2118897 00:15:53.518 00:23:23 sma.sma_vhost -- common/autotest_common.sh@974 -- # wait 2118897 00:15:54.462 00:23:25 sma.sma_vhost -- sma/vhost_blk.sh@15 -- # killprocess 2119126 00:15:54.463 00:23:25 sma.sma_vhost -- common/autotest_common.sh@950 -- # '[' -z 2119126 ']' 00:15:54.463 00:23:25 sma.sma_vhost -- common/autotest_common.sh@954 -- # kill -0 2119126 00:15:54.463 00:23:25 sma.sma_vhost -- common/autotest_common.sh@955 -- # uname 00:15:54.463 00:23:25 sma.sma_vhost -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.463 00:23:25 sma.sma_vhost -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2119126 00:15:54.463 00:23:25 sma.sma_vhost -- common/autotest_common.sh@956 -- # 
process_name=python3 00:15:54.463 00:23:25 sma.sma_vhost -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:15:54.463 00:23:25 sma.sma_vhost -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2119126' 00:15:54.463 killing process with pid 2119126 00:15:54.463 00:23:25 sma.sma_vhost -- common/autotest_common.sh@969 -- # kill 2119126 00:15:54.463 00:23:25 sma.sma_vhost -- common/autotest_common.sh@974 -- # wait 2119126 00:15:54.721 00:23:25 sma.sma_vhost -- sma/vhost_blk.sh@16 -- # vm_kill_all 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@469 -- # local vm 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@470 -- # vm_list_all 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@459 -- # vms=() 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@459 -- # local vms 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@460 -- # vms=("$VM_DIR"/+([0-9])) 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@461 -- # (( 1 > 0 )) 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@462 -- # basename --multiple /root/vhost_test/vms/0 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@470 -- # for vm in $(vm_list_all) 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@471 -- # vm_kill 0 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@435 -- # vm_num_is_valid 0 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@302 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@302 -- # return 0 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@436 -- # local vm_dir=/root/vhost_test/vms/0 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@438 -- # [[ ! -r /root/vhost_test/vms/0/qemu.pid ]] 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@442 -- # local vm_pid 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@443 -- # cat /root/vhost_test/vms/0/qemu.pid 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@443 -- # vm_pid=2115106 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@445 -- # notice 'Killing virtual machine /root/vhost_test/vms/0 (pid=2115106)' 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'Killing virtual machine /root/vhost_test/vms/0 (pid=2115106)' 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@61 -- # false 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out= 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@70 -- # shift 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@71 -- # echo -e 'INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=2115106)' 00:15:54.721 INFO: Killing virtual machine /root/vhost_test/vms/0 (pid=2115106) 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@447 -- # /bin/kill 2115106 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@448 -- # notice 'process 2115106 killed' 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@94 -- # message INFO 'process 2115106 killed' 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@60 -- # local verbose_out 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@61 -- # false 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@62 -- # verbose_out= 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@69 -- # local msg_type=INFO 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@70 -- # shift 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@71 -- # echo 
-e 'INFO: process 2115106 killed' 00:15:54.721 INFO: process 2115106 killed 00:15:54.721 00:23:25 sma.sma_vhost -- vhost/common.sh@449 -- # rm -rf /root/vhost_test/vms/0 00:15:54.722 00:23:25 sma.sma_vhost -- vhost/common.sh@474 -- # rm -rf /root/vhost_test/vms 00:15:54.722 00:23:25 sma.sma_vhost -- sma/vhost_blk.sh@347 -- # trap - SIGINT SIGTERM EXIT 00:15:54.722 00:15:54.722 real 0m42.348s 00:15:54.722 user 0m42.545s 00:15:54.722 sys 0m2.471s 00:15:54.722 00:23:25 sma.sma_vhost -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.722 00:23:25 sma.sma_vhost -- common/autotest_common.sh@10 -- # set +x 00:15:54.722 ************************************ 00:15:54.722 END TEST sma_vhost 00:15:54.722 ************************************ 00:15:54.722 00:23:25 sma -- sma/sma.sh@16 -- # run_test sma_crypto /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/crypto.sh 00:15:54.722 00:23:25 sma -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:54.722 00:23:25 sma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.722 00:23:25 sma -- common/autotest_common.sh@10 -- # set +x 00:15:54.722 ************************************ 00:15:54.722 START TEST sma_crypto 00:15:54.722 ************************************ 00:15:54.722 00:23:25 sma.sma_crypto -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/crypto.sh 00:15:54.722 * Looking for test storage... 00:15:54.722 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma 00:15:54.722 00:23:25 sma.sma_crypto -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:54.722 00:23:25 sma.sma_crypto -- common/autotest_common.sh@1681 -- # lcov --version 00:15:54.722 00:23:25 sma.sma_crypto -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:54.722 00:23:25 sma.sma_crypto -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@344 -- # case "$op" in 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@345 -- # : 1 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@365 -- # decimal 1 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@353 -- # local d=1 00:15:54.722 00:23:25 sma.sma_crypto -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.981 00:23:25 sma.sma_crypto -- scripts/common.sh@355 -- # echo 1 00:15:54.981 00:23:25 sma.sma_crypto -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.981 00:23:25 sma.sma_crypto -- scripts/common.sh@366 -- # decimal 2 00:15:54.981 00:23:25 sma.sma_crypto -- scripts/common.sh@353 -- # local d=2 00:15:54.981 00:23:25 sma.sma_crypto -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.981 00:23:25 sma.sma_crypto -- scripts/common.sh@355 -- # echo 2 00:15:54.981 00:23:25 sma.sma_crypto -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.981 00:23:25 sma.sma_crypto -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.981 00:23:25 sma.sma_crypto -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.981 00:23:25 sma.sma_crypto -- scripts/common.sh@368 -- # return 0 00:15:54.981 00:23:25 sma.sma_crypto -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.981 00:23:25 sma.sma_crypto -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:54.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.981 --rc genhtml_branch_coverage=1 00:15:54.981 --rc genhtml_function_coverage=1 00:15:54.981 --rc genhtml_legend=1 00:15:54.981 --rc geninfo_all_blocks=1 00:15:54.981 --rc geninfo_unexecuted_blocks=1 00:15:54.981 00:15:54.981 ' 00:15:54.981 00:23:25 sma.sma_crypto -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:54.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.981 --rc genhtml_branch_coverage=1 00:15:54.981 --rc genhtml_function_coverage=1 00:15:54.981 --rc genhtml_legend=1 00:15:54.981 --rc geninfo_all_blocks=1 00:15:54.981 --rc geninfo_unexecuted_blocks=1 00:15:54.981 00:15:54.981 ' 00:15:54.981 00:23:25 sma.sma_crypto -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:54.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.981 --rc genhtml_branch_coverage=1 00:15:54.981 --rc genhtml_function_coverage=1 00:15:54.981 --rc genhtml_legend=1 00:15:54.981 --rc geninfo_all_blocks=1 00:15:54.981 --rc geninfo_unexecuted_blocks=1 00:15:54.981 00:15:54.981 ' 00:15:54.981 00:23:25 sma.sma_crypto -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:54.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.981 --rc genhtml_branch_coverage=1 00:15:54.981 --rc genhtml_function_coverage=1 00:15:54.981 --rc genhtml_legend=1 00:15:54.981 --rc geninfo_all_blocks=1 00:15:54.981 --rc geninfo_unexecuted_blocks=1 00:15:54.981 00:15:54.981 ' 00:15:54.981 00:23:25 sma.sma_crypto -- sma/crypto.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh 00:15:54.981 00:23:25 sma.sma_crypto -- sma/crypto.sh@13 -- # rpc_py=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py 00:15:54.981 00:23:25 sma.sma_crypto -- sma/crypto.sh@14 -- # localnqn=nqn.2016-06.io.spdk:cnode0 00:15:54.981 00:23:25 sma.sma_crypto -- sma/crypto.sh@15 -- # tgtnqn=nqn.2016-06.io.spdk:tgt0 00:15:54.981 00:23:25 sma.sma_crypto -- sma/crypto.sh@16 -- # key0=1234567890abcdef1234567890abcdef 00:15:54.981 00:23:25 sma.sma_crypto -- sma/crypto.sh@17 -- # key1=deadbeefcafebabefeedbeefbabecafe 00:15:54.981 00:23:25 sma.sma_crypto -- sma/crypto.sh@18 -- # 
tgtsock=/var/tmp/spdk.sock2 00:15:54.981 00:23:25 sma.sma_crypto -- sma/crypto.sh@19 -- # discovery_port=8009 00:15:54.981 00:23:25 sma.sma_crypto -- sma/crypto.sh@145 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:54.981 00:23:25 sma.sma_crypto -- sma/crypto.sh@148 -- # hostpid=2122779 00:15:54.981 00:23:25 sma.sma_crypto -- sma/crypto.sh@150 -- # waitforlisten 2122779 00:15:54.981 00:23:25 sma.sma_crypto -- common/autotest_common.sh@831 -- # '[' -z 2122779 ']' 00:15:54.981 00:23:25 sma.sma_crypto -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.981 00:23:25 sma.sma_crypto -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.981 00:23:25 sma.sma_crypto -- sma/crypto.sh@147 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --wait-for-rpc 00:15:54.981 00:23:25 sma.sma_crypto -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.981 00:23:25 sma.sma_crypto -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.981 00:23:25 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:15:54.981 [2024-10-09 00:23:25.457951] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:15:54.981 [2024-10-09 00:23:25.458040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122779 ] 00:15:54.981 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.981 [2024-10-09 00:23:25.563256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.246 [2024-10-09 00:23:25.754558] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.817 00:23:26 sma.sma_crypto -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.817 00:23:26 sma.sma_crypto -- common/autotest_common.sh@864 -- # return 0 00:15:55.817 00:23:26 sma.sma_crypto -- sma/crypto.sh@153 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py dpdk_cryptodev_scan_accel_module 00:15:55.817 00:23:26 sma.sma_crypto -- sma/crypto.sh@154 -- # rpc_cmd dpdk_cryptodev_set_driver -d crypto_aesni_mb 00:15:55.817 00:23:26 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.817 00:23:26 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:15:55.817 [2024-10-09 00:23:26.400752] accel_dpdk_cryptodev.c: 224:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:15:55.817 00:23:26 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.817 00:23:26 sma.sma_crypto -- sma/crypto.sh@155 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o encrypt -m dpdk_cryptodev 00:15:56.075 [2024-10-09 00:23:26.581234] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:15:56.075 00:23:26 sma.sma_crypto -- sma/crypto.sh@156 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py accel_assign_opc -o decrypt -m dpdk_cryptodev 00:15:56.333 [2024-10-09 00:23:26.773747] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:15:56.333 00:23:26 sma.sma_crypto -- sma/crypto.sh@157 -- # 
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:15:56.591 [2024-10-09 00:23:27.192510] accel_dpdk_cryptodev.c:1179:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 1 00:15:57.525 00:23:27 sma.sma_crypto -- sma/crypto.sh@160 -- # tgtpid=2123105 00:15:57.525 00:23:27 sma.sma_crypto -- sma/crypto.sh@172 -- # smapid=2123107 00:15:57.525 00:23:27 sma.sma_crypto -- sma/crypto.sh@175 -- # sma_waitforlisten 00:15:57.525 00:23:27 sma.sma_crypto -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:15:57.525 00:23:27 sma.sma_crypto -- sma/common.sh@8 -- # local sma_port=8080 00:15:57.525 00:23:27 sma.sma_crypto -- sma/crypto.sh@159 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/spdk.sock2 -m 0x2 00:15:57.525 00:23:27 sma.sma_crypto -- sma/common.sh@10 -- # (( i = 0 )) 00:15:57.525 00:23:27 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 )) 00:15:57.525 00:23:27 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:15:57.525 00:23:27 sma.sma_crypto -- sma/crypto.sh@162 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:15:57.525 00:23:27 sma.sma_crypto -- sma/crypto.sh@162 -- # cat 00:15:57.525 00:23:27 sma.sma_crypto -- sma/common.sh@14 -- # sleep 1s 00:15:57.525 [2024-10-09 00:23:27.888882] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:15:57.525 [2024-10-09 00:23:27.888986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2123105 ] 00:15:57.525 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.525 [2024-10-09 00:23:27.991220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.525 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:57.525 I0000 00:00:1728426208.005208 2123107 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:57.525 [2024-10-09 00:23:28.017718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.783 [2024-10-09 00:23:28.197738] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.351 00:23:28 sma.sma_crypto -- sma/common.sh@10 -- # (( i++ )) 00:15:58.351 00:23:28 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 )) 00:15:58.351 00:23:28 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:15:58.351 00:23:28 sma.sma_crypto -- sma/common.sh@12 -- # return 0 00:15:58.351 00:23:28 sma.sma_crypto -- sma/crypto.sh@178 -- # uuidgen 00:15:58.351 00:23:28 sma.sma_crypto -- sma/crypto.sh@178 -- # uuid=77453fe9-4663-4691-8a6f-a9e33a0a7519 00:15:58.351 00:23:28 sma.sma_crypto -- sma/crypto.sh@179 -- # waitforlisten 2123105 /var/tmp/spdk.sock2 00:15:58.351 00:23:28 sma.sma_crypto -- common/autotest_common.sh@831 -- # '[' -z 2123105 ']' 00:15:58.351 00:23:28 sma.sma_crypto -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock2 00:15:58.351 00:23:28 sma.sma_crypto -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.351 00:23:28 sma.sma_crypto -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2...' 
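The uuid generated just above, 77453fe9-4663-4691-8a6f-a9e33a0a7519, is the volume identity the rest of the sma_crypto run keeps re-encoding: the "volume_id" values and the nguid fields printed further down are derived from it rather than being independent identifiers. A minimal Python sketch of the two conversions, assuming the harness's uuid2base64 and uuid2nguid helpers (sma/common.sh) do nothing more than re-encode the raw UUID bytes:

import base64
import uuid

vol = uuid.UUID("77453fe9-4663-4691-8a6f-a9e33a0a7519")

# uuid2base64 equivalent: base64 over the 16 raw UUID bytes -> the SMA volume_id
volume_id = base64.b64encode(vol.bytes).decode()
assert volume_id == "d0U/6UZjRpGKb6njOgp1GQ=="

# uuid2nguid equivalent: dashes dropped, hex upper-cased -> the NVMe namespace NGUID
nguid = vol.hex.upper()
assert nguid == "77453FE9466346918A6FA9E33A0A7519"

Both assertions match the values that appear later in this log, which is why the test's [[ ... == ... ]] comparisons on the uuid and nguid can pass byte-for-byte.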
00:15:58.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock2... 00:15:58.351 00:23:28 sma.sma_crypto -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.351 00:23:28 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:15:58.608 00:23:29 sma.sma_crypto -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.608 00:23:29 sma.sma_crypto -- common/autotest_common.sh@864 -- # return 0 00:15:58.608 00:23:29 sma.sma_crypto -- sma/crypto.sh@180 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock2 00:15:58.866 [2024-10-09 00:23:29.290238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.866 [2024-10-09 00:23:29.306566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 8009 *** 00:15:58.866 [2024-10-09 00:23:29.314399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:15:58.866 malloc0 00:15:58.866 00:23:29 sma.sma_crypto -- sma/crypto.sh@190 -- # create_device 00:15:58.866 00:23:29 sma.sma_crypto -- sma/crypto.sh@190 -- # jq -r .handle 00:15:58.866 00:23:29 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:59.123 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:59.123 I0000 00:00:1728426209.528431 2123456 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:59.123 I0000 00:00:1728426209.529948 2123456 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:59.123 I0000 00:00:1728426209.531668 2123499 subchannel.cc:806] subchannel 0x560760958220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560760869670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5607608f7cc0, grpc.internal.client_channel_call_destination=0x7ff352254390, grpc.internal.event_engine=0x560760885360, grpc.internal.security_connector=0x5607608116e0, grpc.internal.subchannel_pool=0x56076097fcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56076074e5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:29.530652981+02:00"}), backing off for 1000 ms 00:15:59.123 [2024-10-09 00:23:29.554364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@190 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@193 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@106 -- # shift 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=77453fe9-4663-4691-8a6f-a9e33a0a7519 cipher= key= key2= config 00:15:59.123 
00:23:29 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@47 -- # cat 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:15:59.123 00:23:29 sma.sma_crypto -- sma/common.sh@20 -- # python 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "d0U/6UZjRpGKb6njOgp1GQ==", 00:15:59.123 "nvmf": { 00:15:59.123 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:59.123 "discovery": { 00:15:59.123 "discovery_endpoints": [ 00:15:59.123 { 00:15:59.123 "trtype": "tcp", 00:15:59.123 "traddr": "127.0.0.1", 00:15:59.123 "trsvcid": "8009" 00:15:59.123 } 00:15:59.123 ] 00:15:59.123 } 00:15:59.123 }' 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config") 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=, 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n '' ]] 00:15:59.123 00:23:29 sma.sma_crypto -- sma/crypto.sh@69 -- # cat 00:15:59.381 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:15:59.381 I0000 00:00:1728426209.819098 2123520 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:15:59.381 I0000 00:00:1728426209.820573 2123520 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:15:59.381 I0000 00:00:1728426209.822334 2123537 subchannel.cc:806] subchannel 0x559f27fdd220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x559f27eee670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x559f27f7ccc0, grpc.internal.client_channel_call_destination=0x7f126afe0390, grpc.internal.event_engine=0x559f27dca990, grpc.internal.security_connector=0x559f28062f30, grpc.internal.subchannel_pool=0x559f28062d90, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x559f27f40c40, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:29.821319261+02:00"}), backing off for 1000 ms 00:16:00.754 {} 00:16:00.754 00:23:30 sma.sma_crypto -- sma/crypto.sh@195 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:16:00.754 00:23:30 sma.sma_crypto -- sma/crypto.sh@195 -- # jq -r '.[0].namespaces[0].name' 00:16:00.754 00:23:30 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.754 00:23:30 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:00.754 00:23:31 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@195 -- # ns_bdev=aa6f6222-d64b-4a6f-a92c-4a82ce4802990n1 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@196 -- # rpc_cmd bdev_get_bdevs -b aa6f6222-d64b-4a6f-a92c-4a82ce4802990n1 00:16:00.754 00:23:31 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.754 00:23:31 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@196 -- # jq -r '.[0].product_name' 00:16:00.754 00:23:31 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@196 -- # [[ NVMe disk == \N\V\M\e\ \d\i\s\k ]] 00:16:00.754 00:23:31 sma.sma_crypto -- 
sma/crypto.sh@197 -- # rpc_cmd bdev_get_bdevs 00:16:00.754 00:23:31 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.754 00:23:31 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@197 -- # jq -r '[.[] | select(.product_name == "crypto")] | length' 00:16:00.754 00:23:31 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@197 -- # [[ 0 -eq 0 ]] 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@198 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:16:00.754 00:23:31 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.754 00:23:31 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@198 -- # jq -r '.[0].namespaces[0].uuid' 00:16:00.754 00:23:31 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@198 -- # [[ 77453fe9-4663-4691-8a6f-a9e33a0a7519 == \7\7\4\5\3\f\e\9\-\4\6\6\3\-\4\6\9\1\-\8\a\6\f\-\a\9\e\3\3\a\0\a\7\5\1\9 ]] 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@199 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@199 -- # jq -r '.[0].namespaces[0].nguid' 00:16:00.754 00:23:31 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.754 00:23:31 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:00.754 00:23:31 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@199 -- # uuid2nguid 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:00.754 00:23:31 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=77453FE9-4663-4691-8A6F-A9E33A0A7519 00:16:00.754 00:23:31 sma.sma_crypto -- sma/common.sh@41 -- # echo 77453FE9466346918A6FA9E33A0A7519 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@199 -- # [[ 77453FE9466346918A6FA9E33A0A7519 == \7\7\4\5\3\F\E\9\4\6\6\3\4\6\9\1\8\A\6\F\A\9\E\3\3\A\0\A\7\5\1\9 ]] 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@201 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:00.754 00:23:31 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:00.754 00:23:31 sma.sma_crypto -- sma/common.sh@20 -- # python 00:16:01.012 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:01.012 I0000 00:00:1728426211.411487 2123792 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:01.012 I0000 00:00:1728426211.413036 2123792 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:01.012 I0000 00:00:1728426211.414732 2123797 subchannel.cc:806] subchannel 0x560891324220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x560891235670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5608912c3cc0, grpc.internal.client_channel_call_destination=0x7f3cb412b390, grpc.internal.event_engine=0x560891251360, grpc.internal.security_connector=0x5608911dd6e0, 
grpc.internal.subchannel_pool=0x56089134bcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56089111a5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:31.413716464+02:00"}), backing off for 1000 ms 00:16:01.012 {} 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@204 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 1234567890abcdef1234567890abcdef 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@106 -- # shift 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 1234567890abcdef1234567890abcdef 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=77453fe9-4663-4691-8a6f-a9e33a0a7519 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@47 -- # cat 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:01.012 00:23:31 sma.sma_crypto -- sma/common.sh@20 -- # python 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "d0U/6UZjRpGKb6njOgp1GQ==", 00:16:01.012 "nvmf": { 00:16:01.012 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:01.012 "discovery": { 00:16:01.012 "discovery_endpoints": [ 00:16:01.012 { 00:16:01.012 "trtype": "tcp", 00:16:01.012 "traddr": "127.0.0.1", 00:16:01.012 "trsvcid": "8009" 00:16:01.012 } 00:16:01.012 ] 00:16:01.012 } 00:16:01.012 }' 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config") 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=, 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]] 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)") 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC 00:16:01.012 00:23:31 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in 00:16:01.012 00:23:31 sma.sma_crypto -- sma/common.sh@28 -- # echo 0 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"") 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef 00:16:01.012 00:23:31 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62 00:16:01.012 00:23:31 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]] 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@64 -- # cat 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": { 00:16:01.012 "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=" 00:16:01.012 }' 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config") 00:16:01.012 00:23:31 sma.sma_crypto -- sma/crypto.sh@69 -- # cat 00:16:01.284 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:01.285 I0000 00:00:1728426211.759290 
2123821 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:01.285 I0000 00:00:1728426211.760769 2123821 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:01.285 I0000 00:00:1728426211.762596 2123837 subchannel.cc:806] subchannel 0x56183dc09220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x56183db1a670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x56183dba8cc0, grpc.internal.client_channel_call_destination=0x7f75a842a390, grpc.internal.event_engine=0x56183dab8190, grpc.internal.security_connector=0x56183dac26e0, grpc.internal.subchannel_pool=0x56183dc30cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x56183d9ff5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:31.761572113+02:00"}), backing off for 1000 ms 00:16:02.662 {} 00:16:02.662 00:23:32 sma.sma_crypto -- sma/crypto.sh@206 -- # rpc_cmd bdev_nvme_get_discovery_info 00:16:02.662 00:23:32 sma.sma_crypto -- sma/crypto.sh@206 -- # jq -r '. | length' 00:16:02.662 00:23:32 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.662 00:23:32 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:02.662 00:23:32 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.662 00:23:32 sma.sma_crypto -- sma/crypto.sh@206 -- # [[ 1 -eq 1 ]] 00:16:02.662 00:23:32 sma.sma_crypto -- sma/crypto.sh@207 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:16:02.662 00:23:32 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.662 00:23:32 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:02.662 00:23:32 sma.sma_crypto -- sma/crypto.sh@207 -- # jq -r '.[0].namespaces | length' 00:16:02.662 00:23:32 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@207 -- # [[ 1 -eq 1 ]] 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@209 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=77453fe9-4663-4691-8a6f-a9e33a0a7519 ns ns_bdev 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:16:02.662 00:23:33 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.662 00:23:33 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]' 00:16:02.662 00:23:33 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{ 00:16:02.662 "nsid": 1, 00:16:02.662 "bdev_name": "c3aadc06-d161-41ac-b41d-797a771237ad", 00:16:02.662 "name": "c3aadc06-d161-41ac-b41d-797a771237ad", 00:16:02.662 "nguid": "77453FE9466346918A6FA9E33A0A7519", 00:16:02.662 "uuid": "77453fe9-4663-4691-8a6f-a9e33a0a7519" 00:16:02.662 }' 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=c3aadc06-d161-41ac-b41d-797a771237ad 00:16:02.662 00:23:33 
sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b c3aadc06-d161-41ac-b41d-797a771237ad 00:16:02.662 00:23:33 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.662 00:23:33 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name' 00:16:02.662 00:23:33 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]] 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length' 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs 00:16:02.662 00:23:33 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.662 00:23:33 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:02.662 00:23:33 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]] 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 77453fe9-4663-4691-8a6f-a9e33a0a7519 == \7\7\4\5\3\f\e\9\-\4\6\6\3\-\4\6\9\1\-\8\a\6\f\-\a\9\e\3\3\a\0\a\7\5\1\9 ]] 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:02.662 00:23:33 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=77453FE9-4663-4691-8A6F-A9E33A0A7519 00:16:02.662 00:23:33 sma.sma_crypto -- sma/common.sh@41 -- # echo 77453FE9466346918A6FA9E33A0A7519 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 77453FE9466346918A6FA9E33A0A7519 == \7\7\4\5\3\F\E\9\4\6\6\3\4\6\9\1\8\A\6\F\A\9\E\3\3\A\0\A\7\5\1\9 ]] 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@211 -- # rpc_cmd bdev_get_bdevs 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@211 -- # jq -r '.[] | select(.product_name == "crypto")' 00:16:02.662 00:23:33 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.662 00:23:33 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:02.662 00:23:33 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.662 00:23:33 sma.sma_crypto -- sma/crypto.sh@211 -- # crypto_bdev='{ 00:16:02.662 "name": "c3aadc06-d161-41ac-b41d-797a771237ad", 00:16:02.662 "aliases": [ 00:16:02.662 "68cbf8b6-b517-57f2-8b8f-04eadc5f50c3" 00:16:02.662 ], 00:16:02.662 "product_name": "crypto", 00:16:02.662 "block_size": 4096, 00:16:02.662 "num_blocks": 8192, 00:16:02.662 "uuid": "68cbf8b6-b517-57f2-8b8f-04eadc5f50c3", 00:16:02.662 "assigned_rate_limits": { 00:16:02.662 "rw_ios_per_sec": 0, 00:16:02.662 "rw_mbytes_per_sec": 0, 00:16:02.662 "r_mbytes_per_sec": 0, 00:16:02.662 "w_mbytes_per_sec": 0 00:16:02.662 }, 00:16:02.662 "claimed": true, 00:16:02.662 "claim_type": "exclusive_write", 00:16:02.662 "zoned": false, 00:16:02.662 "supported_io_types": { 00:16:02.662 "read": true, 00:16:02.662 "write": true, 00:16:02.662 "unmap": true, 00:16:02.662 "flush": true, 00:16:02.662 "reset": true, 00:16:02.662 "nvme_admin": false, 00:16:02.662 "nvme_io": false, 00:16:02.662 "nvme_io_md": false, 00:16:02.662 "write_zeroes": true, 00:16:02.662 "zcopy": false, 00:16:02.662 "get_zone_info": false, 00:16:02.662 "zone_management": false, 00:16:02.662 "zone_append": false, 00:16:02.662 "compare": false, 
00:16:02.662 "compare_and_write": false, 00:16:02.662 "abort": false, 00:16:02.662 "seek_hole": false, 00:16:02.662 "seek_data": false, 00:16:02.662 "copy": false, 00:16:02.663 "nvme_iov_md": false 00:16:02.663 }, 00:16:02.663 "memory_domains": [ 00:16:02.663 { 00:16:02.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.663 "dma_device_type": 2 00:16:02.663 } 00:16:02.663 ], 00:16:02.663 "driver_specific": { 00:16:02.663 "crypto": { 00:16:02.663 "base_bdev_name": "e31195e0-753a-4693-b5b5-9132a3ba5ab40n1", 00:16:02.663 "name": "c3aadc06-d161-41ac-b41d-797a771237ad", 00:16:02.663 "key_name": "c3aadc06-d161-41ac-b41d-797a771237ad_AES_CBC" 00:16:02.663 } 00:16:02.663 } 00:16:02.663 }' 00:16:02.663 00:23:33 sma.sma_crypto -- sma/crypto.sh@212 -- # jq -r .driver_specific.crypto.key_name 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@212 -- # key_name=c3aadc06-d161-41ac-b41d-797a771237ad_AES_CBC 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@213 -- # rpc_cmd accel_crypto_keys_get -k c3aadc06-d161-41ac-b41d-797a771237ad_AES_CBC 00:16:02.920 00:23:33 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.920 00:23:33 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:02.920 00:23:33 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@213 -- # key_obj='[ 00:16:02.920 { 00:16:02.920 "name": "c3aadc06-d161-41ac-b41d-797a771237ad_AES_CBC", 00:16:02.920 "cipher": "AES_CBC", 00:16:02.920 "key": "1234567890abcdef1234567890abcdef" 00:16:02.920 } 00:16:02.920 ]' 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@214 -- # jq -r '.[0].key' 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@214 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]] 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@215 -- # jq -r '.[0].cipher' 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@215 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]] 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@218 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 1234567890abcdef1234567890abcdef 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@106 -- # shift 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 1234567890abcdef1234567890abcdef 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=77453fe9-4663-4691-8a6f-a9e33a0a7519 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@47 -- # cat 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:02.920 00:23:33 sma.sma_crypto -- sma/common.sh@20 -- # python 00:16:02.920 00:23:33 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "d0U/6UZjRpGKb6njOgp1GQ==", 00:16:02.920 "nvmf": { 00:16:02.920 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:02.920 "discovery": { 00:16:02.920 "discovery_endpoints": [ 00:16:02.920 { 00:16:02.920 "trtype": "tcp", 00:16:02.920 "traddr": "127.0.0.1", 00:16:02.920 "trsvcid": 
"8009" 00:16:02.921 } 00:16:02.921 ] 00:16:02.921 } 00:16:02.921 }' 00:16:02.921 00:23:33 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config") 00:16:02.921 00:23:33 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=, 00:16:02.921 00:23:33 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]] 00:16:02.921 00:23:33 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)") 00:16:02.921 00:23:33 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC 00:16:02.921 00:23:33 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in 00:16:02.921 00:23:33 sma.sma_crypto -- sma/common.sh@28 -- # echo 0 00:16:02.921 00:23:33 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"") 00:16:02.921 00:23:33 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef 00:16:02.921 00:23:33 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62 00:16:02.921 00:23:33 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef 00:16:02.921 00:23:33 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]] 00:16:02.921 00:23:33 sma.sma_crypto -- sma/crypto.sh@64 -- # cat 00:16:02.921 00:23:33 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": { 00:16:02.921 "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=" 00:16:02.921 }' 00:16:02.921 00:23:33 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config") 00:16:02.921 00:23:33 sma.sma_crypto -- sma/crypto.sh@69 -- # cat 00:16:03.181 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:03.181 I0000 00:00:1728426213.673845 2124241 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:03.181 I0000 00:00:1728426213.675198 2124241 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:03.182 I0000 00:00:1728426213.676994 2124339 subchannel.cc:806] subchannel 0x55fd3b572220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55fd3b483670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55fd3b511cc0, grpc.internal.client_channel_call_destination=0x7fdedee32390, grpc.internal.event_engine=0x55fd3b421190, grpc.internal.security_connector=0x55fd3b42b6e0, grpc.internal.subchannel_pool=0x55fd3b599cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55fd3b3685c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:33.675970914+02:00"}), backing off for 1000 ms 00:16:03.182 {} 00:16:03.182 00:23:33 sma.sma_crypto -- sma/crypto.sh@221 -- # rpc_cmd bdev_nvme_get_discovery_info 00:16:03.182 00:23:33 sma.sma_crypto -- sma/crypto.sh@221 -- # jq -r '. 
| length' 00:16:03.182 00:23:33 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.182 00:23:33 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:03.182 00:23:33 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.182 00:23:33 sma.sma_crypto -- sma/crypto.sh@221 -- # [[ 1 -eq 1 ]] 00:16:03.182 00:23:33 sma.sma_crypto -- sma/crypto.sh@222 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:16:03.182 00:23:33 sma.sma_crypto -- sma/crypto.sh@222 -- # jq -r '.[0].namespaces | length' 00:16:03.182 00:23:33 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.182 00:23:33 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:03.182 00:23:33 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.443 00:23:33 sma.sma_crypto -- sma/crypto.sh@222 -- # [[ 1 -eq 1 ]] 00:16:03.443 00:23:33 sma.sma_crypto -- sma/crypto.sh@223 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:03.443 00:23:33 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=77453fe9-4663-4691-8a6f-a9e33a0a7519 ns ns_bdev 00:16:03.443 00:23:33 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:16:03.443 00:23:33 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]' 00:16:03.443 00:23:33 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.443 00:23:33 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:03.443 00:23:33 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.443 00:23:33 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{ 00:16:03.443 "nsid": 1, 00:16:03.443 "bdev_name": "c3aadc06-d161-41ac-b41d-797a771237ad", 00:16:03.443 "name": "c3aadc06-d161-41ac-b41d-797a771237ad", 00:16:03.443 "nguid": "77453FE9466346918A6FA9E33A0A7519", 00:16:03.443 "uuid": "77453fe9-4663-4691-8a6f-a9e33a0a7519" 00:16:03.443 }' 00:16:03.443 00:23:33 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name 00:16:03.443 00:23:33 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=c3aadc06-d161-41ac-b41d-797a771237ad 00:16:03.443 00:23:33 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b c3aadc06-d161-41ac-b41d-797a771237ad 00:16:03.443 00:23:33 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name' 00:16:03.443 00:23:33 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.443 00:23:33 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:03.443 00:23:33 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.443 00:23:33 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]] 00:16:03.443 00:23:33 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length' 00:16:03.443 00:23:33 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs 00:16:03.443 00:23:33 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.443 00:23:33 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:03.443 00:23:33 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.444 00:23:33 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]] 00:16:03.444 00:23:33 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid 00:16:03.444 00:23:34 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 77453fe9-4663-4691-8a6f-a9e33a0a7519 == 
\7\7\4\5\3\f\e\9\-\4\6\6\3\-\4\6\9\1\-\8\a\6\f\-\a\9\e\3\3\a\0\a\7\5\1\9 ]] 00:16:03.444 00:23:34 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid 00:16:03.444 00:23:34 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:03.444 00:23:34 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=77453FE9-4663-4691-8A6F-A9E33A0A7519 00:16:03.444 00:23:34 sma.sma_crypto -- sma/common.sh@41 -- # echo 77453FE9466346918A6FA9E33A0A7519 00:16:03.444 00:23:34 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 77453FE9466346918A6FA9E33A0A7519 == \7\7\4\5\3\F\E\9\4\6\6\3\4\6\9\1\8\A\6\F\A\9\E\3\3\A\0\A\7\5\1\9 ]] 00:16:03.444 00:23:34 sma.sma_crypto -- sma/crypto.sh@224 -- # rpc_cmd bdev_get_bdevs 00:16:03.444 00:23:34 sma.sma_crypto -- sma/crypto.sh@224 -- # jq -r '.[] | select(.product_name == "crypto")' 00:16:03.444 00:23:34 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.444 00:23:34 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:03.701 00:23:34 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@224 -- # crypto_bdev2='{ 00:16:03.701 "name": "c3aadc06-d161-41ac-b41d-797a771237ad", 00:16:03.701 "aliases": [ 00:16:03.701 "68cbf8b6-b517-57f2-8b8f-04eadc5f50c3" 00:16:03.701 ], 00:16:03.701 "product_name": "crypto", 00:16:03.701 "block_size": 4096, 00:16:03.701 "num_blocks": 8192, 00:16:03.701 "uuid": "68cbf8b6-b517-57f2-8b8f-04eadc5f50c3", 00:16:03.701 "assigned_rate_limits": { 00:16:03.701 "rw_ios_per_sec": 0, 00:16:03.701 "rw_mbytes_per_sec": 0, 00:16:03.701 "r_mbytes_per_sec": 0, 00:16:03.701 "w_mbytes_per_sec": 0 00:16:03.701 }, 00:16:03.701 "claimed": true, 00:16:03.701 "claim_type": "exclusive_write", 00:16:03.701 "zoned": false, 00:16:03.701 "supported_io_types": { 00:16:03.701 "read": true, 00:16:03.701 "write": true, 00:16:03.701 "unmap": true, 00:16:03.701 "flush": true, 00:16:03.701 "reset": true, 00:16:03.701 "nvme_admin": false, 00:16:03.701 "nvme_io": false, 00:16:03.701 "nvme_io_md": false, 00:16:03.701 "write_zeroes": true, 00:16:03.701 "zcopy": false, 00:16:03.701 "get_zone_info": false, 00:16:03.701 "zone_management": false, 00:16:03.701 "zone_append": false, 00:16:03.701 "compare": false, 00:16:03.701 "compare_and_write": false, 00:16:03.701 "abort": false, 00:16:03.701 "seek_hole": false, 00:16:03.701 "seek_data": false, 00:16:03.701 "copy": false, 00:16:03.701 "nvme_iov_md": false 00:16:03.701 }, 00:16:03.701 "memory_domains": [ 00:16:03.701 { 00:16:03.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.701 "dma_device_type": 2 00:16:03.701 } 00:16:03.701 ], 00:16:03.701 "driver_specific": { 00:16:03.701 "crypto": { 00:16:03.701 "base_bdev_name": "e31195e0-753a-4693-b5b5-9132a3ba5ab40n1", 00:16:03.701 "name": "c3aadc06-d161-41ac-b41d-797a771237ad", 00:16:03.701 "key_name": "c3aadc06-d161-41ac-b41d-797a771237ad_AES_CBC" 00:16:03.701 } 00:16:03.701 } 00:16:03.701 }' 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@225 -- # jq -r .name 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@225 -- # jq -r .name 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@225 -- # [[ c3aadc06-d161-41ac-b41d-797a771237ad == c3aadc06-d161-41ac-b41d-797a771237ad ]] 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@226 -- # jq -r .driver_specific.crypto.key_name 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@226 -- # key_name=c3aadc06-d161-41ac-b41d-797a771237ad_AES_CBC 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@227 -- # 
rpc_cmd accel_crypto_keys_get -k c3aadc06-d161-41ac-b41d-797a771237ad_AES_CBC 00:16:03.701 00:23:34 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.701 00:23:34 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:03.701 00:23:34 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@227 -- # key_obj='[ 00:16:03.701 { 00:16:03.701 "name": "c3aadc06-d161-41ac-b41d-797a771237ad_AES_CBC", 00:16:03.701 "cipher": "AES_CBC", 00:16:03.701 "key": "1234567890abcdef1234567890abcdef" 00:16:03.701 } 00:16:03.701 ]' 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@228 -- # jq -r '.[0].key' 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@228 -- # [[ 1234567890abcdef1234567890abcdef == \1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f\1\2\3\4\5\6\7\8\9\0\a\b\c\d\e\f ]] 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@229 -- # jq -r '.[0].cipher' 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@229 -- # [[ AES_CBC == \A\E\S\_\C\B\C ]] 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@232 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_XTS 1234567890abcdef1234567890abcdef 00:16:03.701 00:23:34 sma.sma_crypto -- common/autotest_common.sh@650 -- # local es=0 00:16:03.701 00:23:34 sma.sma_crypto -- common/autotest_common.sh@652 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_XTS 1234567890abcdef1234567890abcdef 00:16:03.701 00:23:34 sma.sma_crypto -- common/autotest_common.sh@638 -- # local arg=attach_volume 00:16:03.701 00:23:34 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.701 00:23:34 sma.sma_crypto -- common/autotest_common.sh@642 -- # type -t attach_volume 00:16:03.701 00:23:34 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.701 00:23:34 sma.sma_crypto -- common/autotest_common.sh@653 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_XTS 1234567890abcdef1234567890abcdef 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@106 -- # shift 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_XTS 1234567890abcdef1234567890abcdef 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=77453fe9-4663-4691-8a6f-a9e33a0a7519 cipher=AES_XTS key=1234567890abcdef1234567890abcdef key2= config 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto 00:16:03.701 00:23:34 sma.sma_crypto -- sma/crypto.sh@47 -- # cat 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:03.959 00:23:34 sma.sma_crypto -- sma/common.sh@20 -- # python 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "d0U/6UZjRpGKb6njOgp1GQ==", 00:16:03.959 "nvmf": { 00:16:03.959 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:03.959 "discovery": { 00:16:03.959 "discovery_endpoints": [ 00:16:03.959 { 00:16:03.959 "trtype": "tcp", 00:16:03.959 "traddr": "127.0.0.1", 00:16:03.959 "trsvcid": "8009" 00:16:03.959 } 00:16:03.959 ] 00:16:03.959 } 
00:16:03.959 }' 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config") 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=, 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_XTS ]] 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)") 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_XTS 00:16:03.959 00:23:34 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in 00:16:03.959 00:23:34 sma.sma_crypto -- sma/common.sh@29 -- # echo 1 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"") 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef 00:16:03.959 00:23:34 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62 00:16:03.959 00:23:34 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]] 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@64 -- # cat 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": { 00:16:03.959 "cipher": 1,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=" 00:16:03.959 }' 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config") 00:16:03.959 00:23:34 sma.sma_crypto -- sma/crypto.sh@69 -- # cat 00:16:03.959 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:03.959 I0000 00:00:1728426214.571982 2124408 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:03.959 I0000 00:00:1728426214.573843 2124408 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:03.959 I0000 00:00:1728426214.576846 2124424 subchannel.cc:806] subchannel 0x558c970a2220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x558c96fb3670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558c97041cc0, grpc.internal.client_channel_call_destination=0x7fa7fb43a390, grpc.internal.event_engine=0x558c96f51190, grpc.internal.security_connector=0x558c96f5b6e0, grpc.internal.subchannel_pool=0x558c970c9cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x558c96e985c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:34.575786054+02:00"}), backing off for 1000 ms 00:16:04.217 Traceback (most recent call last): 00:16:04.217 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in 00:16:04.217 main(sys.argv[1:]) 00:16:04.217 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:16:04.217 result = client.call(request['method'], request.get('params', {})) 00:16:04.217 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.217 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:16:04.217 response = func(request=json_format.ParseDict(params, input())) 00:16:04.217 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.217 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 
00:16:04.217 return _end_unary_response_blocking(state, call, False, None) 00:16:04.217 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.217 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:04.217 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:04.217 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.217 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:04.217 status = StatusCode.INVALID_ARGUMENT 00:16:04.217 details = "Invalid volume crypto configuration: bad cipher" 00:16:04.217 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad cipher", grpc_status:3, created_time:"2024-10-09T00:23:34.59250264+02:00"}" 00:16:04.217 > 00:16:04.217 00:23:34 sma.sma_crypto -- common/autotest_common.sh@653 -- # es=1 00:16:04.217 00:23:34 sma.sma_crypto -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:04.217 00:23:34 sma.sma_crypto -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:04.217 00:23:34 sma.sma_crypto -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@234 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC deadbeefcafebabefeedbeefbabecafe 00:16:04.217 00:23:34 sma.sma_crypto -- common/autotest_common.sh@650 -- # local es=0 00:16:04.217 00:23:34 sma.sma_crypto -- common/autotest_common.sh@652 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC deadbeefcafebabefeedbeefbabecafe 00:16:04.217 00:23:34 sma.sma_crypto -- common/autotest_common.sh@638 -- # local arg=attach_volume 00:16:04.217 00:23:34 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.217 00:23:34 sma.sma_crypto -- common/autotest_common.sh@642 -- # type -t attach_volume 00:16:04.217 00:23:34 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.217 00:23:34 sma.sma_crypto -- common/autotest_common.sh@653 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC deadbeefcafebabefeedbeefbabecafe 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@106 -- # shift 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC deadbeefcafebabefeedbeefbabecafe 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=77453fe9-4663-4691-8a6f-a9e33a0a7519 cipher=AES_CBC key=deadbeefcafebabefeedbeefbabecafe key2= config 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@47 -- # cat 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:04.217 00:23:34 sma.sma_crypto -- sma/common.sh@20 -- # python 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "d0U/6UZjRpGKb6njOgp1GQ==", 00:16:04.217 "nvmf": { 00:16:04.217 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:04.217 "discovery": { 00:16:04.217 "discovery_endpoints": [ 00:16:04.217 { 
00:16:04.217 "trtype": "tcp", 00:16:04.217 "traddr": "127.0.0.1", 00:16:04.217 "trsvcid": "8009" 00:16:04.217 } 00:16:04.217 ] 00:16:04.217 } 00:16:04.217 }' 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config") 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=, 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]] 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)") 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC 00:16:04.217 00:23:34 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in 00:16:04.217 00:23:34 sma.sma_crypto -- sma/common.sh@28 -- # echo 0 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"") 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key deadbeefcafebabefeedbeefbabecafe 00:16:04.217 00:23:34 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62 00:16:04.217 00:23:34 sma.sma_crypto -- sma/common.sh@35 -- # echo -n deadbeefcafebabefeedbeefbabecafe 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]] 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@64 -- # cat 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": { 00:16:04.217 "cipher": 0,"key": "ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU=" 00:16:04.217 }' 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config") 00:16:04.217 00:23:34 sma.sma_crypto -- sma/crypto.sh@69 -- # cat 00:16:04.475 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:04.475 I0000 00:00:1728426214.898773 2124465 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:04.475 I0000 00:00:1728426214.900320 2124465 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:04.476 I0000 00:00:1728426214.902165 2124649 subchannel.cc:806] subchannel 0x5583849a9220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5583848ba670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x558384948cc0, grpc.internal.client_channel_call_destination=0x7fac8697f390, grpc.internal.event_engine=0x558384858190, grpc.internal.security_connector=0x5583848626e0, grpc.internal.subchannel_pool=0x5583849d0cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55838479f5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:34.90114104+02:00"}), backing off for 999 ms 00:16:04.476 Traceback (most recent call last): 00:16:04.476 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in 00:16:04.476 main(sys.argv[1:]) 00:16:04.476 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:16:04.476 result = client.call(request['method'], request.get('params', {})) 00:16:04.476 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.476 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:16:04.476 response = func(request=json_format.ParseDict(params, input())) 00:16:04.476 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.476 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:16:04.476 return _end_unary_response_blocking(state, call, False, None) 00:16:04.476 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.476 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:04.476 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:04.476 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.476 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:04.476 status = StatusCode.INVALID_ARGUMENT 00:16:04.476 details = "Invalid volume crypto configuration: bad key" 00:16:04.476 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad key", grpc_status:3, created_time:"2024-10-09T00:23:34.917749033+02:00"}" 00:16:04.476 > 00:16:04.476 00:23:34 sma.sma_crypto -- common/autotest_common.sh@653 -- # es=1 00:16:04.476 00:23:34 sma.sma_crypto -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:04.476 00:23:34 sma.sma_crypto -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:04.476 00:23:34 sma.sma_crypto -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:04.476 00:23:34 sma.sma_crypto -- sma/crypto.sh@236 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe 00:16:04.476 00:23:34 sma.sma_crypto -- common/autotest_common.sh@650 -- # local es=0 00:16:04.476 00:23:34 sma.sma_crypto -- common/autotest_common.sh@652 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe 00:16:04.476 00:23:34 sma.sma_crypto -- common/autotest_common.sh@638 -- # local arg=attach_volume 00:16:04.476 00:23:34 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.476 00:23:34 sma.sma_crypto -- common/autotest_common.sh@642 -- # type -t attach_volume 00:16:04.476 00:23:34 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.476 00:23:34 sma.sma_crypto -- common/autotest_common.sh@653 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe 00:16:04.476 00:23:34 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:16:04.476 00:23:34 sma.sma_crypto -- sma/crypto.sh@106 -- # shift 00:16:04.476 00:23:34 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:04.476 00:23:34 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 1234567890abcdef1234567890abcdef deadbeefcafebabefeedbeefbabecafe 00:16:04.476 00:23:34 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=77453fe9-4663-4691-8a6f-a9e33a0a7519 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2=deadbeefcafebabefeedbeefbabecafe config 00:16:04.476 00:23:34 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto 00:16:04.476 00:23:34 sma.sma_crypto -- sma/crypto.sh@47 -- # cat 00:16:04.476 00:23:34 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:04.476 00:23:34 
sma.sma_crypto -- sma/common.sh@20 -- # python 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "d0U/6UZjRpGKb6njOgp1GQ==", 00:16:04.476 "nvmf": { 00:16:04.476 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:04.476 "discovery": { 00:16:04.476 "discovery_endpoints": [ 00:16:04.476 { 00:16:04.476 "trtype": "tcp", 00:16:04.476 "traddr": "127.0.0.1", 00:16:04.476 "trsvcid": "8009" 00:16:04.476 } 00:16:04.476 ] 00:16:04.476 } 00:16:04.476 }' 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config") 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=, 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]] 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)") 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC 00:16:04.476 00:23:35 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in 00:16:04.476 00:23:35 sma.sma_crypto -- sma/common.sh@28 -- # echo 0 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"") 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef 00:16:04.476 00:23:35 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62 00:16:04.476 00:23:35 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n deadbeefcafebabefeedbeefbabecafe ]] 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@55 -- # crypto+=("\"key2\": \"$(format_key $key2)\"") 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@55 -- # format_key deadbeefcafebabefeedbeefbabecafe 00:16:04.476 00:23:35 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62 00:16:04.476 00:23:35 sma.sma_crypto -- sma/common.sh@35 -- # echo -n deadbeefcafebabefeedbeefbabecafe 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@64 -- # cat 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": { 00:16:04.476 "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=","key2": "ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU=" 00:16:04.476 }' 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config") 00:16:04.476 00:23:35 sma.sma_crypto -- sma/crypto.sh@69 -- # cat 00:16:04.734 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:04.734 I0000 00:00:1728426215.208048 2124697 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:04.734 I0000 00:00:1728426215.209634 2124697 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:04.734 I0000 00:00:1728426215.211418 2124713 subchannel.cc:806] subchannel 0x563b88bee220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x563b88aff670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x563b88b8dcc0, grpc.internal.client_channel_call_destination=0x7ff8573fe390, grpc.internal.event_engine=0x563b88b79480, grpc.internal.security_connector=0x563b88a9c5b0, grpc.internal.subchannel_pool=0x563b88c29b60, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x563b88a5d9f0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to 
connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:35.210403328+02:00"}), backing off for 1000 ms 00:16:04.734 Traceback (most recent call last): 00:16:04.734 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module> 00:16:04.734 main(sys.argv[1:]) 00:16:04.734 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:16:04.734 result = client.call(request['method'], request.get('params', {})) 00:16:04.734 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.734 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:16:04.734 response = func(request=json_format.ParseDict(params, input())) 00:16:04.734 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.734 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:16:04.734 return _end_unary_response_blocking(state, call, False, None) 00:16:04.734 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.734 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:04.734 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:04.734 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.734 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:04.734 status = StatusCode.INVALID_ARGUMENT 00:16:04.734 details = "Invalid volume crypto configuration: bad key2" 00:16:04.734 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad key2", grpc_status:3, created_time:"2024-10-09T00:23:35.230412125+02:00"}" 00:16:04.734 > 00:16:04.734 00:23:35 sma.sma_crypto -- common/autotest_common.sh@653 -- # es=1 00:16:04.734 00:23:35 sma.sma_crypto -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:04.734 00:23:35 sma.sma_crypto -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:04.734 00:23:35 sma.sma_crypto -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@238 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 8 1234567890abcdef1234567890abcdef 00:16:04.734 00:23:35 sma.sma_crypto -- common/autotest_common.sh@650 -- # local es=0 00:16:04.734 00:23:35 sma.sma_crypto -- common/autotest_common.sh@652 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 8 1234567890abcdef1234567890abcdef 00:16:04.734 00:23:35 sma.sma_crypto -- common/autotest_common.sh@638 -- # local arg=attach_volume 00:16:04.734 00:23:35 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.734 00:23:35 sma.sma_crypto -- common/autotest_common.sh@642 -- # type -t attach_volume 00:16:04.734 00:23:35 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.734 00:23:35 sma.sma_crypto -- common/autotest_common.sh@653 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 8 1234567890abcdef1234567890abcdef 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@106 -- # shift 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:04.734 00:23:35
sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 77453fe9-4663-4691-8a6f-a9e33a0a7519 8 1234567890abcdef1234567890abcdef 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=77453fe9-4663-4691-8a6f-a9e33a0a7519 cipher=8 key=1234567890abcdef1234567890abcdef key2= config 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@47 -- # cat 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:04.734 00:23:35 sma.sma_crypto -- sma/common.sh@20 -- # python 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "d0U/6UZjRpGKb6njOgp1GQ==", 00:16:04.734 "nvmf": { 00:16:04.734 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:04.734 "discovery": { 00:16:04.734 "discovery_endpoints": [ 00:16:04.734 { 00:16:04.734 "trtype": "tcp", 00:16:04.734 "traddr": "127.0.0.1", 00:16:04.734 "trsvcid": "8009" 00:16:04.734 } 00:16:04.734 ] 00:16:04.734 } 00:16:04.734 }' 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config") 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=, 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]] 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)") 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8 00:16:04.734 00:23:35 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in 00:16:04.734 00:23:35 sma.sma_crypto -- sma/common.sh@30 -- # echo 8 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"") 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef 00:16:04.734 00:23:35 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62 00:16:04.734 00:23:35 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]] 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@64 -- # cat 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": { 00:16:04.734 "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=" 00:16:04.734 }' 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config") 00:16:04.734 00:23:35 sma.sma_crypto -- sma/crypto.sh@69 -- # cat 00:16:04.993 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:04.993 I0000 00:00:1728426215.516010 2124734 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:04.993 I0000 00:00:1728426215.517542 2124734 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:04.993 I0000 00:00:1728426215.519337 2124747 subchannel.cc:806] subchannel 0x55783eb73220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55783ea84670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55783eb12cc0, grpc.internal.client_channel_call_destination=0x7ff80e586390, grpc.internal.event_engine=0x55783ea22190, grpc.internal.security_connector=0x55783ea2c6e0, grpc.internal.subchannel_pool=0x55783eb9acc0, grpc.primary_user_agent=grpc-python/1.65.1, 
grpc.resource_quota=0x55783e9695c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:35.518327949+02:00"}), backing off for 1000 ms 00:16:04.993 Traceback (most recent call last): 00:16:04.993 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module> 00:16:04.993 main(sys.argv[1:]) 00:16:04.993 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:16:04.993 result = client.call(request['method'], request.get('params', {})) 00:16:04.993 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.993 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:16:04.993 response = func(request=json_format.ParseDict(params, input())) 00:16:04.993 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.993 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:16:04.993 return _end_unary_response_blocking(state, call, False, None) 00:16:04.993 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.993 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:04.993 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:04.993 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:04.993 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:04.993 status = StatusCode.INVALID_ARGUMENT 00:16:04.993 details = "Invalid volume crypto configuration: bad cipher" 00:16:04.993 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Invalid volume crypto configuration: bad cipher", grpc_status:3, created_time:"2024-10-09T00:23:35.535318134+02:00"}" 00:16:04.993 > 00:16:04.993 00:23:35 sma.sma_crypto -- common/autotest_common.sh@653 -- # es=1 00:16:04.993 00:23:35 sma.sma_crypto -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:04.993 00:23:35 sma.sma_crypto -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:04.993 00:23:35 sma.sma_crypto -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:04.993 00:23:35 sma.sma_crypto -- sma/crypto.sh@241 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:04.993 00:23:35 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=77453fe9-4663-4691-8a6f-a9e33a0a7519 ns ns_bdev 00:16:04.993 00:23:35 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:16:04.993 00:23:35 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.993 00:23:35 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:04.993 00:23:35 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]' 00:16:04.993 00:23:35 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.993 00:23:35 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{ 00:16:04.993 "nsid": 1, 00:16:04.993 "bdev_name": "c3aadc06-d161-41ac-b41d-797a771237ad", 00:16:04.993 "name": "c3aadc06-d161-41ac-b41d-797a771237ad", 00:16:04.993 "nguid": "77453FE9466346918A6FA9E33A0A7519", 00:16:04.993 "uuid": "77453fe9-4663-4691-8a6f-a9e33a0a7519" 00:16:04.993 }' 00:16:04.993 00:23:35 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=c3aadc06-d161-41ac-b41d-797a771237ad
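An aside on the three encodings this trace keeps round-tripping: the SMA request carries the volume as base64 of the raw UUID bytes (uuid2base64), the crypto keys as base64 of the literal key string (format_key), and nvmf_get_subsystems reports the NGUID as the dash-less, upper-cased form of the same UUID (uuid2nguid). A minimal Python sketch of the same conversions, checked against values taken from the log itself; the function names merely mirror the bash helpers in sma/common.sh:

```python
import base64
import uuid

def uuid2base64(u: str) -> str:
    # "volume_id" in the request JSON is the 16 raw UUID bytes, base64-encoded
    return base64.b64encode(uuid.UUID(u).bytes).decode()

def uuid2nguid(u: str) -> str:
    # the NGUID is the same UUID with dashes stripped and hex upper-cased
    return u.replace('-', '').upper()

def format_key(key: str) -> str:
    # keys are sent as base64 of the key string (echo -n ... | base64 -w 0)
    return base64.b64encode(key.encode()).decode()

# expected values copied from the trace above
assert uuid2base64('77453fe9-4663-4691-8a6f-a9e33a0a7519') == 'd0U/6UZjRpGKb6njOgp1GQ=='
assert uuid2nguid('77453fe9-4663-4691-8a6f-a9e33a0a7519') == '77453FE9466346918A6FA9E33A0A7519'
assert format_key('deadbeefcafebabefeedbeefbabecafe') == 'ZGVhZGJlZWZjYWZlYmFiZWZlZWRiZWVmYmFiZWNhZmU='
```

This is also why the nguid/uuid string comparisons in verify_crypto_volume can be plain `[[ ... == ... ]]` tests: both sides are deterministic transformations of the same UUID.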
00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b c3aadc06-d161-41ac-b41d-797a771237ad 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name' 00:16:05.251 00:23:35 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.251 00:23:35 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:05.251 00:23:35 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]] 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length' 00:16:05.251 00:23:35 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.251 00:23:35 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:05.251 00:23:35 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]] 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 77453fe9-4663-4691-8a6f-a9e33a0a7519 == \7\7\4\5\3\f\e\9\-\4\6\6\3\-\4\6\9\1\-\8\a\6\f\-\a\9\e\3\3\a\0\a\7\5\1\9 ]] 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:05.251 00:23:35 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=77453FE9-4663-4691-8A6F-A9E33A0A7519 00:16:05.251 00:23:35 sma.sma_crypto -- sma/common.sh@41 -- # echo 77453FE9466346918A6FA9E33A0A7519 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 77453FE9466346918A6FA9E33A0A7519 == \7\7\4\5\3\F\E\9\4\6\6\3\4\6\9\1\8\A\6\F\A\9\E\3\3\A\0\A\7\5\1\9 ]] 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@243 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:05.251 00:23:35 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:05.251 00:23:35 sma.sma_crypto -- sma/common.sh@20 -- # python 00:16:05.508 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:05.508 I0000 00:00:1728426216.030248 2124784 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:05.508 I0000 00:00:1728426216.031574 2124784 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:05.508 I0000 00:00:1728426216.033281 2124829 subchannel.cc:806] subchannel 0x55948ef90220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55948eea1670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55948ef2fcc0, grpc.internal.client_channel_call_destination=0x7faa2aa55390, grpc.internal.event_engine=0x55948eebd360, grpc.internal.security_connector=0x55948ee496e0, grpc.internal.subchannel_pool=0x55948efb7cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55948ed865c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to 
remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:36.032270684+02:00"}), backing off for 1000 ms 00:16:05.508 {} 00:16:05.508 00:23:36 sma.sma_crypto -- sma/crypto.sh@247 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 8 1234567890abcdef1234567890abcdef 00:16:05.508 00:23:36 sma.sma_crypto -- common/autotest_common.sh@650 -- # local es=0 00:16:05.508 00:23:36 sma.sma_crypto -- common/autotest_common.sh@652 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 8 1234567890abcdef1234567890abcdef 00:16:05.508 00:23:36 sma.sma_crypto -- common/autotest_common.sh@638 -- # local arg=attach_volume 00:16:05.508 00:23:36 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.508 00:23:36 sma.sma_crypto -- common/autotest_common.sh@642 -- # type -t attach_volume 00:16:05.508 00:23:36 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:05.508 00:23:36 sma.sma_crypto -- common/autotest_common.sh@653 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 8 1234567890abcdef1234567890abcdef 00:16:05.508 00:23:36 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:16:05.508 00:23:36 sma.sma_crypto -- sma/crypto.sh@106 -- # shift 00:16:05.508 00:23:36 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:05.508 00:23:36 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 77453fe9-4663-4691-8a6f-a9e33a0a7519 8 1234567890abcdef1234567890abcdef 00:16:05.508 00:23:36 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=77453fe9-4663-4691-8a6f-a9e33a0a7519 cipher=8 key=1234567890abcdef1234567890abcdef key2= config 00:16:05.508 00:23:36 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@47 -- # cat 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:05.766 00:23:36 sma.sma_crypto -- sma/common.sh@20 -- # python 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "d0U/6UZjRpGKb6njOgp1GQ==", 00:16:05.766 "nvmf": { 00:16:05.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:05.766 "discovery": { 00:16:05.766 "discovery_endpoints": [ 00:16:05.766 { 00:16:05.766 "trtype": "tcp", 00:16:05.766 "traddr": "127.0.0.1", 00:16:05.766 "trsvcid": "8009" 00:16:05.766 } 00:16:05.766 ] 00:16:05.766 } 00:16:05.766 }' 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config") 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=, 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]] 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)") 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8 00:16:05.766 00:23:36 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in 00:16:05.766 00:23:36 sma.sma_crypto -- sma/common.sh@30 -- # echo 8 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"") 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef 00:16:05.766 00:23:36 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62 00:16:05.766 00:23:36 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 
1234567890abcdef1234567890abcdef 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]] 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@64 -- # cat 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": { 00:16:05.766 "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=" 00:16:05.766 }' 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config") 00:16:05.766 00:23:36 sma.sma_crypto -- sma/crypto.sh@69 -- # cat 00:16:05.766 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:05.766 I0000 00:00:1728426216.396803 2124919 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:05.766 I0000 00:00:1728426216.398511 2124919 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:05.766 I0000 00:00:1728426216.400306 2125038 subchannel.cc:806] subchannel 0x55fc79628220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55fc79539670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55fc795c7cc0, grpc.internal.client_channel_call_destination=0x7fcfdf239390, grpc.internal.event_engine=0x55fc794d7190, grpc.internal.security_connector=0x55fc794e16e0, grpc.internal.subchannel_pool=0x55fc7964fcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55fc7941e5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:36.399310931+02:00"}), backing off for 1000 ms 00:16:07.138 Traceback (most recent call last): 00:16:07.138 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module> 00:16:07.138 main(sys.argv[1:]) 00:16:07.138 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:16:07.138 result = client.call(request['method'], request.get('params', {})) 00:16:07.138 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:07.138 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:16:07.138 response = func(request=json_format.ParseDict(params, input())) 00:16:07.138 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:07.138 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:16:07.138 return _end_unary_response_blocking(state, call, False, None) 00:16:07.138 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:07.138 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:07.138 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:07.138 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:07.138 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:07.139 status = StatusCode.INVALID_ARGUMENT 00:16:07.139 details = "Invalid volume crypto configuration: bad cipher" 00:16:07.139 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-10-09T00:23:37.516918848+02:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}" 00:16:07.139 > 00:16:07.139 00:23:37 sma.sma_crypto -- common/autotest_common.sh@653 -- # es=1 00:16:07.139 00:23:37
sma.sma_crypto -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:07.139 00:23:37 sma.sma_crypto -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:07.139 00:23:37 sma.sma_crypto -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:07.139 00:23:37 sma.sma_crypto -- sma/crypto.sh@248 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:16:07.139 00:23:37 sma.sma_crypto -- sma/crypto.sh@248 -- # jq -r '.[0].namespaces | length' 00:16:07.139 00:23:37 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.139 00:23:37 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:07.139 00:23:37 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.139 00:23:37 sma.sma_crypto -- sma/crypto.sh@248 -- # [[ 0 -eq 0 ]] 00:16:07.139 00:23:37 sma.sma_crypto -- sma/crypto.sh@249 -- # rpc_cmd bdev_nvme_get_discovery_info 00:16:07.139 00:23:37 sma.sma_crypto -- sma/crypto.sh@249 -- # jq -r '. | length' 00:16:07.139 00:23:37 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.139 00:23:37 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:07.139 00:23:37 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.139 00:23:37 sma.sma_crypto -- sma/crypto.sh@249 -- # [[ 0 -eq 0 ]] 00:16:07.139 00:23:37 sma.sma_crypto -- sma/crypto.sh@250 -- # rpc_cmd bdev_get_bdevs 00:16:07.139 00:23:37 sma.sma_crypto -- sma/crypto.sh@250 -- # jq -r length 00:16:07.139 00:23:37 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.139 00:23:37 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:07.139 00:23:37 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.139 00:23:37 sma.sma_crypto -- sma/crypto.sh@250 -- # [[ 0 -eq 0 ]] 00:16:07.139 00:23:37 sma.sma_crypto -- sma/crypto.sh@252 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:16:07.139 00:23:37 sma.sma_crypto -- sma/crypto.sh@94 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:07.397 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:07.397 I0000 00:00:1728426217.861619 2125284 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:07.397 I0000 00:00:1728426217.863051 2125284 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:07.397 I0000 00:00:1728426217.864728 2125293 subchannel.cc:806] subchannel 0x55c622c15220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55c622b26670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55c622bb4cc0, grpc.internal.client_channel_call_destination=0x7feac4835390, grpc.internal.event_engine=0x55c622ac4190, grpc.internal.security_connector=0x55c622c2aeb0, grpc.internal.subchannel_pool=0x55c622c3ccc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55c622a0b5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:37.863716017+02:00"}), backing off for 1000 ms 00:16:07.397 {} 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@255 -- # create_device 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 1234567890abcdef1234567890abcdef 00:16:07.397 00:23:37 sma.sma_crypto -- 
sma/crypto.sh@255 -- # jq -r .handle 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@77 -- # gen_volume_params 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 1234567890abcdef1234567890abcdef 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=77453fe9-4663-4691-8a6f-a9e33a0a7519 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@47 -- # cat 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:07.397 00:23:37 sma.sma_crypto -- sma/common.sh@20 -- # python 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "d0U/6UZjRpGKb6njOgp1GQ==", 00:16:07.397 "nvmf": { 00:16:07.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:07.397 "discovery": { 00:16:07.397 "discovery_endpoints": [ 00:16:07.397 { 00:16:07.397 "trtype": "tcp", 00:16:07.397 "traddr": "127.0.0.1", 00:16:07.397 "trsvcid": "8009" 00:16:07.397 } 00:16:07.397 ] 00:16:07.397 } 00:16:07.397 }' 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config") 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=, 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]] 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)") 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC 00:16:07.397 00:23:37 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in 00:16:07.397 00:23:37 sma.sma_crypto -- sma/common.sh@28 -- # echo 0 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"") 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef 00:16:07.397 00:23:37 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/63 00:16:07.397 00:23:37 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]] 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@64 -- # cat 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": { 00:16:07.397 "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=" 00:16:07.397 }' 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config") 00:16:07.397 00:23:37 sma.sma_crypto -- sma/crypto.sh@69 -- # cat 00:16:07.667 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:07.667 I0000 00:00:1728426218.157875 2125316 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:07.667 I0000 00:00:1728426218.159385 2125316 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:07.667 I0000 00:00:1728426218.161203 2125338 subchannel.cc:806] subchannel 0x55f17bfa7220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55f17beb8670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55f17bf46cc0, 
grpc.internal.client_channel_call_destination=0x7f63c749f390, grpc.internal.event_engine=0x55f17be2dcc0, grpc.internal.security_connector=0x55f17c02cfc0, grpc.internal.subchannel_pool=0x55f17c02cde0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55f17bda5bf0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:38.160182968+02:00"}), backing off for 1000 ms 00:16:08.703 [2024-10-09 00:23:39.289666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@255 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@256 -- # verify_crypto_volume nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@132 -- # local nqn=nqn.2016-06.io.spdk:cnode0 uuid=77453fe9-4663-4691-8a6f-a9e33a0a7519 ns ns_bdev 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@134 -- # rpc_cmd nvmf_get_subsystems nqn.2016-06.io.spdk:cnode0 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@134 -- # jq -r '.[0].namespaces[0]' 00:16:08.971 00:23:39 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.971 00:23:39 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:08.971 00:23:39 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@134 -- # ns='{ 00:16:08.971 "nsid": 1, 00:16:08.971 "bdev_name": "e47a193f-0c85-448e-9f8f-dab4d6dc10d5", 00:16:08.971 "name": "e47a193f-0c85-448e-9f8f-dab4d6dc10d5", 00:16:08.971 "nguid": "77453FE9466346918A6FA9E33A0A7519", 00:16:08.971 "uuid": "77453fe9-4663-4691-8a6f-a9e33a0a7519" 00:16:08.971 }' 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@135 -- # jq -r .name 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@135 -- # ns_bdev=e47a193f-0c85-448e-9f8f-dab4d6dc10d5 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@138 -- # rpc_cmd bdev_get_bdevs -b e47a193f-0c85-448e-9f8f-dab4d6dc10d5 00:16:08.971 00:23:39 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.971 00:23:39 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@138 -- # jq -r '.[0].product_name' 00:16:08.971 00:23:39 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@138 -- # [[ crypto == crypto ]] 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@139 -- # rpc_cmd bdev_get_bdevs 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@139 -- # jq -r '[.[] | select(.product_name == "crypto")] | length' 00:16:08.971 00:23:39 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.971 00:23:39 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:08.971 00:23:39 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@139 -- # [[ 1 -eq 1 ]] 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@141 -- # jq -r .uuid 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@141 -- # [[ 77453fe9-4663-4691-8a6f-a9e33a0a7519 == \7\7\4\5\3\f\e\9\-\4\6\6\3\-\4\6\9\1\-\8\a\6\f\-\a\9\e\3\3\a\0\a\7\5\1\9 ]] 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@142 -- # jq -r .nguid 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@142 -- # uuid2nguid 
77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:08.971 00:23:39 sma.sma_crypto -- sma/common.sh@40 -- # local uuid=77453FE9-4663-4691-8A6F-A9E33A0A7519 00:16:08.971 00:23:39 sma.sma_crypto -- sma/common.sh@41 -- # echo 77453FE9466346918A6FA9E33A0A7519 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@142 -- # [[ 77453FE9466346918A6FA9E33A0A7519 == \7\7\4\5\3\F\E\9\4\6\6\3\4\6\9\1\8\A\6\F\A\9\E\3\3\A\0\A\7\5\1\9 ]] 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@258 -- # detach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:08.971 00:23:39 sma.sma_crypto -- sma/crypto.sh@120 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:08.972 00:23:39 sma.sma_crypto -- sma/crypto.sh@120 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:08.972 00:23:39 sma.sma_crypto -- sma/common.sh@20 -- # python 00:16:09.229 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:09.229 I0000 00:00:1728426219.812272 2125602 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:09.229 I0000 00:00:1728426219.813736 2125602 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:09.229 I0000 00:00:1728426219.815426 2125606 subchannel.cc:806] subchannel 0x55ee28ead220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ee28dbe670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ee28e4ccc0, grpc.internal.client_channel_call_destination=0x7fbd8325a390, grpc.internal.event_engine=0x55ee28dda360, grpc.internal.security_connector=0x55ee28d666e0, grpc.internal.subchannel_pool=0x55ee28ed4cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ee28ca35c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:39.814414607+02:00"}), backing off for 1000 ms 00:16:09.487 {} 00:16:09.487 00:23:39 sma.sma_crypto -- sma/crypto.sh@259 -- # delete_device nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:16:09.487 00:23:39 sma.sma_crypto -- sma/crypto.sh@94 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:09.487 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:09.487 I0000 00:00:1728426220.113503 2125628 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:09.487 I0000 00:00:1728426220.115007 2125628 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:09.487 I0000 00:00:1728426220.116714 2125643 subchannel.cc:806] subchannel 0x55c0a47ef220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55c0a4700670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55c0a478ecc0, grpc.internal.client_channel_call_destination=0x7f00a70cc390, grpc.internal.event_engine=0x55c0a469e190, grpc.internal.security_connector=0x55c0a4804eb0, grpc.internal.subchannel_pool=0x55c0a4816cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55c0a45e55c0, grpc.server_uri=dns:///localhost:8080}}: connect 
failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:40.115694512+02:00"}), backing off for 1000 ms 00:16:09.745 {} 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@263 -- # NOT create_device 77453fe9-4663-4691-8a6f-a9e33a0a7519 8 1234567890abcdef1234567890abcdef 00:16:09.745 00:23:40 sma.sma_crypto -- common/autotest_common.sh@650 -- # local es=0 00:16:09.745 00:23:40 sma.sma_crypto -- common/autotest_common.sh@652 -- # valid_exec_arg create_device 77453fe9-4663-4691-8a6f-a9e33a0a7519 8 1234567890abcdef1234567890abcdef 00:16:09.745 00:23:40 sma.sma_crypto -- common/autotest_common.sh@638 -- # local arg=create_device 00:16:09.745 00:23:40 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.745 00:23:40 sma.sma_crypto -- common/autotest_common.sh@642 -- # type -t create_device 00:16:09.745 00:23:40 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.745 00:23:40 sma.sma_crypto -- common/autotest_common.sh@653 -- # create_device 77453fe9-4663-4691-8a6f-a9e33a0a7519 8 1234567890abcdef1234567890abcdef 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@77 -- # gen_volume_params 77453fe9-4663-4691-8a6f-a9e33a0a7519 8 1234567890abcdef1234567890abcdef 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=77453fe9-4663-4691-8a6f-a9e33a0a7519 cipher=8 key=1234567890abcdef1234567890abcdef key2= config 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@47 -- # cat 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:09.745 00:23:40 sma.sma_crypto -- sma/common.sh@20 -- # python 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "d0U/6UZjRpGKb6njOgp1GQ==", 00:16:09.745 "nvmf": { 00:16:09.745 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:09.745 "discovery": { 00:16:09.745 "discovery_endpoints": [ 00:16:09.745 { 00:16:09.745 "trtype": "tcp", 00:16:09.745 "traddr": "127.0.0.1", 00:16:09.745 "trsvcid": "8009" 00:16:09.745 } 00:16:09.745 ] 00:16:09.745 } 00:16:09.745 }' 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config") 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=, 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n 8 ]] 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)") 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher 8 00:16:09.745 00:23:40 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in 00:16:09.745 00:23:40 sma.sma_crypto -- sma/common.sh@30 -- # echo 8 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"") 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef 00:16:09.745 00:23:40 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62 00:16:09.745 00:23:40 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]] 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@64 -- # cat 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@64 -- # 
crypto_config='"crypto": { 00:16:09.745 "cipher": 8,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=" 00:16:09.745 }' 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config") 00:16:09.745 00:23:40 sma.sma_crypto -- sma/crypto.sh@69 -- # cat 00:16:10.003 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:10.003 I0000 00:00:1728426220.430484 2125684 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:10.003 I0000 00:00:1728426220.431963 2125684 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:10.003 I0000 00:00:1728426220.433807 2125859 subchannel.cc:806] subchannel 0x55e6f5bbe220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55e6f5acf670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55e6f5b5dcc0, grpc.internal.client_channel_call_destination=0x7f115b517390, grpc.internal.event_engine=0x55e6f5a44cc0, grpc.internal.security_connector=0x55e6f5c43fc0, grpc.internal.subchannel_pool=0x55e6f5c43de0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55e6f59bcbf0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:40.432795858+02:00"}), backing off for 1000 ms 00:16:10.934 Traceback (most recent call last): 00:16:10.934 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in 00:16:10.934 main(sys.argv[1:]) 00:16:10.934 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:16:10.934 result = client.call(request['method'], request.get('params', {})) 00:16:10.934 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:10.934 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:16:10.934 response = func(request=json_format.ParseDict(params, input())) 00:16:10.934 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:10.934 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:16:10.934 return _end_unary_response_blocking(state, call, False, None) 00:16:10.934 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:10.934 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:10.934 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:10.934 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:10.934 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:10.934 status = StatusCode.INVALID_ARGUMENT 00:16:10.934 details = "Invalid volume crypto configuration: bad cipher" 00:16:10.934 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-10-09T00:23:41.547699551+02:00", grpc_status:3, grpc_message:"Invalid volume crypto configuration: bad cipher"}" 00:16:10.934 > 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@653 -- # es=1 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@264 -- # rpc_cmd bdev_nvme_get_discovery_info 00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@264 -- # jq -r '. | length' 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@264 -- # [[ 0 -eq 0 ]] 00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@265 -- # rpc_cmd bdev_get_bdevs 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@265 -- # jq -r length 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@265 -- # [[ 0 -eq 0 ]] 00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@266 -- # rpc_cmd nvmf_get_subsystems 00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@266 -- # jq -r '[.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0")] | length' 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@266 -- # [[ 0 -eq 0 ]] 00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@269 -- # killprocess 2123107 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@950 -- # '[' -z 2123107 ']' 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@954 -- # kill -0 2123107 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@955 -- # uname 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2123107 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@956 -- # process_name=python3 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2123107' 00:16:11.191 killing process with pid 2123107 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@969 -- # kill 2123107 00:16:11.191 00:23:41 sma.sma_crypto -- common/autotest_common.sh@974 -- # wait 2123107 00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@278 -- # smapid=2126116 00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@280 -- # sma_waitforlisten 00:16:11.191 00:23:41 sma.sma_crypto -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:16:11.191 00:23:41 sma.sma_crypto -- sma/common.sh@8 -- # local sma_port=8080 00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@270 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:16:11.191 00:23:41 sma.sma_crypto -- sma/common.sh@10 -- # (( i = 0 )) 00:16:11.191 00:23:41 sma.sma_crypto -- sma/crypto.sh@270 -- # cat 00:16:11.191 00:23:41 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 )) 00:16:11.191 00:23:41 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:16:11.448 00:23:41 sma.sma_crypto -- sma/common.sh@14 -- # sleep 1s 00:16:11.448 WARNING: All log messages before 
absl::InitializeLog() is called are written to STDERR 00:16:11.448 I0000 00:00:1728426222.008602 2126116 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:12.381 00:23:42 sma.sma_crypto -- sma/common.sh@10 -- # (( i++ )) 00:16:12.381 00:23:42 sma.sma_crypto -- sma/common.sh@10 -- # (( i < 5 )) 00:16:12.381 00:23:42 sma.sma_crypto -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:16:12.381 00:23:42 sma.sma_crypto -- sma/common.sh@12 -- # return 0 00:16:12.381 00:23:42 sma.sma_crypto -- sma/crypto.sh@281 -- # create_device 00:16:12.381 00:23:42 sma.sma_crypto -- sma/crypto.sh@281 -- # jq -r .handle 00:16:12.381 00:23:42 sma.sma_crypto -- sma/crypto.sh@77 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:12.638 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:12.638 I0000 00:00:1728426223.052016 2126197 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:12.638 I0000 00:00:1728426223.053522 2126197 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:12.638 I0000 00:00:1728426223.055306 2126294 subchannel.cc:806] subchannel 0x55ed815b4220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ed814c5670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ed81553cc0, grpc.internal.client_channel_call_destination=0x7fe7bf368390, grpc.internal.event_engine=0x55ed814e1360, grpc.internal.security_connector=0x55ed8146d6e0, grpc.internal.subchannel_pool=0x55ed815dbcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ed813aa5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:43.054279707+02:00"}), backing off for 1000 ms 00:16:12.638 [2024-10-09 00:23:43.075879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@281 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@283 -- # NOT attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 1234567890abcdef1234567890abcdef 00:16:12.638 00:23:43 sma.sma_crypto -- common/autotest_common.sh@650 -- # local es=0 00:16:12.638 00:23:43 sma.sma_crypto -- common/autotest_common.sh@652 -- # valid_exec_arg attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 1234567890abcdef1234567890abcdef 00:16:12.638 00:23:43 sma.sma_crypto -- common/autotest_common.sh@638 -- # local arg=attach_volume 00:16:12.638 00:23:43 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.638 00:23:43 sma.sma_crypto -- common/autotest_common.sh@642 -- # type -t attach_volume 00:16:12.638 00:23:43 sma.sma_crypto -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:12.638 00:23:43 sma.sma_crypto -- common/autotest_common.sh@653 -- # attach_volume nvmf-tcp:nqn.2016-06.io.spdk:cnode0 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 
1234567890abcdef1234567890abcdef 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@105 -- # local device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@106 -- # shift 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@108 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@108 -- # gen_volume_params 77453fe9-4663-4691-8a6f-a9e33a0a7519 AES_CBC 1234567890abcdef1234567890abcdef 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@28 -- # local volume_id=77453fe9-4663-4691-8a6f-a9e33a0a7519 cipher=AES_CBC key=1234567890abcdef1234567890abcdef key2= config 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@29 -- # local -a params crypto 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@47 -- # cat 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@47 -- # uuid2base64 77453fe9-4663-4691-8a6f-a9e33a0a7519 00:16:12.638 00:23:43 sma.sma_crypto -- sma/common.sh@20 -- # python 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@47 -- # config='"volume_id": "d0U/6UZjRpGKb6njOgp1GQ==", 00:16:12.638 "nvmf": { 00:16:12.638 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:12.638 "discovery": { 00:16:12.638 "discovery_endpoints": [ 00:16:12.638 { 00:16:12.638 "trtype": "tcp", 00:16:12.638 "traddr": "127.0.0.1", 00:16:12.638 "trsvcid": "8009" 00:16:12.638 } 00:16:12.638 ] 00:16:12.638 } 00:16:12.638 }' 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@48 -- # params+=("$config") 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@50 -- # local IFS=, 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@51 -- # [[ -n AES_CBC ]] 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@52 -- # crypto+=("\"cipher\": $(get_cipher $cipher)") 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@52 -- # get_cipher AES_CBC 00:16:12.638 00:23:43 sma.sma_crypto -- sma/common.sh@27 -- # case "$1" in 00:16:12.638 00:23:43 sma.sma_crypto -- sma/common.sh@28 -- # echo 0 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@53 -- # crypto+=("\"key\": \"$(format_key $key)\"") 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@53 -- # format_key 1234567890abcdef1234567890abcdef 00:16:12.638 00:23:43 sma.sma_crypto -- sma/common.sh@35 -- # base64 -w 0 /dev/fd/62 00:16:12.638 00:23:43 sma.sma_crypto -- sma/common.sh@35 -- # echo -n 1234567890abcdef1234567890abcdef 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@54 -- # [[ -n '' ]] 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@64 -- # cat 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@64 -- # crypto_config='"crypto": { 00:16:12.638 "cipher": 0,"key": "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY=" 00:16:12.638 }' 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@66 -- # params+=("$crypto_config") 00:16:12.638 00:23:43 sma.sma_crypto -- sma/crypto.sh@69 -- # cat 00:16:12.896 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:12.896 I0000 00:00:1728426223.371359 2126357 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:12.896 I0000 00:00:1728426223.373002 2126357 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:12.896 I0000 00:00:1728426223.374803 2126449 subchannel.cc:806] subchannel 0x55b05fc24220 {address=ipv6:%5B::1%5D:8080, 
args={grpc.client_channel_factory=0x55b05fb35670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55b05fbc3cc0, grpc.internal.client_channel_call_destination=0x7f5213045390, grpc.internal.event_engine=0x55b05fad3190, grpc.internal.security_connector=0x55b05fadd6e0, grpc.internal.subchannel_pool=0x55b05fc4bcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55b05fa1a5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:43.373792868+02:00"}), backing off for 1000 ms 00:16:14.268 Traceback (most recent call last): 00:16:14.268 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in 00:16:14.268 main(sys.argv[1:]) 00:16:14.268 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:16:14.268 result = client.call(request['method'], request.get('params', {})) 00:16:14.268 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:14.268 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:16:14.268 response = func(request=json_format.ParseDict(params, input())) 00:16:14.268 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:14.268 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:16:14.268 return _end_unary_response_blocking(state, call, False, None) 00:16:14.268 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:14.268 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:14.268 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:14.268 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:14.268 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:14.268 status = StatusCode.INVALID_ARGUMENT 00:16:14.268 details = "Crypto is disabled" 00:16:14.268 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-10-09T00:23:44.491526082+02:00", grpc_status:3, grpc_message:"Crypto is disabled"}" 00:16:14.268 > 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@653 -- # es=1 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:14.268 00:23:44 sma.sma_crypto -- sma/crypto.sh@284 -- # rpc_cmd bdev_nvme_get_discovery_info 00:16:14.268 00:23:44 sma.sma_crypto -- sma/crypto.sh@284 -- # jq -r '. 
| length' 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.268 00:23:44 sma.sma_crypto -- sma/crypto.sh@284 -- # [[ 0 -eq 0 ]] 00:16:14.268 00:23:44 sma.sma_crypto -- sma/crypto.sh@285 -- # rpc_cmd bdev_get_bdevs 00:16:14.268 00:23:44 sma.sma_crypto -- sma/crypto.sh@285 -- # jq -r length 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.268 00:23:44 sma.sma_crypto -- sma/crypto.sh@285 -- # [[ 0 -eq 0 ]] 00:16:14.268 00:23:44 sma.sma_crypto -- sma/crypto.sh@287 -- # cleanup 00:16:14.268 00:23:44 sma.sma_crypto -- sma/crypto.sh@22 -- # killprocess 2126116 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@950 -- # '[' -z 2126116 ']' 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@954 -- # kill -0 2126116 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@955 -- # uname 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2126116 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@956 -- # process_name=python3 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2126116' 00:16:14.268 killing process with pid 2126116 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@969 -- # kill 2126116 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@974 -- # wait 2126116 00:16:14.268 00:23:44 sma.sma_crypto -- sma/crypto.sh@23 -- # killprocess 2122779 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@950 -- # '[' -z 2122779 ']' 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@954 -- # kill -0 2122779 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@955 -- # uname 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2122779 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2122779' 00:16:14.268 killing process with pid 2122779 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@969 -- # kill 2122779 00:16:14.268 00:23:44 sma.sma_crypto -- common/autotest_common.sh@974 -- # wait 2122779 00:16:16.797 00:23:46 sma.sma_crypto -- sma/crypto.sh@24 -- # killprocess 2123105 00:16:16.797 00:23:46 sma.sma_crypto -- common/autotest_common.sh@950 -- # '[' -z 2123105 ']' 00:16:16.797 00:23:46 sma.sma_crypto -- common/autotest_common.sh@954 -- # kill -0 2123105 00:16:16.797 00:23:46 sma.sma_crypto -- common/autotest_common.sh@955 -- # uname 00:16:16.797 00:23:46 sma.sma_crypto -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:16.797 00:23:46 
sma.sma_crypto -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2123105 00:16:16.797 00:23:46 sma.sma_crypto -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:16.797 00:23:46 sma.sma_crypto -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:16.797 00:23:46 sma.sma_crypto -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2123105' 00:16:16.797 killing process with pid 2123105 00:16:16.797 00:23:46 sma.sma_crypto -- common/autotest_common.sh@969 -- # kill 2123105 00:16:16.797 00:23:46 sma.sma_crypto -- common/autotest_common.sh@974 -- # wait 2123105 00:16:19.336 00:23:49 sma.sma_crypto -- sma/crypto.sh@288 -- # trap - SIGINT SIGTERM EXIT 00:16:19.336 00:16:19.336 real 0m24.285s 00:16:19.336 user 0m49.993s 00:16:19.336 sys 0m2.937s 00:16:19.336 00:23:49 sma.sma_crypto -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.336 00:23:49 sma.sma_crypto -- common/autotest_common.sh@10 -- # set +x 00:16:19.336 ************************************ 00:16:19.336 END TEST sma_crypto 00:16:19.336 ************************************ 00:16:19.336 00:23:49 sma -- sma/sma.sh@17 -- # run_test sma_qos /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/qos.sh 00:16:19.336 00:23:49 sma -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:19.336 00:23:49 sma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.336 00:23:49 sma -- common/autotest_common.sh@10 -- # set +x 00:16:19.336 ************************************ 00:16:19.336 START TEST sma_qos 00:16:19.336 ************************************ 00:16:19.336 00:23:49 sma.sma_qos -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/qos.sh 00:16:19.336 * Looking for test storage... 00:16:19.336 * Found test storage at /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma 00:16:19.336 00:23:49 sma.sma_qos -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:19.336 00:23:49 sma.sma_qos -- common/autotest_common.sh@1681 -- # lcov --version 00:16:19.336 00:23:49 sma.sma_qos -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:19.336 00:23:49 sma.sma_qos -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@336 -- # read -ra ver1 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@337 -- # IFS=.-: 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@337 -- # read -ra ver2 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@338 -- # local 'op=<' 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@340 -- # ver1_l=2 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@341 -- # ver2_l=1 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@344 -- # case "$op" in 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@345 -- # : 1 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@365 -- # decimal 1 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@353 -- # local d=1 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@355 -- # echo 1 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@366 -- # decimal 2 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@353 -- # local d=2 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@355 -- # echo 2 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.336 00:23:49 sma.sma_qos -- scripts/common.sh@368 -- # return 0 00:16:19.336 00:23:49 sma.sma_qos -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.337 00:23:49 sma.sma_qos -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:19.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.337 --rc genhtml_branch_coverage=1 00:16:19.337 --rc genhtml_function_coverage=1 00:16:19.337 --rc genhtml_legend=1 00:16:19.337 --rc geninfo_all_blocks=1 00:16:19.337 --rc geninfo_unexecuted_blocks=1 00:16:19.337 00:16:19.337 ' 00:16:19.337 00:23:49 sma.sma_qos -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:19.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.337 --rc genhtml_branch_coverage=1 00:16:19.337 --rc genhtml_function_coverage=1 00:16:19.337 --rc genhtml_legend=1 00:16:19.337 --rc geninfo_all_blocks=1 00:16:19.337 --rc geninfo_unexecuted_blocks=1 00:16:19.337 00:16:19.337 ' 00:16:19.337 00:23:49 sma.sma_qos -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:19.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.337 --rc genhtml_branch_coverage=1 00:16:19.337 --rc genhtml_function_coverage=1 00:16:19.337 --rc genhtml_legend=1 00:16:19.337 --rc geninfo_all_blocks=1 00:16:19.337 --rc geninfo_unexecuted_blocks=1 00:16:19.337 00:16:19.337 ' 00:16:19.337 00:23:49 sma.sma_qos -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:19.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.337 --rc genhtml_branch_coverage=1 00:16:19.337 --rc genhtml_function_coverage=1 00:16:19.337 --rc genhtml_legend=1 00:16:19.337 --rc geninfo_all_blocks=1 00:16:19.337 --rc geninfo_unexecuted_blocks=1 00:16:19.337 00:16:19.337 ' 00:16:19.337 00:23:49 sma.sma_qos -- sma/qos.sh@11 -- # source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh 00:16:19.337 00:23:49 sma.sma_qos -- sma/qos.sh@13 -- # smac=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:19.337 00:23:49 sma.sma_qos -- sma/qos.sh@15 -- # device_nvmf_tcp=3 00:16:19.337 00:23:49 sma.sma_qos -- sma/qos.sh@16 -- # printf %u -1 00:16:19.337 00:23:49 sma.sma_qos -- sma/qos.sh@16 -- # limit_reserved=18446744073709551615 00:16:19.337 00:23:49 sma.sma_qos -- sma/qos.sh@42 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:19.337 00:23:49 sma.sma_qos -- sma/qos.sh@45 -- # tgtpid=2127451 00:16:19.337 00:23:49 sma.sma_qos -- sma/qos.sh@55 -- # smapid=2127452 00:16:19.337 00:23:49 sma.sma_qos -- sma/qos.sh@57 -- # 
sma_waitforlisten 00:16:19.337 00:23:49 sma.sma_qos -- sma/qos.sh@44 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/build/bin/spdk_tgt 00:16:19.337 00:23:49 sma.sma_qos -- sma/common.sh@7 -- # local sma_addr=127.0.0.1 00:16:19.337 00:23:49 sma.sma_qos -- sma/common.sh@8 -- # local sma_port=8080 00:16:19.337 00:23:49 sma.sma_qos -- sma/common.sh@10 -- # (( i = 0 )) 00:16:19.337 00:23:49 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 )) 00:16:19.337 00:23:49 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:16:19.337 00:23:49 sma.sma_qos -- sma/qos.sh@47 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma.py -c /dev/fd/63 00:16:19.337 00:23:49 sma.sma_qos -- sma/qos.sh@47 -- # cat 00:16:19.337 00:23:49 sma.sma_qos -- sma/common.sh@14 -- # sleep 1s 00:16:19.337 [2024-10-09 00:23:49.800356] Starting SPDK v25.01-pre git sha1 6101e4048 / DPDK 24.03.0 initialization... 00:16:19.337 [2024-10-09 00:23:49.800447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127451 ] 00:16:19.337 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.337 [2024-10-09 00:23:49.906293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.595 [2024-10-09 00:23:50.119842] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.161 00:23:50 sma.sma_qos -- sma/common.sh@10 -- # (( i++ )) 00:16:20.161 00:23:50 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 )) 00:16:20.161 00:23:50 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:16:20.161 00:23:50 sma.sma_qos -- sma/common.sh@14 -- # sleep 1s 00:16:20.419 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:20.419 I0000 00:00:1728426230.952419 2127452 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:20.419 [2024-10-09 00:23:50.965298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.352 00:23:51 sma.sma_qos -- sma/common.sh@10 -- # (( i++ )) 00:16:21.352 00:23:51 sma.sma_qos -- sma/common.sh@10 -- # (( i < 5 )) 00:16:21.352 00:23:51 sma.sma_qos -- sma/common.sh@11 -- # nc -z 127.0.0.1 8080 00:16:21.352 00:23:51 sma.sma_qos -- sma/common.sh@12 -- # return 0 00:16:21.352 00:23:51 sma.sma_qos -- sma/qos.sh@60 -- # rpc_cmd bdev_null_create null0 100 4096 00:16:21.352 00:23:51 sma.sma_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.352 00:23:51 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x 00:16:21.352 null0 00:16:21.352 00:23:51 sma.sma_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.352 00:23:51 sma.sma_qos -- sma/qos.sh@61 -- # rpc_cmd bdev_get_bdevs -b null0 00:16:21.352 00:23:51 sma.sma_qos -- sma/qos.sh@61 -- # jq -r '.[].uuid' 00:16:21.352 00:23:51 sma.sma_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.352 00:23:51 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x 00:16:21.352 00:23:51 sma.sma_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.352 00:23:51 sma.sma_qos -- sma/qos.sh@61 -- # uuid=792f93bd-e298-4962-ac9f-cc56492ae630 00:16:21.352 00:23:51 sma.sma_qos -- sma/qos.sh@62 -- # create_device 792f93bd-e298-4962-ac9f-cc56492ae630 00:16:21.353 00:23:51 
sma.sma_qos -- sma/qos.sh@62 -- # jq -r .handle 00:16:21.353 00:23:51 sma.sma_qos -- sma/qos.sh@24 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:21.353 00:23:51 sma.sma_qos -- sma/qos.sh@24 -- # uuid2base64 792f93bd-e298-4962-ac9f-cc56492ae630 00:16:21.353 00:23:51 sma.sma_qos -- sma/common.sh@20 -- # python 00:16:21.623 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:21.623 I0000 00:00:1728426232.082478 2127932 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:21.623 I0000 00:00:1728426232.084044 2127932 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:21.623 I0000 00:00:1728426232.085761 2127935 subchannel.cc:806] subchannel 0x55e56a26f220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55e56a180670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55e56a20ecc0, grpc.internal.client_channel_call_destination=0x7f02d52f8390, grpc.internal.event_engine=0x55e56a11e190, grpc.internal.security_connector=0x55e56a1286e0, grpc.internal.subchannel_pool=0x55e56a296cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55e56a0655c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:52.0847476+02:00"}), backing off for 1000 ms 00:16:21.623 [2024-10-09 00:23:52.112856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:16:21.623 00:23:52 sma.sma_qos -- sma/qos.sh@62 -- # device=nvmf-tcp:nqn.2016-06.io.spdk:cnode0 00:16:21.623 00:23:52 sma.sma_qos -- sma/qos.sh@65 -- # diff /dev/fd/62 /dev/fd/61 00:16:21.623 00:23:52 sma.sma_qos -- sma/qos.sh@65 -- # jq --sort-keys 00:16:21.623 00:23:52 sma.sma_qos -- sma/qos.sh@65 -- # get_qos_caps 3 00:16:21.623 00:23:52 sma.sma_qos -- sma/qos.sh@65 -- # jq --sort-keys 00:16:21.623 00:23:52 sma.sma_qos -- sma/common.sh@45 -- # local rootdir 00:16:21.623 00:23:52 sma.sma_qos -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh 00:16:21.623 00:23:52 sma.sma_qos -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../.. 
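The uuid2base64 calls traced above (sma/common.sh@20, which drops into "# python") convert a volume UUID into the base64 encoding of its raw 16 bytes, which is the form the SMA gRPC API carries in volume_id. A minimal Python sketch of that conversion follows; the function name mirrors the shell helper and is otherwise an assumption, but the input/output pair and the format_key encoding are taken verbatim from the sma_crypto trace earlier in this log.

import base64
import uuid

def uuid2base64(u: str) -> str:
    # Encode the raw 16 bytes of the UUID, not its 36-character text form.
    return base64.b64encode(uuid.UUID(u).bytes).decode()

# Pair observed in the sma_crypto trace above:
assert uuid2base64("77453fe9-4663-4691-8a6f-a9e33a0a7519") == "d0U/6UZjRpGKb6njOgp1GQ=="
# format_key is plain base64 of the key string (also observed above):
assert base64.b64encode(b"1234567890abcdef1234567890abcdef").decode() == \
    "MTIzNDU2Nzg5MGFiY2RlZjEyMzQ1Njc4OTBhYmNkZWY="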
00:16:21.623 00:23:52 sma.sma_qos -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py 00:16:21.879 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:21.879 I0000 00:00:1728426232.345120 2127966 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:21.879 I0000 00:00:1728426232.346820 2127966 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:21.879 I0000 00:00:1728426232.348468 2127976 subchannel.cc:806] subchannel 0x55733753ef70 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55733740d070, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x557337311320, grpc.internal.client_channel_call_destination=0x7fa40c800390, grpc.internal.event_engine=0x55733738fea0, grpc.internal.security_connector=0x5573373284b0, grpc.internal.subchannel_pool=0x5573373c6620, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5573371f9df0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:52.347442598+02:00"}), backing off for 1000 ms 00:16:21.879 00:23:52 sma.sma_qos -- sma/qos.sh@79 -- # NOT get_qos_caps 1234 00:16:21.879 00:23:52 sma.sma_qos -- common/autotest_common.sh@650 -- # local es=0 00:16:21.879 00:23:52 sma.sma_qos -- common/autotest_common.sh@652 -- # valid_exec_arg get_qos_caps 1234 00:16:21.879 00:23:52 sma.sma_qos -- common/autotest_common.sh@638 -- # local arg=get_qos_caps 00:16:21.879 00:23:52 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.879 00:23:52 sma.sma_qos -- common/autotest_common.sh@642 -- # type -t get_qos_caps 00:16:21.879 00:23:52 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.879 00:23:52 sma.sma_qos -- common/autotest_common.sh@653 -- # get_qos_caps 1234 00:16:21.879 00:23:52 sma.sma_qos -- sma/common.sh@45 -- # local rootdir 00:16:21.880 00:23:52 sma.sma_qos -- sma/common.sh@47 -- # dirname /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/common.sh 00:16:21.880 00:23:52 sma.sma_qos -- sma/common.sh@47 -- # rootdir=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../.. 
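The get_qos_caps check above runs diff /dev/fd/62 /dev/fd/61 with jq --sort-keys applied to both process substitutions, so the expected and reported capability documents are compared only after canonicalizing key order. A rough Python equivalent of that order-insensitive JSON comparison is sketched below; the capability names in the example are hypothetical, not taken from this run.

import json

def same_json(expected: str, actual: str) -> bool:
    # jq --sort-keys canonicalizes key order; json.dumps(sort_keys=True) does the same job here.
    canon = lambda s: json.dumps(json.loads(s), sort_keys=True)
    return canon(expected) == canon(actual)

# Hypothetical capability documents; only key order differs, so they compare equal:
assert same_json('{"max_bandwidth": true, "rw_iops": true}',
                 '{"rw_iops": true, "max_bandwidth": true}')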
00:16:21.880 00:23:52 sma.sma_qos -- sma/common.sh@49 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py 00:16:22.137 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:22.137 I0000 00:00:1728426232.559649 2128000 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:22.137 I0000 00:00:1728426232.561109 2128000 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:22.137 I0000 00:00:1728426232.562814 2128003 subchannel.cc:806] subchannel 0x5636f3228f70 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5636f30f7070, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5636f2ffb320, grpc.internal.client_channel_call_destination=0x7f21f9f85390, grpc.internal.event_engine=0x5636f3079ea0, grpc.internal.security_connector=0x5636f30124b0, grpc.internal.subchannel_pool=0x5636f30b0620, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5636f2ee3df0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:52.561791004+02:00"}), backing off for 1000 ms 00:16:22.137 Traceback (most recent call last): 00:16:22.137 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 74, in 00:16:22.137 main(sys.argv[1:]) 00:16:22.137 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 69, in main 00:16:22.137 result = client.call(request['method'], request.get('params', {})) 00:16:22.137 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:22.137 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/test/sma/../../scripts/sma-client.py", line 43, in call 00:16:22.137 response = func(request=json_format.ParseDict(params, input())) 00:16:22.137 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:22.137 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:16:22.137 return _end_unary_response_blocking(state, call, False, None) 00:16:22.137 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:22.137 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:22.137 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:22.137 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:22.137 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:22.137 status = StatusCode.INVALID_ARGUMENT 00:16:22.137 details = "Invalid device type" 00:16:22.137 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-10-09T00:23:52.564388351+02:00", grpc_status:3, grpc_message:"Invalid device type"}" 00:16:22.137 > 00:16:22.137 00:23:52 sma.sma_qos -- common/autotest_common.sh@653 -- # es=1 00:16:22.137 00:23:52 sma.sma_qos -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.137 00:23:52 sma.sma_qos -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.137 00:23:52 sma.sma_qos -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.137 00:23:52 sma.sma_qos -- sma/qos.sh@82 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:22.137 00:23:52 
sma.sma_qos -- sma/qos.sh@82 -- # uuid2base64 792f93bd-e298-4962-ac9f-cc56492ae630 00:16:22.137 00:23:52 sma.sma_qos -- sma/common.sh@20 -- # python 00:16:22.395 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:22.395 I0000 00:00:1728426232.801870 2128028 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:22.395 I0000 00:00:1728426232.803539 2128028 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:22.395 I0000 00:00:1728426232.805231 2128146 subchannel.cc:806] subchannel 0x55ad44c72220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55ad44b83670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55ad44c11cc0, grpc.internal.client_channel_call_destination=0x7f06e4383390, grpc.internal.event_engine=0x55ad44b21190, grpc.internal.security_connector=0x55ad44b2b6e0, grpc.internal.subchannel_pool=0x55ad44c99cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55ad44a685c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:52.804228218+02:00"}), backing off for 1000 ms 00:16:22.395 {} 00:16:22.395 00:23:52 sma.sma_qos -- sma/qos.sh@94 -- # diff /dev/fd/62 /dev/fd/61 00:16:22.395 00:23:52 sma.sma_qos -- sma/qos.sh@94 -- # jq --sort-keys 00:16:22.395 00:23:52 sma.sma_qos -- sma/qos.sh@94 -- # rpc_cmd bdev_get_bdevs -b null0 00:16:22.395 00:23:52 sma.sma_qos -- sma/qos.sh@94 -- # jq --sort-keys '.[].assigned_rate_limits' 00:16:22.395 00:23:52 sma.sma_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.395 00:23:52 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x 00:16:22.395 00:23:52 sma.sma_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.395 00:23:52 sma.sma_qos -- sma/qos.sh@106 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:22.395 00:23:52 sma.sma_qos -- sma/qos.sh@106 -- # uuid2base64 792f93bd-e298-4962-ac9f-cc56492ae630 00:16:22.395 00:23:52 sma.sma_qos -- sma/common.sh@20 -- # python 00:16:22.654 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:22.654 I0000 00:00:1728426233.116422 2128228 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:22.654 I0000 00:00:1728426233.117885 2128228 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:22.654 I0000 00:00:1728426233.119588 2128261 subchannel.cc:806] subchannel 0x55deb02f3220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55deb0204670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55deb0292cc0, grpc.internal.client_channel_call_destination=0x7f3e2c655390, grpc.internal.event_engine=0x55deb01a2190, grpc.internal.security_connector=0x55deb01ac6e0, grpc.internal.subchannel_pool=0x55deb031acc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55deb00e95c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) 
{created_time:"2024-10-09T00:23:53.118570913+02:00"}), backing off for 1000 ms 00:16:22.654 {} 00:16:22.654 00:23:53 sma.sma_qos -- sma/qos.sh@119 -- # diff /dev/fd/62 /dev/fd/61 00:16:22.654 00:23:53 sma.sma_qos -- sma/qos.sh@119 -- # rpc_cmd bdev_get_bdevs -b null0 00:16:22.654 00:23:53 sma.sma_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.654 00:23:53 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x 00:16:22.654 00:23:53 sma.sma_qos -- sma/qos.sh@119 -- # jq --sort-keys 00:16:22.654 00:23:53 sma.sma_qos -- sma/qos.sh@119 -- # jq --sort-keys '.[].assigned_rate_limits' 00:16:22.654 00:23:53 sma.sma_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.654 00:23:53 sma.sma_qos -- sma/qos.sh@131 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:22.654 00:23:53 sma.sma_qos -- sma/qos.sh@131 -- # uuid2base64 792f93bd-e298-4962-ac9f-cc56492ae630 00:16:22.654 00:23:53 sma.sma_qos -- sma/common.sh@20 -- # python 00:16:22.911 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:22.911 I0000 00:00:1728426233.428875 2128287 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:22.911 I0000 00:00:1728426233.430555 2128287 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:22.911 I0000 00:00:1728426233.432290 2128303 subchannel.cc:806] subchannel 0x5612af854220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x5612af765670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5612af7f3cc0, grpc.internal.client_channel_call_destination=0x7f3c65ab7390, grpc.internal.event_engine=0x5612af703190, grpc.internal.security_connector=0x5612af70d6e0, grpc.internal.subchannel_pool=0x5612af87bcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5612af64a5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:53.431271299+02:00"}), backing off for 1000 ms 00:16:22.911 {} 00:16:22.911 00:23:53 sma.sma_qos -- sma/qos.sh@145 -- # diff /dev/fd/62 /dev/fd/61 00:16:22.911 00:23:53 sma.sma_qos -- sma/qos.sh@145 -- # jq --sort-keys 00:16:22.911 00:23:53 sma.sma_qos -- sma/qos.sh@145 -- # rpc_cmd bdev_get_bdevs -b null0 00:16:22.911 00:23:53 sma.sma_qos -- sma/qos.sh@145 -- # jq --sort-keys '.[].assigned_rate_limits' 00:16:22.911 00:23:53 sma.sma_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.911 00:23:53 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x 00:16:22.911 00:23:53 sma.sma_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.911 00:23:53 sma.sma_qos -- sma/qos.sh@157 -- # unsupported_max_limits=(rd_iops wr_iops) 00:16:22.911 00:23:53 sma.sma_qos -- sma/qos.sh@159 -- # for limit in "${unsupported_max_limits[@]}" 00:16:22.911 00:23:53 sma.sma_qos -- sma/qos.sh@160 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:22.911 00:23:53 sma.sma_qos -- sma/qos.sh@160 -- # uuid2base64 792f93bd-e298-4962-ac9f-cc56492ae630 00:16:22.911 00:23:53 sma.sma_qos -- sma/common.sh@20 -- # python 00:16:23.168 00:23:53 sma.sma_qos -- common/autotest_common.sh@650 -- # local es=0 00:16:23.168 00:23:53 sma.sma_qos -- common/autotest_common.sh@652 -- # 
valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.168 00:23:53 sma.sma_qos -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.168 00:23:53 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.168 00:23:53 sma.sma_qos -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.168 00:23:53 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.168 00:23:53 sma.sma_qos -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.168 00:23:53 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.168 00:23:53 sma.sma_qos -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.168 00:23:53 sma.sma_qos -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]] 00:16:23.168 00:23:53 sma.sma_qos -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.168 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:23.168 I0000 00:00:1728426233.734615 2128333 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:23.168 I0000 00:00:1728426233.736126 2128333 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:23.168 I0000 00:00:1728426233.737837 2128334 subchannel.cc:806] subchannel 0x55cbc145a220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55cbc136b670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55cbc13f9cc0, grpc.internal.client_channel_call_destination=0x7f313a092390, grpc.internal.event_engine=0x55cbc1309190, grpc.internal.security_connector=0x55cbc13136e0, grpc.internal.subchannel_pool=0x55cbc1481cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55cbc12505c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:53.736814181+02:00"}), backing off for 1000 ms 00:16:23.168 Traceback (most recent call last): 00:16:23.168 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in 00:16:23.168 main(sys.argv[1:]) 00:16:23.168 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:16:23.168 result = client.call(request['method'], request.get('params', {})) 00:16:23.168 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.168 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:16:23.168 response = func(request=json_format.ParseDict(params, input())) 00:16:23.168 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.168 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:16:23.168 return _end_unary_response_blocking(state, call, False, None) 00:16:23.168 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.169 File 
"/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:23.169 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:23.169 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.169 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:23.169 status = StatusCode.INVALID_ARGUMENT 00:16:23.169 details = "Unsupported QoS limit: maximum.rd_iops" 00:16:23.169 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Unsupported QoS limit: maximum.rd_iops", grpc_status:3, created_time:"2024-10-09T00:23:53.753716565+02:00"}" 00:16:23.169 > 00:16:23.169 00:23:53 sma.sma_qos -- common/autotest_common.sh@653 -- # es=1 00:16:23.169 00:23:53 sma.sma_qos -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.169 00:23:53 sma.sma_qos -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.169 00:23:53 sma.sma_qos -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.169 00:23:53 sma.sma_qos -- sma/qos.sh@159 -- # for limit in "${unsupported_max_limits[@]}" 00:16:23.169 00:23:53 sma.sma_qos -- sma/qos.sh@160 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.169 00:23:53 sma.sma_qos -- sma/qos.sh@160 -- # uuid2base64 792f93bd-e298-4962-ac9f-cc56492ae630 00:16:23.169 00:23:53 sma.sma_qos -- sma/common.sh@20 -- # python 00:16:23.426 00:23:53 sma.sma_qos -- common/autotest_common.sh@650 -- # local es=0 00:16:23.426 00:23:53 sma.sma_qos -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.426 00:23:53 sma.sma_qos -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.426 00:23:53 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.426 00:23:53 sma.sma_qos -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.426 00:23:53 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.426 00:23:53 sma.sma_qos -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.426 00:23:53 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.426 00:23:53 sma.sma_qos -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.426 00:23:53 sma.sma_qos -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]] 00:16:23.426 00:23:53 sma.sma_qos -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.426 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:23.426 I0000 00:00:1728426234.003044 2128358 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:23.426 I0000 00:00:1728426234.004643 2128358 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:23.426 I0000 00:00:1728426234.006418 2128372 subchannel.cc:806] subchannel 0x55a1d49c5220 {address=ipv6:%5B::1%5D:8080, 
args={grpc.client_channel_factory=0x55a1d48d6670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55a1d4964cc0, grpc.internal.client_channel_call_destination=0x7f38e9a5c390, grpc.internal.event_engine=0x55a1d4874190, grpc.internal.security_connector=0x55a1d487e6e0, grpc.internal.subchannel_pool=0x55a1d49eccc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55a1d47bb5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:54.005392539+02:00"}), backing off for 1000 ms 00:16:23.426 Traceback (most recent call last): 00:16:23.426 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in 00:16:23.426 main(sys.argv[1:]) 00:16:23.426 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:16:23.426 result = client.call(request['method'], request.get('params', {})) 00:16:23.426 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.426 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:16:23.426 response = func(request=json_format.ParseDict(params, input())) 00:16:23.426 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.426 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:16:23.426 return _end_unary_response_blocking(state, call, False, None) 00:16:23.426 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.426 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:23.426 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:23.426 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.426 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:23.426 status = StatusCode.INVALID_ARGUMENT 00:16:23.426 details = "Unsupported QoS limit: maximum.wr_iops" 00:16:23.426 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {grpc_message:"Unsupported QoS limit: maximum.wr_iops", grpc_status:3, created_time:"2024-10-09T00:23:54.022405392+02:00"}" 00:16:23.426 > 00:16:23.426 00:23:54 sma.sma_qos -- common/autotest_common.sh@653 -- # es=1 00:16:23.426 00:23:54 sma.sma_qos -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.426 00:23:54 sma.sma_qos -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.426 00:23:54 sma.sma_qos -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.426 00:23:54 sma.sma_qos -- sma/qos.sh@178 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.426 00:23:54 sma.sma_qos -- sma/qos.sh@178 -- # uuid2base64 792f93bd-e298-4962-ac9f-cc56492ae630 00:16:23.427 00:23:54 sma.sma_qos -- sma/common.sh@20 -- # python 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@650 -- # local es=0 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 
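Each NOT .../sma-client.py step in this stretch is a deliberate negative test: the request carries a QoS limit the SMA server does not support (maximum.rd_iops above, maximum.wr_iops below), and the step only counts as passing because the RPC is rejected with INVALID_ARGUMENT. In Python terms the tracebacks amount to catching grpc.RpcError and inspecting its code and details; a sketch of that pattern, where the call and request are placeholders and only the status code and message text come from this log:

import grpc

def expect_unsupported_limit(call, request):
    # The RPC is supposed to fail; a clean return would be the real test failure.
    try:
        call(request)
    except grpc.RpcError as err:
        assert err.code() == grpc.StatusCode.INVALID_ARGUMENT
        assert "Unsupported QoS limit" in err.details()
        return
    raise AssertionError("RPC unexpectedly succeeded")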
00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]] 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.684 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:23.684 I0000 00:00:1728426234.268298 2128446 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:23.684 I0000 00:00:1728426234.270010 2128446 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:23.684 I0000 00:00:1728426234.271746 2128549 subchannel.cc:806] subchannel 0x557570403220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x557570314670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x5575703a2cc0, grpc.internal.client_channel_call_destination=0x7f8ae9a12390, grpc.internal.event_engine=0x5575702b2190, grpc.internal.security_connector=0x5575702bc6e0, grpc.internal.subchannel_pool=0x55757042acc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x5575701f95c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:54.270721898+02:00"}), backing off for 1000 ms 00:16:23.684 [2024-10-09 00:23:54.282890] nvmf_rpc.c: 294:rpc_nvmf_get_subsystems: *ERROR*: subsystem 'nqn.2016-06.io.spdk:cnode0-invalid' does not exist 00:16:23.684 Traceback (most recent call last): 00:16:23.684 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in 00:16:23.684 main(sys.argv[1:]) 00:16:23.684 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:16:23.684 result = client.call(request['method'], request.get('params', {})) 00:16:23.684 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.684 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:16:23.684 response = func(request=json_format.ParseDict(params, input())) 00:16:23.684 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.684 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:16:23.684 return _end_unary_response_blocking(state, call, False, None) 00:16:23.684 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.684 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:23.684 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:23.684 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.684 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:23.684 status = 
StatusCode.NOT_FOUND 00:16:23.684 details = "No device associated with device_handle could be found" 00:16:23.684 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-10-09T00:23:54.287176936+02:00", grpc_status:5, grpc_message:"No device associated with device_handle could be found"}" 00:16:23.684 > 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@653 -- # es=1 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.684 00:23:54 sma.sma_qos -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.684 00:23:54 sma.sma_qos -- sma/qos.sh@191 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.684 00:23:54 sma.sma_qos -- sma/qos.sh@191 -- # uuidgen 00:16:23.941 00:23:54 sma.sma_qos -- sma/qos.sh@191 -- # uuid2base64 c1cb0510-b9de-4759-85af-d7d6114b0fdb 00:16:23.941 00:23:54 sma.sma_qos -- sma/common.sh@20 -- # python 00:16:23.941 00:23:54 sma.sma_qos -- common/autotest_common.sh@650 -- # local es=0 00:16:23.941 00:23:54 sma.sma_qos -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.941 00:23:54 sma.sma_qos -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.941 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.941 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.941 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.941 00:23:54 sma.sma_qos -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.941 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.941 00:23:54 sma.sma_qos -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.941 00:23:54 sma.sma_qos -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]] 00:16:23.941 00:23:54 sma.sma_qos -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:23.941 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:23.941 I0000 00:00:1728426234.535204 2128621 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:23.941 I0000 00:00:1728426234.536891 2128621 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:23.941 I0000 00:00:1728426234.538602 2128629 subchannel.cc:806] subchannel 0x562cf2eb9220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x562cf2dca670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x562cf2e58cc0, grpc.internal.client_channel_call_destination=0x7f44de17e390, grpc.internal.event_engine=0x562cf2d68190, grpc.internal.security_connector=0x562cf2d726e0, grpc.internal.subchannel_pool=0x562cf2ee0cc0, grpc.primary_user_agent=grpc-python/1.65.1, 
grpc.resource_quota=0x562cf2caf5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:54.537586611+02:00"}), backing off for 1000 ms 00:16:23.941 [2024-10-09 00:23:54.543652] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: c1cb0510-b9de-4759-85af-d7d6114b0fdb 00:16:23.941 Traceback (most recent call last): 00:16:23.941 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in 00:16:23.941 main(sys.argv[1:]) 00:16:23.941 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:16:23.941 result = client.call(request['method'], request.get('params', {})) 00:16:23.941 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.941 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:16:23.941 response = func(request=json_format.ParseDict(params, input())) 00:16:23.941 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.941 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:16:23.941 return _end_unary_response_blocking(state, call, False, None) 00:16:23.941 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.941 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:23.941 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:23.941 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:23.941 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:23.941 status = StatusCode.NOT_FOUND 00:16:23.941 details = "No volume associated with volume_id could be found" 00:16:23.941 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-10-09T00:23:54.547899083+02:00", grpc_status:5, grpc_message:"No volume associated with volume_id could be found"}" 00:16:23.941 > 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@653 -- # es=1 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.199 00:23:54 sma.sma_qos -- sma/qos.sh@205 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@650 -- # local es=0 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.199 00:23:54 
sma.sma_qos -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]] 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.199 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:24.199 I0000 00:00:1728426234.763098 2128651 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:24.199 I0000 00:00:1728426234.764606 2128651 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:24.199 I0000 00:00:1728426234.766274 2128652 subchannel.cc:806] subchannel 0x55bbfdc61220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x55bbfdb72670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x55bbfdc00cc0, grpc.internal.client_channel_call_destination=0x7f972911b390, grpc.internal.event_engine=0x55bbfdb8e360, grpc.internal.security_connector=0x55bbfdb1a6e0, grpc.internal.subchannel_pool=0x55bbfdc88cc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x55bbfda575c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:54.765250261+02:00"}), backing off for 1000 ms 00:16:24.199 Traceback (most recent call last): 00:16:24.199 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in 00:16:24.199 main(sys.argv[1:]) 00:16:24.199 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:16:24.199 result = client.call(request['method'], request.get('params', {})) 00:16:24.199 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:24.199 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:16:24.199 response = func(request=json_format.ParseDict(params, input())) 00:16:24.199 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:24.199 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:16:24.199 return _end_unary_response_blocking(state, call, False, None) 00:16:24.199 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:24.199 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:24.199 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:24.199 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:24.199 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:24.199 status = StatusCode.INVALID_ARGUMENT 00:16:24.199 details = "Invalid volume ID" 00:16:24.199 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-10-09T00:23:54.767734545+02:00", grpc_status:3, grpc_message:"Invalid volume ID"}" 00:16:24.199 > 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@653 -- # es=1 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
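The es bookkeeping wrapped around these negative tests comes from the NOT() helper in autotest_common.sh: run the command, capture its exit status, treat statuses above 128 (signal deaths, per shell convention) as hard failures, and otherwise succeed only when the command failed. A loose Python rendering of that contract, with the signal handling inferred from the (( es > 128 )) check in the trace:

import subprocess

def NOT(*cmd: str) -> bool:
    # Succeed only if the command fails "normally".
    es = subprocess.run(cmd).returncode
    if es < 0 or es > 128:  # signal death; the shell reports these as 128+N
        raise RuntimeError(f"{cmd[0]} died with status {es}")
    return es != 0

# e.g. NOT("scripts/sma-client.py") holds when the server rejects the request.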
00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.199 00:23:54 sma.sma_qos -- sma/qos.sh@217 -- # NOT /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.199 00:23:54 sma.sma_qos -- sma/qos.sh@217 -- # uuid2base64 792f93bd-e298-4962-ac9f-cc56492ae630 00:16:24.199 00:23:54 sma.sma_qos -- sma/common.sh@20 -- # python 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@650 -- # local es=0 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py ]] 00:16:24.199 00:23:54 sma.sma_qos -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py 00:16:24.457 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR 00:16:24.457 I0000 00:00:1728426235.014474 2128676 config.cc:230] gRPC experiments enabled: call_status_override_on_cancellation, event_engine_dns, event_engine_listener, http2_stats_fix, monitoring_experiment, pick_first_new, trace_record_callops, work_serializer_clears_time_cache 00:16:24.457 I0000 00:00:1728426235.015990 2128676 http_proxy_mapper.cc:252] not using proxy for host in no_proxy list 'dns:///localhost:8080' 00:16:24.457 I0000 00:00:1728426235.017691 2128681 subchannel.cc:806] subchannel 0x562e634e4220 {address=ipv6:%5B::1%5D:8080, args={grpc.client_channel_factory=0x562e633f5670, grpc.default_authority=localhost:8080, grpc.internal.channel_credentials=0x562e63483cc0, grpc.internal.client_channel_call_destination=0x7f3ff081e390, grpc.internal.event_engine=0x562e63393190, grpc.internal.security_connector=0x562e6339d6e0, grpc.internal.subchannel_pool=0x562e6350bcc0, grpc.primary_user_agent=grpc-python/1.65.1, grpc.resource_quota=0x562e632da5c0, grpc.server_uri=dns:///localhost:8080}}: connect failed (UNKNOWN:Failed to connect to remote host: connect: Connection refused (111) {created_time:"2024-10-09T00:23:55.016644157+02:00"}), backing off for 1000 ms 00:16:24.457 Traceback (most recent call last): 00:16:24.457 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 74, in <module> 00:16:24.457 main(sys.argv[1:]) 00:16:24.457 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 69, in main 00:16:24.457 result = client.call(request['method'], request.get('params', {})) 00:16:24.457 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:24.457 File "/var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/sma-client.py", line 43, in call 00:16:24.457 response = func(request=json_format.ParseDict(params, input())) 00:16:24.457 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:24.457 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1181, in __call__ 00:16:24.457 return _end_unary_response_blocking(state, call, False, None) 00:16:24.457 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:24.457 File "/usr/local/lib64/python3.12/site-packages/grpc/_channel.py", line 1006, in _end_unary_response_blocking 00:16:24.457 raise _InactiveRpcError(state) # pytype: disable=not-instantiable 00:16:24.457 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 00:16:24.457 grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with: 00:16:24.457 status = StatusCode.NOT_FOUND 00:16:24.457 details = "Invalid device handle" 00:16:24.457 debug_error_string = "UNKNOWN:Error received from peer ipv4:127.0.0.1:8080 {created_time:"2024-10-09T00:23:55.019199326+02:00", grpc_status:5, grpc_message:"Invalid device handle"}" 00:16:24.457 > 00:16:24.457 00:23:55 sma.sma_qos -- common/autotest_common.sh@653 -- # es=1 00:16:24.457 00:23:55 sma.sma_qos -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:24.457 00:23:55 sma.sma_qos -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:24.457 00:23:55 sma.sma_qos -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:24.457 00:23:55 sma.sma_qos -- sma/qos.sh@230 -- # diff /dev/fd/62 /dev/fd/61 00:16:24.457 00:23:55 sma.sma_qos -- sma/qos.sh@230 -- # jq --sort-keys 00:16:24.457 00:23:55 sma.sma_qos -- sma/qos.sh@230 -- # rpc_cmd bdev_get_bdevs -b null0 00:16:24.457 00:23:55 sma.sma_qos -- sma/qos.sh@230 -- # jq --sort-keys '.[].assigned_rate_limits' 00:16:24.457 00:23:55 sma.sma_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.457 00:23:55 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x 00:16:24.457 00:23:55 sma.sma_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.457 00:23:55 sma.sma_qos -- sma/qos.sh@241 -- # trap - SIGINT SIGTERM EXIT 00:16:24.457 00:23:55 sma.sma_qos -- sma/qos.sh@242 -- # cleanup 00:16:24.457 00:23:55 sma.sma_qos -- sma/qos.sh@19 -- # killprocess 2127451 00:16:24.457 00:23:55 sma.sma_qos -- common/autotest_common.sh@950 -- # '[' -z 2127451 ']' 00:16:24.457 00:23:55 sma.sma_qos -- common/autotest_common.sh@954 -- # kill -0 2127451 00:16:24.457 00:23:55 sma.sma_qos -- common/autotest_common.sh@955 -- # uname 00:16:24.457 00:23:55 sma.sma_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:24.457 00:23:55 sma.sma_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2127451 00:16:24.714 00:23:55 sma.sma_qos -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:24.714 00:23:55 sma.sma_qos -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:24.714 00:23:55 sma.sma_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2127451' 00:16:24.714 killing process with pid 2127451 00:16:24.714 00:23:55 sma.sma_qos -- common/autotest_common.sh@969 -- # kill 2127451 00:16:24.714 00:23:55 sma.sma_qos -- common/autotest_common.sh@974 -- # wait 2127451 00:16:27.253 00:23:57 sma.sma_qos -- sma/qos.sh@20 -- # killprocess 2127452 00:16:27.253 00:23:57 sma.sma_qos -- common/autotest_common.sh@950 -- # '[' -z 2127452 ']' 00:16:27.253 00:23:57 sma.sma_qos -- 
common/autotest_common.sh@954 -- # kill -0 2127452 00:16:27.253 00:23:57 sma.sma_qos -- common/autotest_common.sh@955 -- # uname 00:16:27.253 00:23:57 sma.sma_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:27.253 00:23:57 sma.sma_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2127452 00:16:27.253 00:23:57 sma.sma_qos -- common/autotest_common.sh@956 -- # process_name=python3 00:16:27.253 00:23:57 sma.sma_qos -- common/autotest_common.sh@960 -- # '[' python3 = sudo ']' 00:16:27.253 00:23:57 sma.sma_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2127452' 00:16:27.253 killing process with pid 2127452 00:16:27.253 00:23:57 sma.sma_qos -- common/autotest_common.sh@969 -- # kill 2127452 00:16:27.253 00:23:57 sma.sma_qos -- common/autotest_common.sh@974 -- # wait 2127452 00:16:27.253 00:16:27.253 real 0m8.104s 00:16:27.253 user 0m10.704s 00:16:27.253 sys 0m1.179s 00:16:27.253 00:23:57 sma.sma_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:27.253 00:23:57 sma.sma_qos -- common/autotest_common.sh@10 -- # set +x 00:16:27.253 ************************************ 00:16:27.253 END TEST sma_qos 00:16:27.253 ************************************ 00:16:27.253 00:16:27.253 real 3m33.616s 00:16:27.253 user 6m6.588s 00:16:27.253 sys 0m21.475s 00:16:27.253 00:23:57 sma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:27.253 00:23:57 sma -- common/autotest_common.sh@10 -- # set +x 00:16:27.253 ************************************ 00:16:27.253 END TEST sma 00:16:27.253 ************************************ 00:16:27.253 00:23:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:16:27.253 00:23:57 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:16:27.253 00:23:57 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:16:27.253 00:23:57 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:16:27.253 00:23:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:27.253 00:23:57 -- common/autotest_common.sh@10 -- # set +x 00:16:27.253 00:23:57 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:16:27.253 00:23:57 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:16:27.253 00:23:57 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:16:27.253 00:23:57 -- common/autotest_common.sh@10 -- # set +x 00:16:32.527 INFO: APP EXITING 00:16:32.527 INFO: killing all VMs 00:16:32.527 INFO: killing vhost app 00:16:32.527 INFO: EXIT DONE 00:16:34.449 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:16:34.449 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:16:34.449 
0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:16:34.449 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:16:36.988 Cleaning 00:16:36.988 Removing: /dev/shm/spdk_tgt_trace.pid1996829 00:16:36.988 Removing: /var/run/dpdk/spdk_pid1992940 00:16:36.988 Removing: /var/run/dpdk/spdk_pid1994432 00:16:36.988 Removing: /var/run/dpdk/spdk_pid1996829 00:16:36.988 Removing: /var/run/dpdk/spdk_pid1997902 00:16:36.988 Removing: /var/run/dpdk/spdk_pid1999192 00:16:36.988 Removing: /var/run/dpdk/spdk_pid1999864 00:16:36.988 Removing: /var/run/dpdk/spdk_pid2001650 00:16:36.988 Removing: /var/run/dpdk/spdk_pid2001877 00:16:36.988 Removing: /var/run/dpdk/spdk_pid2002724 00:16:36.988 Removing: /var/run/dpdk/spdk_pid2003504 00:16:36.988 Removing: /var/run/dpdk/spdk_pid2004318 00:16:36.988 Removing: /var/run/dpdk/spdk_pid2005164 00:16:36.988 Removing: /var/run/dpdk/spdk_pid2005905 00:16:36.988 Removing: /var/run/dpdk/spdk_pid2006161 00:16:36.988 Removing: /var/run/dpdk/spdk_pid2006407 00:16:36.988 Removing: /var/run/dpdk/spdk_pid2006903 00:16:36.988 Removing: /var/run/dpdk/spdk_pid2008072 00:16:36.988 Removing: /var/run/dpdk/spdk_pid2011447 00:16:36.988 Removing: /var/run/dpdk/spdk_pid2012147 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2012847 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2013019 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2014802 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2014931 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2016746 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2016972 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2017677 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2017903 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2018388 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2018620 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2020283 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2020535 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2020888 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2023150 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2034424 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2044801 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2057703 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2070896 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2071365 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2078042 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2088290 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2094233 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2100295 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2104026 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2104027 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2104028 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2118897 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2122779 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2123105 00:16:36.989 Removing: /var/run/dpdk/spdk_pid2127451 00:16:36.989 Clean 00:16:36.989 00:24:07 -- common/autotest_common.sh@1451 -- # return 0 00:16:36.989 00:24:07 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:16:36.989 00:24:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:36.989 00:24:07 -- common/autotest_common.sh@10 -- # set +x 00:16:36.989 00:24:07 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:16:36.989 00:24:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:36.989 00:24:07 -- common/autotest_common.sh@10 -- # set +x 00:16:36.989 00:24:07 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt 00:16:36.990 00:24:07 -- spdk/autotest.sh@390 -- # [[ -f 
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/udev.log ]] 00:16:36.990 00:24:07 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/udev.log 00:16:36.990 00:24:07 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:16:37.255 00:24:07 -- spdk/autotest.sh@394 -- # hostname 00:16:37.255 00:24:07 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_test.info 00:16:37.255 geninfo: WARNING: invalid characters removed from testname! 00:16:59.205 00:24:26 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info 00:16:59.205 00:24:29 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info 00:17:01.107 00:24:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info 00:17:02.488 00:24:33 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info 00:17:04.402 00:24:34 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info 00:17:06.300 00:24:36 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/cov_total.info 00:17:07.674 00:24:38 -- 
spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:17:07.933 00:24:38 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:17:07.933 00:24:38 -- common/autotest_common.sh@1681 -- $ lcov --version 00:17:07.933 00:24:38 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:17:07.933 00:24:38 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:17:07.933 00:24:38 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:17:07.933 00:24:38 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:17:07.933 00:24:38 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:17:07.933 00:24:38 -- scripts/common.sh@336 -- $ IFS=.-: 00:17:07.933 00:24:38 -- scripts/common.sh@336 -- $ read -ra ver1 00:17:07.933 00:24:38 -- scripts/common.sh@337 -- $ IFS=.-: 00:17:07.933 00:24:38 -- scripts/common.sh@337 -- $ read -ra ver2 00:17:07.933 00:24:38 -- scripts/common.sh@338 -- $ local 'op=<' 00:17:07.933 00:24:38 -- scripts/common.sh@340 -- $ ver1_l=2 00:17:07.933 00:24:38 -- scripts/common.sh@341 -- $ ver2_l=1 00:17:07.933 00:24:38 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:17:07.933 00:24:38 -- scripts/common.sh@344 -- $ case "$op" in 00:17:07.933 00:24:38 -- scripts/common.sh@345 -- $ : 1 00:17:07.933 00:24:38 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:17:07.933 00:24:38 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:07.933 00:24:38 -- scripts/common.sh@365 -- $ decimal 1 00:17:07.933 00:24:38 -- scripts/common.sh@353 -- $ local d=1 00:17:07.933 00:24:38 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:17:07.933 00:24:38 -- scripts/common.sh@355 -- $ echo 1 00:17:07.933 00:24:38 -- scripts/common.sh@365 -- $ ver1[v]=1 00:17:07.933 00:24:38 -- scripts/common.sh@366 -- $ decimal 2 00:17:07.933 00:24:38 -- scripts/common.sh@353 -- $ local d=2 00:17:07.933 00:24:38 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:17:07.933 00:24:38 -- scripts/common.sh@355 -- $ echo 2 00:17:07.933 00:24:38 -- scripts/common.sh@366 -- $ ver2[v]=2 00:17:07.933 00:24:38 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:17:07.933 00:24:38 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:17:07.933 00:24:38 -- scripts/common.sh@368 -- $ return 0 00:17:07.933 00:24:38 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.933 00:24:38 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:17:07.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.933 --rc genhtml_branch_coverage=1 00:17:07.933 --rc genhtml_function_coverage=1 00:17:07.933 --rc genhtml_legend=1 00:17:07.933 --rc geninfo_all_blocks=1 00:17:07.933 --rc geninfo_unexecuted_blocks=1 00:17:07.933 00:17:07.933 ' 00:17:07.933 00:24:38 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:17:07.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.933 --rc genhtml_branch_coverage=1 00:17:07.933 --rc genhtml_function_coverage=1 00:17:07.933 --rc genhtml_legend=1 00:17:07.933 --rc geninfo_all_blocks=1 00:17:07.933 --rc geninfo_unexecuted_blocks=1 00:17:07.933 00:17:07.933 ' 00:17:07.933 00:24:38 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:17:07.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.933 --rc genhtml_branch_coverage=1 00:17:07.933 --rc genhtml_function_coverage=1 00:17:07.933 --rc genhtml_legend=1 00:17:07.933 --rc geninfo_all_blocks=1 00:17:07.933 --rc geninfo_unexecuted_blocks=1 00:17:07.933 00:17:07.933 ' 00:17:07.933 
00:24:38 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:17:07.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.933 --rc genhtml_branch_coverage=1 00:17:07.933 --rc genhtml_function_coverage=1 00:17:07.933 --rc genhtml_legend=1 00:17:07.933 --rc geninfo_all_blocks=1 00:17:07.933 --rc geninfo_unexecuted_blocks=1 00:17:07.933 00:17:07.933 ' 00:17:07.933 00:24:38 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/common.sh 00:17:07.933 00:24:38 -- scripts/common.sh@15 -- $ shopt -s extglob 00:17:07.933 00:24:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:17:07.933 00:24:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.933 00:24:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.933 00:24:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.933 00:24:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.933 00:24:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.933 00:24:38 -- paths/export.sh@5 -- $ export PATH 00:17:07.933 00:24:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.933 00:24:38 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output 00:17:07.933 00:24:38 -- common/autobuild_common.sh@486 -- $ date +%s 00:17:07.933 00:24:38 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728426278.XXXXXX 00:17:07.933 00:24:38 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728426278.5T36Ye 00:17:07.933 00:24:38 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:17:07.933 00:24:38 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:17:07.933 00:24:38 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/' 00:17:07.933 00:24:38 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp' 00:17:07.933 00:24:38 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/vfio-user-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:17:07.933 00:24:38 -- common/autobuild_common.sh@502 -- $ get_config_params 00:17:07.933 00:24:38 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:17:07.933 00:24:38 -- common/autotest_common.sh@10 -- $ set +x 00:17:07.933 00:24:38 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-sma --with-crypto' 00:17:07.933 00:24:38 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:17:07.933 00:24:38 -- pm/common@17 -- $ local monitor 00:17:07.933 00:24:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:07.933 00:24:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:07.933 00:24:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:07.933 00:24:38 -- pm/common@21 -- $ date +%s 00:17:07.933 00:24:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:07.933 00:24:38 -- pm/common@21 -- $ date +%s 00:17:07.933 00:24:38 -- pm/common@25 -- $ sleep 1 00:17:07.933 00:24:38 -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728426278 00:17:07.933 00:24:38 -- pm/common@21 -- $ date +%s 00:17:07.933 00:24:38 -- pm/common@21 -- $ date +%s 00:17:07.933 00:24:38 -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728426278 00:17:07.933 00:24:38 -- pm/common@21 -- $ /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728426278 00:17:07.933 00:24:38 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/vfio-user-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728426278 00:17:07.933 Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728426278_collect-cpu-load.pm.log 00:17:07.933 Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728426278_collect-vmstat.pm.log 00:17:07.933 Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728426278_collect-cpu-temp.pm.log 00:17:07.933 Redirecting to /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728426278_collect-bmc-pm.bmc.pm.log 00:17:08.868 00:24:39 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:17:08.868 00:24:39 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:17:08.868 00:24:39 -- spdk/autopackage.sh@14 -- $ timing_finish 00:17:08.868 00:24:39 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:17:08.868 00:24:39 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:17:08.868 00:24:39 -- 
common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/timing.txt 00:17:09.127 00:24:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:17:09.127 00:24:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:17:09.127 00:24:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:17:09.127 00:24:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:09.127 00:24:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:17:09.127 00:24:39 -- pm/common@44 -- $ pid=2137003 00:17:09.127 00:24:39 -- pm/common@50 -- $ kill -TERM 2137003 00:17:09.127 00:24:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:09.127 00:24:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:17:09.127 00:24:39 -- pm/common@44 -- $ pid=2137005 00:17:09.127 00:24:39 -- pm/common@50 -- $ kill -TERM 2137005 00:17:09.127 00:24:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:09.127 00:24:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:17:09.127 00:24:39 -- pm/common@44 -- $ pid=2137007 00:17:09.127 00:24:39 -- pm/common@50 -- $ kill -TERM 2137007 00:17:09.127 00:24:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:09.127 00:24:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:17:09.127 00:24:39 -- pm/common@44 -- $ pid=2137035 00:17:09.127 00:24:39 -- pm/common@50 -- $ sudo -E kill -TERM 2137035 00:17:09.127 + [[ -n 1909558 ]] 00:17:09.127 + sudo kill 1909558 00:17:09.136 [Pipeline] } 00:17:09.151 [Pipeline] // stage 00:17:09.156 [Pipeline] } 00:17:09.171 [Pipeline] // timeout 00:17:09.176 [Pipeline] } 00:17:09.189 [Pipeline] // catchError 00:17:09.195 [Pipeline] } 00:17:09.210 [Pipeline] // wrap 00:17:09.216 [Pipeline] } 00:17:09.236 [Pipeline] // catchError 00:17:09.245 [Pipeline] stage 00:17:09.247 [Pipeline] { (Epilogue) 00:17:09.260 [Pipeline] catchError 00:17:09.261 [Pipeline] { 00:17:09.275 [Pipeline] echo 00:17:09.277 Cleanup processes 00:17:09.283 [Pipeline] sh 00:17:09.564 + sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk 00:17:09.564 2137167 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/vfio-user-phy-autotest/spdk/../output/power/sdr.cache 00:17:09.564 2137511 sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk 00:17:09.576 [Pipeline] sh 00:17:09.855 ++ sudo pgrep -af /var/jenkins/workspace/vfio-user-phy-autotest/spdk 00:17:09.855 ++ grep -v 'sudo pgrep' 00:17:09.855 ++ awk '{print $1}' 00:17:09.855 + sudo kill -9 2137167 00:17:09.864 [Pipeline] sh 00:17:10.139 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:17:18.381 [Pipeline] sh 00:17:18.662 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:17:18.662 Artifacts sizes are good 00:17:18.674 [Pipeline] archiveArtifacts 00:17:18.681 Archiving artifacts 00:17:18.778 [Pipeline] sh 00:17:19.058 + sudo chown -R sys_sgci: /var/jenkins/workspace/vfio-user-phy-autotest 00:17:19.073 [Pipeline] cleanWs 00:17:19.083 [WS-CLEANUP] Deleting project workspace... 00:17:19.083 [WS-CLEANUP] Deferred wipeout is used... 
00:17:19.089 [WS-CLEANUP] done 00:17:19.091 [Pipeline] } 00:17:19.107 [Pipeline] // catchError 00:17:19.119 [Pipeline] sh 00:17:19.402 + logger -p user.info -t JENKINS-CI 00:17:19.410 [Pipeline] } 00:17:19.422 [Pipeline] // stage 00:17:19.427 [Pipeline] } 00:17:19.439 [Pipeline] // node 00:17:19.444 [Pipeline] End of Pipeline 00:17:19.497 Finished: SUCCESS