00:00:00.001 Started by upstream project "autotest-per-patch" build number 126169
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.063 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/iscsi-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.063 The recommended git tool is: git
00:00:00.063 using credential 00000000-0000-0000-0000-000000000002
00:00:00.065 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/iscsi-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.115 Fetching changes from the remote Git repository
00:00:00.117 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.181 Using shallow fetch with depth 1
00:00:00.181 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.181 > git --version # timeout=10
00:00:00.239 > git --version # 'git version 2.39.2'
00:00:00.239 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.288 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.288 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.857 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.870 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.882 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:04.882 > git config core.sparsecheckout # timeout=10
00:00:04.895 > git read-tree -mu HEAD # timeout=10
00:00:04.923 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:04.963 Commit message: "inventory: add WCP3 to free inventory"
00:00:04.963 > git rev-list --no-walk b0ebb039b16703d64cc7534b6e0fa0780ed1e683 # timeout=10
00:00:05.089 [Pipeline] Start of Pipeline
00:00:05.104 [Pipeline] library
00:00:05.107 Loading library shm_lib@master
00:00:05.108 Library shm_lib@master is cached. Copying from home.
00:00:05.126 [Pipeline] node
00:00:20.127 Still waiting to schedule task
00:00:20.128 ‘CYP11’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘CYP13’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘CYP7’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘CYP8’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘FCP03’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘FCP04’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘FCP07’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘FCP08’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘FCP09’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘FCP10’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘FCP11’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘FCP12’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘GP10’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘GP13’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘GP14’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘GP15’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘GP16’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘GP18’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘GP19’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘GP20’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘GP21’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘GP22’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘GP3’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘GP4’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘GP5’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘Jenkins’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘ME1’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘ME2’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘ME3’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘PE5’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM10’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM13’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM1’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM25’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM26’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM27’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM28’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM29’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM2’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM30’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM31’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM32’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM33’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM34’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM35’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM5’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM6’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM7’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘SM8’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘VM-host-PE1’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘VM-host-PE2’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘VM-host-PE3’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘VM-host-PE4’ doesn’t have label ‘vagrant-vm-host’
00:00:20.128 ‘VM-host-SM18’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘VM-host-WFP1’ is offline
00:00:20.129 ‘VM-host-WFP25’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WCP0’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WCP2’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WCP5’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP11’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP15’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP17’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP28’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP2’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP31’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP32’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP33’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP34’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP35’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP36’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP37’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP38’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP47’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP49’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP63’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP65’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP66’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP67’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP68’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP69’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘WFP9’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘ipxe-staging’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘prc_bsc_waikikibeach64’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘spdk-pxe-01’ doesn’t have label ‘vagrant-vm-host’
00:00:20.129 ‘spdk-pxe-02’ doesn’t have label ‘vagrant-vm-host’
00:11:53.048 Running on VM-host-WFP1 in /var/jenkins/workspace/iscsi-uring-vg-autotest
00:11:53.050 [Pipeline] {
00:11:53.064 [Pipeline] catchError
00:11:53.066 [Pipeline] {
00:11:53.078 [Pipeline] wrap
00:11:53.088 [Pipeline] {
00:11:53.095 [Pipeline] stage
00:11:53.097 [Pipeline] { (Prologue)
00:11:53.118 [Pipeline] echo
00:11:53.120 Node: VM-host-WFP1
00:11:53.126 [Pipeline] cleanWs
00:11:53.137 [WS-CLEANUP] Deleting project workspace...
00:11:53.137 [WS-CLEANUP] Deferred wipeout is used...
00:11:53.146 [WS-CLEANUP] done
00:11:53.306 [Pipeline] setCustomBuildProperty
00:11:53.387 [Pipeline] httpRequest
00:11:54.275 [Pipeline] echo
00:11:54.276 Sorcerer 10.211.164.101 is alive
00:11:54.284 [Pipeline] httpRequest
00:11:54.290 HttpMethod: GET
00:11:54.291 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:11:54.293 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:11:54.306 Response Code: HTTP/1.1 200 OK
00:11:54.306 Success: Status code 200 is in the accepted range: 200,404
00:11:54.308 Saving response body to /var/jenkins/workspace/iscsi-uring-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:11:56.777 [Pipeline] sh
00:11:57.348 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:11:57.669 [Pipeline] httpRequest
00:11:57.699 [Pipeline] echo
00:11:57.700 Sorcerer 10.211.164.101 is alive
00:11:57.708 [Pipeline] httpRequest
00:11:57.715 HttpMethod: GET
00:11:57.715 URL: http://10.211.164.101/packages/spdk_62a72093c08fd8c16f60a79961fc65ceca1d8765.tar.gz
00:11:57.716 Sending request to url: http://10.211.164.101/packages/spdk_62a72093c08fd8c16f60a79961fc65ceca1d8765.tar.gz
00:11:57.719 Response Code: HTTP/1.1 200 OK
00:11:57.720 Success: Status code 200 is in the accepted range: 200,404
00:11:57.720 Saving response body to /var/jenkins/workspace/iscsi-uring-vg-autotest/spdk_62a72093c08fd8c16f60a79961fc65ceca1d8765.tar.gz
00:12:12.439 [Pipeline] sh
00:12:12.740 + tar --no-same-owner -xf spdk_62a72093c08fd8c16f60a79961fc65ceca1d8765.tar.gz
00:12:15.369 [Pipeline] sh
00:12:15.697 + git -C spdk log --oneline -n5
00:12:15.697 62a72093c bdev: Add bdev_enable_histogram filter
00:12:15.697 719d03c6a sock/uring: only register net impl if supported
00:12:15.697 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:12:15.698 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:12:15.698 6c7c1f57e accel: add sequence outstanding stat
00:12:15.735 [Pipeline] writeFile
00:12:15.752 [Pipeline] sh
00:12:16.088 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:12:16.110 [Pipeline] sh
00:12:16.403 + cat autorun-spdk.conf
00:12:16.403 SPDK_RUN_FUNCTIONAL_TEST=1
00:12:16.403 SPDK_TEST_ISCSI=1
00:12:16.403 SPDK_TEST_URING=1
00:12:16.403 SPDK_RUN_UBSAN=1
00:12:16.403 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:16.409 RUN_NIGHTLY=0
00:12:16.412 [Pipeline] }
00:12:16.429 [Pipeline] // stage
00:12:16.449 [Pipeline] stage
00:12:16.452 [Pipeline] { (Run VM)
00:12:16.469 [Pipeline] sh
00:12:16.776 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:12:16.776 + echo 'Start stage prepare_nvme.sh'
00:12:16.776 Start stage prepare_nvme.sh
00:12:16.776 + [[ -n 0 ]]
00:12:16.776 + disk_prefix=ex0
00:12:16.776 + [[ -n /var/jenkins/workspace/iscsi-uring-vg-autotest ]]
00:12:16.776 + [[ -e /var/jenkins/workspace/iscsi-uring-vg-autotest/autorun-spdk.conf ]]
00:12:16.776 + source /var/jenkins/workspace/iscsi-uring-vg-autotest/autorun-spdk.conf
00:12:16.776 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:12:16.776 ++ SPDK_TEST_ISCSI=1
00:12:16.776 ++ SPDK_TEST_URING=1
00:12:16.776 ++ SPDK_RUN_UBSAN=1
00:12:16.776 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:16.776 ++ RUN_NIGHTLY=0
00:12:16.776 + cd /var/jenkins/workspace/iscsi-uring-vg-autotest
00:12:16.776 + nvme_files=()
00:12:16.776 + declare -A nvme_files
00:12:16.776 + backend_dir=/var/lib/libvirt/images/backends
00:12:16.776 + nvme_files['nvme.img']=5G
00:12:16.776 + nvme_files['nvme-cmb.img']=5G
00:12:16.776 + nvme_files['nvme-multi0.img']=4G
00:12:16.776 + nvme_files['nvme-multi1.img']=4G
00:12:16.776 + nvme_files['nvme-multi2.img']=4G
00:12:16.776 + nvme_files['nvme-openstack.img']=8G
00:12:16.776 + nvme_files['nvme-zns.img']=5G
00:12:16.776 + (( SPDK_TEST_NVME_PMR == 1 ))
00:12:16.776 + (( SPDK_TEST_FTL == 1 ))
00:12:16.776 + (( SPDK_TEST_NVME_FDP == 1 ))
00:12:16.776 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:12:16.776 + for nvme in "${!nvme_files[@]}"
00:12:16.776 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:12:16.776 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:12:16.776 + for nvme in "${!nvme_files[@]}"
00:12:16.776 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:12:16.776 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:12:16.776 + for nvme in "${!nvme_files[@]}"
00:12:16.776 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:12:16.776 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:12:16.776 + for nvme in "${!nvme_files[@]}"
00:12:16.776 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:12:16.776 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:12:16.776 + for nvme in "${!nvme_files[@]}"
00:12:16.776 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:12:17.035 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:12:17.035 + for nvme in "${!nvme_files[@]}"
00:12:17.035 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:12:17.035 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:12:17.035 + for nvme in "${!nvme_files[@]}"
00:12:17.035 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:12:17.035 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:12:17.035 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:12:17.035 + for v in $(sudo grep -rl ${disk_prefix}-nvme.img /etc/libvirt/qemu)
00:12:17.035 ++ basename -s .xml /etc/libvirt/qemu/rocky9-9.0-1711172311-2200_default_1720697054_26aa859becfd5443b55b.xml
00:12:17.035 + domain_name=rocky9-9.0-1711172311-2200_default_1720697054_26aa859becfd5443b55b
00:12:17.035 ++ sudo virsh list --name --state-shutoff
00:12:17.035 + [[ centos7-7.8.2003-1711172311-2200_default_1720698439_8be70b3ae55b8fb2f04d
00:12:17.035 centos7_vm_3
00:12:17.035 centos7_vm_4
00:12:17.035 fedora38-38-1.6-1705279005-2131_default_1715162949_296c151262d07652d638
00:12:17.035 fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720697561_04e2a095208931077d11
00:12:17.035 fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720698440_9db3f812bce092bde37b
00:12:17.035 fedora_1G_vm_3
00:12:17.035 fedora_1G_vm_4
00:12:17.035 fedora_fedora38-38-1.6-1701806725-069-updated-1701632595-patched-kernel
00:12:17.035 fedora_vm_10
00:12:17.035 fedora_vm_19
00:12:17.035 fedora_vm_20
00:12:17.035 fedora_vm_21
00:12:17.035 fedora_vm_5
00:12:17.035 fedora_vm_6
00:12:17.035 fedora_vm_7
00:12:17.035 fedora_vm_8
00:12:17.035 fedora_vm_9
00:12:17.035 fedora_vm_tcp_test_1
00:12:17.035 fedora_vm_tcp_test_2
00:12:17.035 fedora_vm_tcp_test_3
00:12:17.036 freebsd_vm_3
00:12:17.036 freebsd_vm_4
00:12:17.036 rocky9-9.0-1711172311-2200_default_1720697054_26aa859becfd5443b55b
00:12:17.036 rocky9-9.0-1711172311-2200_default_1720697528_8ee4f9b5433359dca41b
00:12:17.036 ubuntu16_04_vm_3
00:12:17.036 ubuntu16_04_vm_4
00:12:17.036 ubuntu17_10_vm_3
00:12:17.036 ubuntu17_10_vm_4
00:12:17.036 ubuntu18_04_vm_1
00:12:17.036 ubuntu18_04_vm_2
00:12:17.036 ubuntu2204-22.04-1711172311-2200_default_1720698418_62f512f45f485928b5f4 != *rocky9-9.0-1711172311-2200_default_1720697054_26aa859becfd5443b55b* ]]
00:12:17.036 + sudo virsh undefine --remove-all-storage rocky9-9.0-1711172311-2200_default_1720697054_26aa859becfd5443b55b
00:12:17.036 Domain rocky9-9.0-1711172311-2200_default_1720697054_26aa859becfd5443b55b has been undefined
00:12:17.036 Volume 'vda'(/var/lib/libvirt/images/rocky9-9.0-1711172311-2200_default_1720697054_26aa859becfd5443b55b.img) removed.
00:12:17.036
00:12:17.036 + echo 'End stage prepare_nvme.sh'
00:12:17.036 End stage prepare_nvme.sh
00:12:17.048 [Pipeline] sh
00:12:17.333 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:12:17.333 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38
00:12:17.594
00:12:17.594 DIR=/var/jenkins/workspace/iscsi-uring-vg-autotest/spdk/scripts/vagrant
00:12:17.594 SPDK_DIR=/var/jenkins/workspace/iscsi-uring-vg-autotest/spdk
00:12:17.594 VAGRANT_TARGET=/var/jenkins/workspace/iscsi-uring-vg-autotest
00:12:17.594 HELP=0
00:12:17.594 DRY_RUN=0
00:12:17.594 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,
00:12:17.594 NVME_DISKS_TYPE=nvme,nvme,
00:12:17.594 NVME_AUTO_CREATE=0
00:12:17.594 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,
00:12:17.594 NVME_CMB=,,
00:12:17.594 NVME_PMR=,,
00:12:17.594 NVME_ZNS=,,
00:12:17.594 NVME_MS=,,
00:12:17.594 NVME_FDP=,,
00:12:17.594 SPDK_VAGRANT_DISTRO=fedora38
00:12:17.594 SPDK_VAGRANT_VMCPU=10
00:12:17.594 SPDK_VAGRANT_VMRAM=12288
00:12:17.594 SPDK_VAGRANT_PROVIDER=libvirt
00:12:17.594 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:12:17.594 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:12:17.594 SPDK_OPENSTACK_NETWORK=0
00:12:17.594 VAGRANT_PACKAGE_BOX=0
00:12:17.594 VAGRANTFILE=/var/jenkins/workspace/iscsi-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:12:17.594 FORCE_DISTRO=true
00:12:17.594 VAGRANT_BOX_VERSION=
00:12:17.594 EXTRA_VAGRANTFILES=
00:12:17.594 NIC_MODEL=e1000
00:12:17.594
00:12:17.594 mkdir: created directory '/var/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt'
00:12:17.594 /var/jenkins/workspace/iscsi-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/iscsi-uring-vg-autotest
00:12:21.799 Bringing machine 'default' up with 'libvirt' provider...
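To make the prepare_nvme.sh trace above easier to follow, here is a rough sketch of the backing-image loop it is executing. Only the names echoed in the trace (nvme_files, backend_dir, disk_prefix) and the -n/-s options of create_nvme_img.sh come from the log; the surrounding structure is an assumption, since the script itself is not reproduced in this output.

#!/usr/bin/env bash
# Sketch of the image-preparation loop seen in the xtrace above; not the
# actual prepare_nvme.sh. Sizes and file names are copied from the log.
declare -A nvme_files
nvme_files['nvme.img']=5G
nvme_files['nvme-cmb.img']=5G
nvme_files['nvme-multi0.img']=4G
nvme_files['nvme-multi1.img']=4G
nvme_files['nvme-multi2.img']=4G
nvme_files['nvme-openstack.img']=8G
nvme_files['nvme-zns.img']=5G

disk_prefix=ex0
backend_dir=/var/lib/libvirt/images/backends

for nvme in "${!nvme_files[@]}"; do
    # Each backing file becomes a raw image of the requested size.
    sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
        -n "${backend_dir}/${disk_prefix}-${nvme}" \
        -s "${nvme_files[$nvme]}"
done

After the loop, the trace greps /etc/libvirt/qemu for any stale domain still referencing ex0-nvme.img and, if the domain is shut off, removes it with "virsh undefine --remove-all-storage" so the backing files can be reused by the new VM.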
00:12:22.740 ==> default: Creating image (snapshot of base box volume).
00:12:23.005 ==> default: Creating domain with the following settings...
00:12:23.005 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721036324_b8275bd9348e24c8f7d0
00:12:23.005 ==> default: -- Domain type: kvm
00:12:23.005 ==> default: -- Cpus: 10
00:12:23.005 ==> default: -- Feature: acpi
00:12:23.005 ==> default: -- Feature: apic
00:12:23.005 ==> default: -- Feature: pae
00:12:23.005 ==> default: -- Memory: 12288M
00:12:23.005 ==> default: -- Memory Backing: hugepages:
00:12:23.005 ==> default: -- Management MAC:
00:12:23.005 ==> default: -- Loader:
00:12:23.005 ==> default: -- Nvram:
00:12:23.005 ==> default: -- Base box: spdk/fedora38
00:12:23.005 ==> default: -- Storage pool: default
00:12:23.005 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721036324_b8275bd9348e24c8f7d0.img (20G)
00:12:23.005 ==> default: -- Volume Cache: default
00:12:23.005 ==> default: -- Kernel:
00:12:23.005 ==> default: -- Initrd:
00:12:23.005 ==> default: -- Graphics Type: vnc
00:12:23.005 ==> default: -- Graphics Port: -1
00:12:23.005 ==> default: -- Graphics IP: 127.0.0.1
00:12:23.005 ==> default: -- Graphics Password: Not defined
00:12:23.005 ==> default: -- Video Type: cirrus
00:12:23.005 ==> default: -- Video VRAM: 9216
00:12:23.005 ==> default: -- Sound Type:
00:12:23.005 ==> default: -- Keymap: en-us
00:12:23.005 ==> default: -- TPM Path:
00:12:23.005 ==> default: -- INPUT: type=mouse, bus=ps2
00:12:23.005 ==> default: -- Command line args:
00:12:23.005 ==> default: -> value=-device,
00:12:23.005 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:12:23.005 ==> default: -> value=-drive,
00:12:23.005 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:12:23.005 ==> default: -> value=-device,
00:12:23.005 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:23.005 ==> default: -> value=-device,
00:12:23.005 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:12:23.005 ==> default: -> value=-drive,
00:12:23.005 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:12:23.005 ==> default: -> value=-device,
00:12:23.005 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:23.005 ==> default: -> value=-drive,
00:12:23.005 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:12:23.005 ==> default: -> value=-device,
00:12:23.005 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:23.005 ==> default: -> value=-drive,
00:12:23.005 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:12:23.005 ==> default: -> value=-device,
00:12:23.005 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:23.274 ==> default: Creating shared folders metadata...
00:12:23.849 Error while activating network: Call to virNetworkCreate failed: Requested operation is not valid: network is already active.
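The line above is where this run actually fails: libvirt rejects virNetworkCreate because the network that vagrant-libvirt asked it to start is already active, which can happen when several vagrant jobs share one libvirt host. The log does not name the affected network, so the network name below is only an assumption ("vagrant-libvirt" is the usual name of the plugin's management network); the virsh commands are standard and are shown purely as a triage sketch, not as part of the job.

# Triage sketch for the virNetworkCreate failure; run on the libvirt host.
sudo virsh net-list --all                # list defined networks and whether each is active
sudo virsh net-dumpxml vagrant-libvirt   # inspect the suspected network (name assumed)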
00:12:23.874 [Pipeline] }
00:12:23.898 [Pipeline] // stage
00:12:23.906 [Pipeline] }
00:12:23.927 [Pipeline] // wrap
00:12:23.938 [Pipeline] }
00:12:23.942 ERROR: script returned exit code 1
00:12:23.942 Setting overall build result to FAILURE
00:12:23.958 [Pipeline] // catchError
00:12:23.968 [Pipeline] stage
00:12:23.971 [Pipeline] { (Epilogue)
00:12:23.986 [Pipeline] sh
00:12:24.279 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:12:24.279 jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh: line 9: cd: /var/jenkins/workspace/iscsi-uring-vg-autotest/output: No such file or directory
00:12:24.293 [Pipeline] }
00:12:24.314 [Pipeline] // stage
00:12:24.322 [Pipeline] }
00:12:24.342 [Pipeline] // node
00:12:24.356 [Pipeline] End of Pipeline
00:12:24.379 ERROR: script returned exit code 1
00:12:24.383 Finished: FAILURE
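A secondary error appears in the epilogue: compress_artifacts.sh fails on a bare cd because the output directory was never created (the VM stage aborted before any tests produced artifacts). Purely as an illustration of a defensive pattern, and not as the actual contents of compress_artifacts.sh, such an epilogue step could guard the cd like this:

# Hypothetical guard; the real compress_artifacts.sh is not shown in this log.
output_dir=/var/jenkins/workspace/iscsi-uring-vg-autotest/output
if ! cd "$output_dir" 2>/dev/null; then
    echo "No artifacts to compress in $output_dir" >&2
    exit 0   # nothing to do when an earlier stage aborted before producing output
fi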