00:00:00.001 Started by upstream project "autotest-per-patch" build number 124201 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.121 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.121 The recommended git tool is: git 00:00:00.122 using credential 00000000-0000-0000-0000-000000000002 00:00:00.123 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.159 Fetching changes from the remote Git repository 00:00:00.162 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.193 Using shallow fetch with depth 1 00:00:00.193 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.193 > git --version # timeout=10 00:00:00.222 > git --version # 'git version 2.39.2' 00:00:00.222 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.247 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.247 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.712 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.724 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.734 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:05.734 > git config core.sparsecheckout # timeout=10 00:00:05.746 > git read-tree -mu HEAD # timeout=10 00:00:05.763 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:05.783 Commit message: "pool: fixes for VisualBuild class" 00:00:05.784 > git rev-list --no-walk 
9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:05.875 [Pipeline] Start of Pipeline 00:00:05.889 [Pipeline] library 00:00:05.891 Loading library shm_lib@master 00:00:05.891 Library shm_lib@master is cached. Copying from home. 00:00:05.906 [Pipeline] node 00:00:05.927 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:05.930 [Pipeline] { 00:00:05.943 [Pipeline] catchError 00:00:05.945 [Pipeline] { 00:00:05.960 [Pipeline] wrap 00:00:05.970 [Pipeline] { 00:00:05.978 [Pipeline] stage 00:00:05.980 [Pipeline] { (Prologue) 00:00:06.001 [Pipeline] echo 00:00:06.002 Node: VM-host-SM9 00:00:06.009 [Pipeline] cleanWs 00:00:06.019 [WS-CLEANUP] Deleting project workspace... 00:00:06.019 [WS-CLEANUP] Deferred wipeout is used... 00:00:06.025 [WS-CLEANUP] done 00:00:06.222 [Pipeline] setCustomBuildProperty 00:00:06.291 [Pipeline] nodesByLabel 00:00:06.293 Found a total of 2 nodes with the 'sorcerer' label 00:00:06.304 [Pipeline] httpRequest 00:00:06.309 HttpMethod: GET 00:00:06.310 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:06.319 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:06.336 Response Code: HTTP/1.1 200 OK 00:00:06.337 Success: Status code 200 is in the accepted range: 200,404 00:00:06.337 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:23.573 [Pipeline] sh 00:00:23.920 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:23.939 [Pipeline] httpRequest 00:00:23.943 HttpMethod: GET 00:00:23.944 URL: http://10.211.164.101/packages/spdk_0a5aebcde18f5ee4c9dba0f68189ed0c7ac9f3cf.tar.gz 00:00:23.945 Sending request to url: http://10.211.164.101/packages/spdk_0a5aebcde18f5ee4c9dba0f68189ed0c7ac9f3cf.tar.gz 00:00:23.960 Response Code: HTTP/1.1 200 OK 00:00:23.960 Success: Status code 200 is in the accepted range: 
200,404 00:00:23.961 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_0a5aebcde18f5ee4c9dba0f68189ed0c7ac9f3cf.tar.gz 00:01:07.694 [Pipeline] sh 00:01:07.974 + tar --no-same-owner -xf spdk_0a5aebcde18f5ee4c9dba0f68189ed0c7ac9f3cf.tar.gz 00:01:11.310 [Pipeline] sh 00:01:11.585 + git -C spdk log --oneline -n5 00:01:11.586 0a5aebcde go/rpc: Initial implementation of rpc call generator 00:01:11.586 8b1e208cc python/rpc: Python rpc docs generator. 00:01:11.586 98215362c python/rpc: Replace jsonrpc.md with generated docs 00:01:11.586 43217a125 python/rpc: Python rpc call generator. 00:01:11.586 902020273 python/rpc: Replace bdev.py with generated rpc's 00:01:11.604 [Pipeline] writeFile 00:01:11.620 [Pipeline] sh 00:01:11.900 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:11.912 [Pipeline] sh 00:01:12.193 + cat autorun-spdk.conf 00:01:12.193 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.193 SPDK_TEST_NVMF=1 00:01:12.193 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.193 SPDK_TEST_USDT=1 00:01:12.193 SPDK_TEST_NVMF_MDNS=1 00:01:12.193 SPDK_RUN_UBSAN=1 00:01:12.193 NET_TYPE=virt 00:01:12.193 SPDK_JSONRPC_GO_CLIENT=1 00:01:12.193 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:12.201 RUN_NIGHTLY=0 00:01:12.203 [Pipeline] } 00:01:12.221 [Pipeline] // stage 00:01:12.236 [Pipeline] stage 00:01:12.238 [Pipeline] { (Run VM) 00:01:12.255 [Pipeline] sh 00:01:12.540 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:12.540 + echo 'Start stage prepare_nvme.sh' 00:01:12.540 Start stage prepare_nvme.sh 00:01:12.540 + [[ -n 3 ]] 00:01:12.540 + disk_prefix=ex3 00:01:12.540 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:01:12.540 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:01:12.540 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:01:12.540 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.540 ++ SPDK_TEST_NVMF=1 00:01:12.540 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.540 ++ SPDK_TEST_USDT=1 
00:01:12.540 ++ SPDK_TEST_NVMF_MDNS=1 00:01:12.540 ++ SPDK_RUN_UBSAN=1 00:01:12.540 ++ NET_TYPE=virt 00:01:12.540 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:12.540 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:12.540 ++ RUN_NIGHTLY=0 00:01:12.540 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:12.540 + nvme_files=() 00:01:12.540 + declare -A nvme_files 00:01:12.540 + backend_dir=/var/lib/libvirt/images/backends 00:01:12.540 + nvme_files['nvme.img']=5G 00:01:12.540 + nvme_files['nvme-cmb.img']=5G 00:01:12.540 + nvme_files['nvme-multi0.img']=4G 00:01:12.540 + nvme_files['nvme-multi1.img']=4G 00:01:12.540 + nvme_files['nvme-multi2.img']=4G 00:01:12.540 + nvme_files['nvme-openstack.img']=8G 00:01:12.540 + nvme_files['nvme-zns.img']=5G 00:01:12.540 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:12.540 + (( SPDK_TEST_FTL == 1 )) 00:01:12.540 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:12.540 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:12.540 + for nvme in "${!nvme_files[@]}" 00:01:12.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:12.540 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.540 + for nvme in "${!nvme_files[@]}" 00:01:12.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:12.808 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.808 + for nvme in "${!nvme_files[@]}" 00:01:12.808 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:12.808 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:12.808 + for nvme in "${!nvme_files[@]}" 00:01:12.808 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 
00:01:12.808 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.808 + for nvme in "${!nvme_files[@]}" 00:01:12.808 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:13.068 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:13.068 + for nvme in "${!nvme_files[@]}" 00:01:13.068 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:13.327 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:13.327 + for nvme in "${!nvme_files[@]}" 00:01:13.327 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:13.586 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:13.586 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:13.586 + echo 'End stage prepare_nvme.sh' 00:01:13.586 End stage prepare_nvme.sh 00:01:13.597 [Pipeline] sh 00:01:13.873 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:13.873 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38 00:01:13.873 00:01:13.873 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:01:13.873 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:01:13.873 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:13.873 HELP=0 00:01:13.873 DRY_RUN=0 00:01:13.873 
NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:01:13.873 NVME_DISKS_TYPE=nvme,nvme, 00:01:13.873 NVME_AUTO_CREATE=0 00:01:13.873 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:01:13.873 NVME_CMB=,, 00:01:13.873 NVME_PMR=,, 00:01:13.873 NVME_ZNS=,, 00:01:13.873 NVME_MS=,, 00:01:13.873 NVME_FDP=,, 00:01:13.873 SPDK_VAGRANT_DISTRO=fedora38 00:01:13.873 SPDK_VAGRANT_VMCPU=10 00:01:13.873 SPDK_VAGRANT_VMRAM=12288 00:01:13.873 SPDK_VAGRANT_PROVIDER=libvirt 00:01:13.873 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:13.873 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:13.873 SPDK_OPENSTACK_NETWORK=0 00:01:13.873 VAGRANT_PACKAGE_BOX=0 00:01:13.873 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:13.873 FORCE_DISTRO=true 00:01:13.873 VAGRANT_BOX_VERSION= 00:01:13.873 EXTRA_VAGRANTFILES= 00:01:13.873 NIC_MODEL=e1000 00:01:13.873 00:01:13.873 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:01:13.873 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:17.157 Bringing machine 'default' up with 'libvirt' provider... 00:01:17.724 ==> default: Creating image (snapshot of base box volume). 00:01:17.724 ==> default: Creating domain with the following settings... 
00:01:17.724 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1718013007_6ea606d3adb1e66e1324 00:01:17.724 ==> default: -- Domain type: kvm 00:01:17.724 ==> default: -- Cpus: 10 00:01:17.724 ==> default: -- Feature: acpi 00:01:17.724 ==> default: -- Feature: apic 00:01:17.724 ==> default: -- Feature: pae 00:01:17.724 ==> default: -- Memory: 12288M 00:01:17.724 ==> default: -- Memory Backing: hugepages: 00:01:17.724 ==> default: -- Management MAC: 00:01:17.724 ==> default: -- Loader: 00:01:17.724 ==> default: -- Nvram: 00:01:17.724 ==> default: -- Base box: spdk/fedora38 00:01:17.724 ==> default: -- Storage pool: default 00:01:17.724 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1718013007_6ea606d3adb1e66e1324.img (20G) 00:01:17.724 ==> default: -- Volume Cache: default 00:01:17.724 ==> default: -- Kernel: 00:01:17.724 ==> default: -- Initrd: 00:01:17.724 ==> default: -- Graphics Type: vnc 00:01:17.724 ==> default: -- Graphics Port: -1 00:01:17.724 ==> default: -- Graphics IP: 127.0.0.1 00:01:17.724 ==> default: -- Graphics Password: Not defined 00:01:17.724 ==> default: -- Video Type: cirrus 00:01:17.724 ==> default: -- Video VRAM: 9216 00:01:17.724 ==> default: -- Sound Type: 00:01:17.724 ==> default: -- Keymap: en-us 00:01:17.724 ==> default: -- TPM Path: 00:01:17.724 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:17.724 ==> default: -- Command line args: 00:01:17.724 ==> default: -> value=-device, 00:01:17.724 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:17.724 ==> default: -> value=-drive, 00:01:17.724 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:17.724 ==> default: -> value=-device, 00:01:17.724 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:17.724 ==> default: -> value=-device, 00:01:17.724 
==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:17.724 ==> default: -> value=-drive, 00:01:17.724 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:17.724 ==> default: -> value=-device, 00:01:17.724 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:17.724 ==> default: -> value=-drive, 00:01:17.724 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:17.724 ==> default: -> value=-device, 00:01:17.724 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:17.724 ==> default: -> value=-drive, 00:01:17.724 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:17.724 ==> default: -> value=-device, 00:01:17.724 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:17.994 ==> default: Creating shared folders metadata... 00:01:17.994 ==> default: Starting domain. 00:01:19.385 ==> default: Waiting for domain to get an IP address... 00:01:37.482 ==> default: Waiting for SSH to become available... 00:01:37.482 ==> default: Configuring and enabling network interfaces... 00:01:40.038 default: SSH address: 192.168.121.210:22 00:01:40.038 default: SSH username: vagrant 00:01:40.038 default: SSH auth method: private key 00:01:41.941 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:50.055 ==> default: Mounting SSHFS shared folder... 
00:01:50.993 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:50.993 ==> default: Checking Mount.. 00:01:52.382 ==> default: Folder Successfully Mounted! 00:01:52.382 ==> default: Running provisioner: file... 00:01:53.317 default: ~/.gitconfig => .gitconfig 00:01:53.576 00:01:53.576 SUCCESS! 00:01:53.576 00:01:53.576 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:53.576 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:53.576 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:01:53.576 00:01:53.585 [Pipeline] } 00:01:53.605 [Pipeline] // stage 00:01:53.615 [Pipeline] dir 00:01:53.616 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:01:53.618 [Pipeline] { 00:01:53.633 [Pipeline] catchError 00:01:53.635 [Pipeline] { 00:01:53.650 [Pipeline] sh 00:01:53.929 + vagrant ssh-config --host vagrant 00:01:53.929 + sed -ne /^Host/,$p 00:01:53.929 + tee ssh_conf 00:01:58.120 Host vagrant 00:01:58.120 HostName 192.168.121.210 00:01:58.120 User vagrant 00:01:58.120 Port 22 00:01:58.120 UserKnownHostsFile /dev/null 00:01:58.120 StrictHostKeyChecking no 00:01:58.120 PasswordAuthentication no 00:01:58.120 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:58.120 IdentitiesOnly yes 00:01:58.120 LogLevel FATAL 00:01:58.120 ForwardAgent yes 00:01:58.120 ForwardX11 yes 00:01:58.120 00:01:58.136 [Pipeline] withEnv 00:01:58.138 [Pipeline] { 00:01:58.155 [Pipeline] sh 00:01:58.435 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:58.435 source /etc/os-release 00:01:58.435 [[ -e /image.version ]] && img=$(< /image.version) 00:01:58.435 # Minimal, systemd-like check. 
00:01:58.435 if [[ -e /.dockerenv ]]; then 00:01:58.435 # Clear garbage from the node's name: 00:01:58.435 # agt-er_autotest_547-896 -> autotest_547-896 00:01:58.435 # $HOSTNAME is the actual container id 00:01:58.435 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:58.435 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:58.435 # We can assume this is a mount from a host where container is running, 00:01:58.435 # so fetch its hostname to easily identify the target swarm worker. 00:01:58.435 container="$(< /etc/hostname) ($agent)" 00:01:58.435 else 00:01:58.435 # Fallback 00:01:58.435 container=$agent 00:01:58.435 fi 00:01:58.435 fi 00:01:58.435 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:58.435 00:01:58.705 [Pipeline] } 00:01:58.726 [Pipeline] // withEnv 00:01:58.735 [Pipeline] setCustomBuildProperty 00:01:58.751 [Pipeline] stage 00:01:58.754 [Pipeline] { (Tests) 00:01:58.774 [Pipeline] sh 00:01:59.055 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:59.327 [Pipeline] sh 00:01:59.607 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:59.882 [Pipeline] timeout 00:01:59.882 Timeout set to expire in 40 min 00:01:59.884 [Pipeline] { 00:01:59.902 [Pipeline] sh 00:02:00.182 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:00.750 HEAD is now at 0a5aebcde go/rpc: Initial implementation of rpc call generator 00:02:00.762 [Pipeline] sh 00:02:01.042 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:01.314 [Pipeline] sh 00:02:01.593 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:01.611 [Pipeline] sh 00:02:01.891 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:01.891 ++ readlink -f spdk_repo 00:02:01.891 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:01.891 + [[ -n /home/vagrant/spdk_repo ]] 00:02:01.891 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:01.891 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:01.891 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:01.891 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:02:01.891 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:01.891 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:01.891 + cd /home/vagrant/spdk_repo 00:02:01.891 + source /etc/os-release 00:02:01.891 ++ NAME='Fedora Linux' 00:02:01.891 ++ VERSION='38 (Cloud Edition)' 00:02:01.891 ++ ID=fedora 00:02:01.891 ++ VERSION_ID=38 00:02:01.891 ++ VERSION_CODENAME= 00:02:01.891 ++ PLATFORM_ID=platform:f38 00:02:01.891 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:01.891 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:01.891 ++ LOGO=fedora-logo-icon 00:02:01.891 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:01.891 ++ HOME_URL=https://fedoraproject.org/ 00:02:01.891 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:01.891 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:01.891 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:01.891 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:01.891 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:01.891 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:01.891 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:01.891 ++ SUPPORT_END=2024-05-14 00:02:01.891 ++ VARIANT='Cloud Edition' 00:02:01.891 ++ VARIANT_ID=cloud 00:02:01.891 + uname -a 00:02:02.149 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:02.149 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:02.408 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:02.408 
Hugepages 00:02:02.408 node hugesize free / total 00:02:02.408 node0 1048576kB 0 / 0 00:02:02.408 node0 2048kB 0 / 0 00:02:02.408 00:02:02.408 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:02.408 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:02.408 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:02.667 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:02.667 + rm -f /tmp/spdk-ld-path 00:02:02.667 + source autorun-spdk.conf 00:02:02.667 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.667 ++ SPDK_TEST_NVMF=1 00:02:02.667 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.667 ++ SPDK_TEST_USDT=1 00:02:02.667 ++ SPDK_TEST_NVMF_MDNS=1 00:02:02.667 ++ SPDK_RUN_UBSAN=1 00:02:02.667 ++ NET_TYPE=virt 00:02:02.667 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:02.667 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:02.667 ++ RUN_NIGHTLY=0 00:02:02.667 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:02.667 + [[ -n '' ]] 00:02:02.667 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:02.667 + for M in /var/spdk/build-*-manifest.txt 00:02:02.667 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:02.667 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:02.667 + for M in /var/spdk/build-*-manifest.txt 00:02:02.667 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:02.667 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:02.667 ++ uname 00:02:02.667 + [[ Linux == \L\i\n\u\x ]] 00:02:02.667 + sudo dmesg -T 00:02:02.667 + sudo dmesg --clear 00:02:02.667 + dmesg_pid=5149 00:02:02.667 + [[ Fedora Linux == FreeBSD ]] 00:02:02.667 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.667 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.667 + sudo dmesg -Tw 00:02:02.667 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:02.667 + [[ -x /usr/src/fio-static/fio ]] 00:02:02.667 + export FIO_BIN=/usr/src/fio-static/fio 00:02:02.667 + 
FIO_BIN=/usr/src/fio-static/fio 00:02:02.667 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:02.667 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:02.667 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:02.667 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.667 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.667 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:02.667 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.667 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.667 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:02.667 Test configuration: 00:02:02.667 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.667 SPDK_TEST_NVMF=1 00:02:02.667 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.667 SPDK_TEST_USDT=1 00:02:02.667 SPDK_TEST_NVMF_MDNS=1 00:02:02.667 SPDK_RUN_UBSAN=1 00:02:02.667 NET_TYPE=virt 00:02:02.667 SPDK_JSONRPC_GO_CLIENT=1 00:02:02.667 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:02.667 RUN_NIGHTLY=0 09:50:52 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:02.667 09:50:52 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:02.667 09:50:52 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:02.667 09:50:52 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:02.667 09:50:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.667 09:50:52 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.667 09:50:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.667 09:50:52 -- paths/export.sh@5 -- $ export PATH 00:02:02.667 09:50:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:02.667 09:50:52 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:02.667 09:50:52 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:02.667 09:50:52 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718013052.XXXXXX 00:02:02.667 09:50:52 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718013052.aaL6l0 00:02:02.667 09:50:52 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:02.667 09:50:52 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:02:02.667 09:50:52 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:02.667 09:50:52 -- common/autobuild_common.sh@450 
-- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:02.667 09:50:52 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:02.667 09:50:52 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:02.667 09:50:52 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:02.667 09:50:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.667 09:50:52 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:02:02.667 09:50:52 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:02.667 09:50:52 -- pm/common@17 -- $ local monitor 00:02:02.667 09:50:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.667 09:50:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:02.667 09:50:52 -- pm/common@25 -- $ sleep 1 00:02:02.667 09:50:52 -- pm/common@21 -- $ date +%s 00:02:02.667 09:50:52 -- pm/common@21 -- $ date +%s 00:02:02.667 09:50:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1718013052 00:02:02.926 09:50:52 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1718013052 00:02:02.926 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1718013052_collect-cpu-load.pm.log 00:02:02.926 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1718013052_collect-vmstat.pm.log 00:02:02.926 Traceback (most recent call last): 
00:02:02.926 File "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", line 24, in <module> 00:02:02.926 import spdk.rpc as rpc # noqa 00:02:02.926 ^^^^^^^^^^^^^^^^^^^^^^ 00:02:02.926 File "/home/vagrant/spdk_repo/spdk/python/spdk/rpc/__init__.py", line 13, in <module> 00:02:02.926 from . import bdev 00:02:02.926 File "/home/vagrant/spdk_repo/spdk/python/spdk/rpc/bdev.py", line 6, in <module> 00:02:02.926 from spdk.rpc.rpc import * 00:02:02.926 ModuleNotFoundError: No module named 'spdk.rpc.rpc' 00:02:03.861 09:50:53 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:03.861 09:50:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:03.861 09:50:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:03.861 09:50:53 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:03.861 09:50:53 -- spdk/autobuild.sh@16 -- $ date -u 00:02:03.861 Mon Jun 10 09:50:53 AM UTC 2024 00:02:03.861 09:50:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:03.861 v24.09-pre-63-g0a5aebcde 00:02:03.861 09:50:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:03.861 09:50:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:03.861 09:50:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:03.862 09:50:53 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:02:03.862 09:50:53 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:02:03.862 09:50:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.862 ************************************ 00:02:03.862 START TEST ubsan 00:02:03.862 ************************************ 00:02:03.862 using ubsan 00:02:03.862 09:50:53 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:02:03.862 00:02:03.862 real 0m0.000s 00:02:03.862 user 0m0.000s 00:02:03.862 sys 0m0.000s 00:02:03.862 09:50:53 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:03.862 09:50:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:03.862 ************************************ 00:02:03.862 END TEST ubsan 
00:02:03.862 ************************************ 00:02:03.862 09:50:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:03.862 09:50:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:03.862 09:50:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:03.862 09:50:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:03.862 09:50:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:03.862 09:50:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:03.862 09:50:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:03.862 09:50:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:03.862 09:50:53 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:02:03.862 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:03.862 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:04.427 Using 'verbs' RDMA provider 00:02:20.233 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:30.230 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:30.486 go version go1.21.1 linux/amd64 00:02:30.744 Creating mk/config.mk...done. 00:02:30.744 Creating mk/cc.flags.mk...done. 00:02:30.744 Type 'make' to build. 
00:02:30.744 09:51:20 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:30.744 09:51:20 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:02:30.744 09:51:20 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:02:30.744 09:51:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:30.744 ************************************
00:02:30.744 START TEST make
00:02:30.744 ************************************
00:02:30.744 09:51:20 make -- common/autotest_common.sh@1124 -- $ make -j10
00:02:31.001 go: downloading golang.org/x/text v0.14.0
00:02:43.194 2024/06/10 09:51:30 error when reading a file at path: /home/vagrant/spdk_repo/spdk/schema/rpc.json, err: open /home/vagrant/spdk_repo/spdk/schema/rpc.json: no such file or directory
00:02:43.194 make[1]: *** [Makefile:27: structs] Error 1
00:02:43.194 make: *** [/home/vagrant/spdk_repo/spdk/mk/spdk.subdirs.mk:16: go/rpc] Error 2
00:02:43.194 make: *** Waiting for unfinished jobs....
00:03:05.112 09:51:52 make -- common/autotest_common.sh@1124 -- $ trap - ERR
00:03:05.112 09:51:52 make -- common/autotest_common.sh@1124 -- $ print_backtrace
00:03:05.112 09:51:52 make -- common/autotest_common.sh@1152 -- $ [[ ehxBET =~ e ]]
00:03:05.112 09:51:52 make -- common/autotest_common.sh@1154 -- $ args=('-j10' 'make' 'make' '/home/vagrant/spdk_repo/autorun-spdk.conf')
00:03:05.112 09:51:52 make -- common/autotest_common.sh@1154 -- $ local args
00:03:05.112 09:51:52 make -- common/autotest_common.sh@1156 -- $ xtrace_disable
00:03:05.112 09:51:52 make -- common/autotest_common.sh@10 -- $ set +x
00:03:05.112 ========== Backtrace start: ==========
00:03:05.112 
00:03:05.112 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1124 -> run_test(["make"],["make"],["-j10"])
00:03:05.112 ...
00:03:05.112 1119 timing_enter $test_name
00:03:05.112 1120 echo "************************************"
00:03:05.112 1121 echo "START TEST $test_name"
00:03:05.112 1122 echo "************************************"
00:03:05.112 1123 xtrace_restore
00:03:05.112 1124 time "$@"
00:03:05.112 1125 xtrace_disable
00:03:05.112 1126 echo "************************************"
00:03:05.112 1127 echo "END TEST $test_name"
00:03:05.112 1128 echo "************************************"
00:03:05.112 1129 timing_exit $test_name
00:03:05.112 ...
00:03:05.112 in /home/vagrant/spdk_repo/spdk/autobuild.sh:69 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"])
00:03:05.112 ...
00:03:05.112 64 $rootdir/configure $config_params
00:03:05.112 65 else
00:03:05.112 66 # if we aren't testing the unittests, build with shared objects.
00:03:05.112 67 $rootdir/configure $config_params --with-shared
00:03:05.113 68 fi
00:03:05.113 => 69 run_test "make" $MAKE $MAKEFLAGS
00:03:05.113 70 fi
00:03:05.113 ...
00:03:05.113 
00:03:05.113 ========== Backtrace end ==========
00:03:05.113 09:51:52 make -- common/autotest_common.sh@1193 -- $ return 0
00:03:05.113 
00:03:05.113 real	0m32.002s
00:03:05.113 user	3m35.384s
00:03:05.113 sys	0m27.130s
00:03:05.113 09:51:52 make -- common/autotest_common.sh@1 -- $ stop_monitor_resources
00:03:05.113 09:51:52 make -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:05.113 09:51:52 make -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:05.113 09:51:52 make -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:05.113 09:51:52 make -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:03:05.113 09:51:52 make -- pm/common@44 -- $ pid=5184
00:03:05.113 09:51:52 make -- pm/common@50 -- $ kill -TERM 5184
00:03:05.113 09:51:52 make -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:05.113 09:51:52 make -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:03:05.113 09:51:52 make -- pm/common@44 -- $ pid=5186
00:03:05.113 09:51:52 make -- pm/common@50 -- $ kill -TERM 5186
00:03:05.125 [Pipeline] }
00:03:05.148 [Pipeline] // timeout
00:03:05.156 [Pipeline] }
00:03:05.176 [Pipeline] // stage
00:03:05.184 [Pipeline] }
00:03:05.188 ERROR: script returned exit code 2
00:03:05.188 Setting overall build result to FAILURE
00:03:05.207 [Pipeline] // catchError
00:03:05.216 [Pipeline] stage
00:03:05.218 [Pipeline] { (Stop VM)
00:03:05.232 [Pipeline] sh
00:03:05.512 + vagrant halt
00:03:08.807 ==> default: Halting domain...
00:03:15.378 [Pipeline] sh
00:03:15.654 + vagrant destroy -f
00:03:19.840 ==> default: Removing domain...
00:03:19.851 [Pipeline] sh
00:03:20.127 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output
00:03:20.136 [Pipeline] }
00:03:20.155 [Pipeline] // stage
00:03:20.161 [Pipeline] }
00:03:20.178 [Pipeline] // dir
00:03:20.183 [Pipeline] }
00:03:20.201 [Pipeline] // wrap
00:03:20.207 [Pipeline] }
00:03:20.221 [Pipeline] // catchError
00:03:20.233 [Pipeline] stage
00:03:20.235 [Pipeline] { (Epilogue)
00:03:20.249 [Pipeline] sh
00:03:20.527 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:03:20.537 [Pipeline] catchError
00:03:20.539 [Pipeline] {
00:03:20.552 [Pipeline] sh
00:03:20.888 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:03:20.888 Artifacts sizes are good
00:03:20.897 [Pipeline] }
00:03:20.916 [Pipeline] // catchError
00:03:20.927 [Pipeline] archiveArtifacts
00:03:20.934 Archiving artifacts
00:03:20.969 [Pipeline] cleanWs
00:03:20.979 [WS-CLEANUP] Deleting project workspace...
00:03:20.979 [WS-CLEANUP] Deferred wipeout is used...
00:03:20.985 [WS-CLEANUP] done
00:03:20.986 [Pipeline] }
00:03:21.003 [Pipeline] // stage
00:03:21.009 [Pipeline] }
00:03:21.026 [Pipeline] // node
00:03:21.032 [Pipeline] End of Pipeline
00:03:21.070 Finished: FAILURE