00:00:00.000 Started by upstream project "autotest-per-patch" build number 124205
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:08.378 The recommended git tool is: git
00:00:08.378 using credential 00000000-0000-0000-0000-000000000002
00:00:08.381 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:08.395 Fetching changes from the remote Git repository
00:00:08.397 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:08.407 Using shallow fetch with depth 1
00:00:08.408 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:08.408 > git --version # timeout=10
00:00:08.417 > git --version # 'git version 2.39.2'
00:00:08.417 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:08.428 Setting http proxy: proxy-dmz.intel.com:911
00:00:08.428 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:23.493 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:23.505 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:23.519 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD)
00:00:23.519 > git config core.sparsecheckout # timeout=10
00:00:23.531 > git read-tree -mu HEAD # timeout=10
00:00:23.548 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5
00:00:23.568 Commit message: "pool: fixes for VisualBuild class"
00:00:23.568 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10
00:00:23.661 [Pipeline] Start of Pipeline
00:00:23.679 [Pipeline] library
00:00:23.681 Loading library shm_lib@master
00:00:23.681 Library shm_lib@master is cached. Copying from home.
00:00:23.699 [Pipeline] node
00:00:23.712 Running on VM-host-SM4 in /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:00:23.714 [Pipeline] {
00:00:23.726 [Pipeline] catchError
00:00:23.728 [Pipeline] {
00:00:23.745 [Pipeline] wrap
00:00:23.756 [Pipeline] {
00:00:23.768 [Pipeline] stage
00:00:23.770 [Pipeline] { (Prologue)
00:00:23.806 [Pipeline] echo
00:00:23.808 Node: VM-host-SM4
00:00:23.816 [Pipeline] cleanWs
00:00:23.826 [WS-CLEANUP] Deleting project workspace...
00:00:23.827 [WS-CLEANUP] Deferred wipeout is used...
00:00:23.834 [WS-CLEANUP] done
00:00:24.004 [Pipeline] setCustomBuildProperty
00:00:24.079 [Pipeline] nodesByLabel
00:00:24.081 Found a total of 2 nodes with the 'sorcerer' label
00:00:24.095 [Pipeline] httpRequest
00:00:24.100 HttpMethod: GET
00:00:24.101 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:24.102 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:24.113 Response Code: HTTP/1.1 200 OK
00:00:24.114 Success: Status code 200 is in the accepted range: 200,404
00:00:24.115 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:26.489 [Pipeline] sh
00:00:26.765 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:26.778 [Pipeline] httpRequest
00:00:26.781 HttpMethod: GET
00:00:26.781 URL: http://10.211.164.101/packages/spdk_3c7f5112b1c9d0876747afe95c1dfdcb4cd3b40a.tar.gz
00:00:26.782 Sending request to url: http://10.211.164.101/packages/spdk_3c7f5112b1c9d0876747afe95c1dfdcb4cd3b40a.tar.gz
00:00:26.805 Response Code: HTTP/1.1 200 OK
00:00:26.805 Success: Status code 200 is in the accepted range: 200,404
00:00:26.806 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_3c7f5112b1c9d0876747afe95c1dfdcb4cd3b40a.tar.gz
00:01:58.449 [Pipeline] sh
00:01:58.728 + tar --no-same-owner -xf spdk_3c7f5112b1c9d0876747afe95c1dfdcb4cd3b40a.tar.gz
00:02:02.020 [Pipeline] sh
00:02:02.299 + git -C spdk log --oneline -n5
00:02:02.299 3c7f5112b go/rpc: Implementation of wrapper for go-rpc client
00:02:02.299 0a5aebcde go/rpc: Initial implementation of rpc call generator
00:02:02.299 8b1e208cc python/rpc: Python rpc docs generator.
00:02:02.299 98215362c python/rpc: Replace jsonrpc.md with generated docs
00:02:02.299 43217a125 python/rpc: Python rpc call generator.
00:02:02.318 [Pipeline] writeFile
00:02:02.334 [Pipeline] sh
00:02:02.609 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:02.620 [Pipeline] sh
00:02:02.894 + cat autorun-spdk.conf
00:02:02.894 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:02.894 SPDK_TEST_NVMF=1
00:02:02.894 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:02.894 SPDK_TEST_USDT=1
00:02:02.894 SPDK_TEST_NVMF_MDNS=1
00:02:02.894 SPDK_RUN_UBSAN=1
00:02:02.894 NET_TYPE=virt
00:02:02.894 SPDK_JSONRPC_GO_CLIENT=1
00:02:02.894 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:02.901 RUN_NIGHTLY=0
00:02:02.903 [Pipeline] }
00:02:02.920 [Pipeline] // stage
00:02:02.933 [Pipeline] stage
00:02:02.935 [Pipeline] { (Run VM)
00:02:02.949 [Pipeline] sh
00:02:03.229 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:03.229 + echo 'Start stage prepare_nvme.sh'
00:02:03.229 Start stage prepare_nvme.sh
00:02:03.229 + [[ -n 4 ]]
00:02:03.229 + disk_prefix=ex4
00:02:03.229 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:02:03.229 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:02:03.229 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:02:03.229 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:03.229 ++ SPDK_TEST_NVMF=1
00:02:03.229 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:03.229 ++ SPDK_TEST_USDT=1
00:02:03.229 ++ SPDK_TEST_NVMF_MDNS=1
00:02:03.229 ++ SPDK_RUN_UBSAN=1
00:02:03.229 ++ NET_TYPE=virt
00:02:03.229 ++ SPDK_JSONRPC_GO_CLIENT=1
00:02:03.229 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:03.229 ++ RUN_NIGHTLY=0
00:02:03.229 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:02:03.229 + nvme_files=()
00:02:03.229 + declare -A nvme_files
00:02:03.229 + backend_dir=/var/lib/libvirt/images/backends
00:02:03.229 + nvme_files['nvme.img']=5G
00:02:03.229 + nvme_files['nvme-cmb.img']=5G
00:02:03.229 + nvme_files['nvme-multi0.img']=4G
00:02:03.229 + nvme_files['nvme-multi1.img']=4G
00:02:03.229 + nvme_files['nvme-multi2.img']=4G
00:02:03.229 + nvme_files['nvme-openstack.img']=8G
00:02:03.229 + nvme_files['nvme-zns.img']=5G
00:02:03.229 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:03.229 + (( SPDK_TEST_FTL == 1 ))
00:02:03.229 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:03.229 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:03.229 + for nvme in "${!nvme_files[@]}"
00:02:03.230 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:02:03.230 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:03.230 + for nvme in "${!nvme_files[@]}"
00:02:03.230 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:02:03.230 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:03.230 + for nvme in "${!nvme_files[@]}"
00:02:03.230 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:02:03.230 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:03.230 + for nvme in "${!nvme_files[@]}"
00:02:03.230 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:02:03.230 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:03.230 + for nvme in "${!nvme_files[@]}"
00:02:03.230 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:02:03.230 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:03.230 + for nvme in "${!nvme_files[@]}"
00:02:03.230 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:02:03.487 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:03.487 + for nvme in "${!nvme_files[@]}"
00:02:03.487 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:02:04.862 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:04.862 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:02:04.862 + echo 'End stage prepare_nvme.sh'
00:02:04.862 End stage prepare_nvme.sh
00:02:04.876 [Pipeline] sh
00:02:05.155 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:05.155 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38
00:02:05.155
00:02:05.155 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:02:05.155 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:02:05.155 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:02:05.155 HELP=0
00:02:05.155 DRY_RUN=0
00:02:05.155 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:02:05.155 NVME_DISKS_TYPE=nvme,nvme,
00:02:05.155 NVME_AUTO_CREATE=0
00:02:05.155 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:02:05.155 NVME_CMB=,,
00:02:05.155 NVME_PMR=,,
00:02:05.155 NVME_ZNS=,,
00:02:05.155 NVME_MS=,,
00:02:05.155 NVME_FDP=,,
00:02:05.155 SPDK_VAGRANT_DISTRO=fedora38
00:02:05.155 SPDK_VAGRANT_VMCPU=10
00:02:05.155 SPDK_VAGRANT_VMRAM=12288
00:02:05.155 SPDK_VAGRANT_PROVIDER=libvirt
00:02:05.155 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:05.155 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:05.155 SPDK_OPENSTACK_NETWORK=0
00:02:05.155 VAGRANT_PACKAGE_BOX=0
00:02:05.155 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:05.155 FORCE_DISTRO=true
00:02:05.155 VAGRANT_BOX_VERSION=
00:02:05.155 EXTRA_VAGRANTFILES=
00:02:05.155 NIC_MODEL=e1000
00:02:05.155
00:02:05.155 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt'
00:02:05.155 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:02:08.454 Bringing machine 'default' up with 'libvirt' provider...
00:02:09.021 ==> default: Creating image (snapshot of base box volume).
00:02:09.280 ==> default: Creating domain with the following settings...
00:02:09.280 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1718014354_a44979069ae2187fa7bc
00:02:09.280 ==> default: -- Domain type: kvm
00:02:09.280 ==> default: -- Cpus: 10
00:02:09.280 ==> default: -- Feature: acpi
00:02:09.280 ==> default: -- Feature: apic
00:02:09.280 ==> default: -- Feature: pae
00:02:09.280 ==> default: -- Memory: 12288M
00:02:09.280 ==> default: -- Memory Backing: hugepages:
00:02:09.280 ==> default: -- Management MAC:
00:02:09.280 ==> default: -- Loader:
00:02:09.280 ==> default: -- Nvram:
00:02:09.280 ==> default: -- Base box: spdk/fedora38
00:02:09.280 ==> default: -- Storage pool: default
00:02:09.280 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1718014354_a44979069ae2187fa7bc.img (20G)
00:02:09.280 ==> default: -- Volume Cache: default
00:02:09.280 ==> default: -- Kernel:
00:02:09.280 ==> default: -- Initrd:
00:02:09.280 ==> default: -- Graphics Type: vnc
00:02:09.280 ==> default: -- Graphics Port: -1
00:02:09.280 ==> default: -- Graphics IP: 127.0.0.1
00:02:09.280 ==> default: -- Graphics Password: Not defined
00:02:09.280 ==> default: -- Video Type: cirrus
00:02:09.280 ==> default: -- Video VRAM: 9216
00:02:09.280 ==> default: -- Sound Type:
00:02:09.280 ==> default: -- Keymap: en-us
00:02:09.280 ==> default: -- TPM Path:
00:02:09.280 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:09.281 ==> default: -- Command line args:
00:02:09.281 ==> default: -> value=-device,
00:02:09.281 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:09.281 ==> default: -> value=-drive,
00:02:09.281 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:02:09.281 ==> default: -> value=-device,
00:02:09.281 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:09.281 ==> default: -> value=-device,
00:02:09.281 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:09.281 ==> default: -> value=-drive,
00:02:09.281 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:02:09.281 ==> default: -> value=-device,
00:02:09.281 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:09.281 ==> default: -> value=-drive,
00:02:09.281 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:02:09.281 ==> default: -> value=-device,
00:02:09.281 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:09.281 ==> default: -> value=-drive,
00:02:09.281 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:02:09.281 ==> default: -> value=-device,
00:02:09.281 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:09.538 ==> default: Creating shared folders metadata...
00:02:09.538 ==> default: Starting domain.
00:02:11.441 ==> default: Waiting for domain to get an IP address...
00:02:29.592 ==> default: Waiting for SSH to become available...
00:02:29.592 ==> default: Configuring and enabling network interfaces...
00:02:33.777 default: SSH address: 192.168.121.89:22
00:02:33.777 default: SSH username: vagrant
00:02:33.777 default: SSH auth method: private key
00:02:36.306 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:44.425 ==> default: Mounting SSHFS shared folder...
00:02:45.845 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:02:45.845 ==> default: Checking Mount..
00:02:47.230 ==> default: Folder Successfully Mounted!
00:02:47.230 ==> default: Running provisioner: file...
00:02:48.166 default: ~/.gitconfig => .gitconfig
00:02:48.733
00:02:48.733 SUCCESS!
00:02:48.733
00:02:48.733 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:02:48.733 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:48.733 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:02:48.733
00:02:48.741 [Pipeline] }
00:02:48.760 [Pipeline] // stage
00:02:48.770 [Pipeline] dir
00:02:48.770 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt
00:02:48.772 [Pipeline] {
00:02:48.790 [Pipeline] catchError
00:02:48.792 [Pipeline] {
00:02:48.809 [Pipeline] sh
00:02:49.089 + vagrant ssh-config --host vagrant
00:02:49.089 + sed -ne /^Host/,$p
00:02:49.089 + tee ssh_conf
00:02:53.284 Host vagrant
00:02:53.284 HostName 192.168.121.89
00:02:53.284 User vagrant
00:02:53.284 Port 22
00:02:53.284 UserKnownHostsFile /dev/null
00:02:53.284 StrictHostKeyChecking no
00:02:53.284 PasswordAuthentication no
00:02:53.284 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:02:53.284 IdentitiesOnly yes
00:02:53.284 LogLevel FATAL
00:02:53.284 ForwardAgent yes
00:02:53.284 ForwardX11 yes
00:02:53.284
00:02:53.297 [Pipeline] withEnv
00:02:53.299 [Pipeline] {
00:02:53.314 [Pipeline] sh
00:02:53.591 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:53.591 source /etc/os-release
00:02:53.591 [[ -e /image.version ]] && img=$(< /image.version)
00:02:53.591 # Minimal, systemd-like check.
00:02:53.591 if [[ -e /.dockerenv ]]; then
00:02:53.591 # Clear garbage from the node's name:
00:02:53.591 # agt-er_autotest_547-896 -> autotest_547-896
00:02:53.591 # $HOSTNAME is the actual container id
00:02:53.591 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:53.591 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:53.591 # We can assume this is a mount from a host where container is running,
00:02:53.591 # so fetch its hostname to easily identify the target swarm worker.
00:02:53.591 container="$(< /etc/hostname) ($agent)"
00:02:53.591 else
00:02:53.591 # Fallback
00:02:53.592 container=$agent
00:02:53.592 fi
00:02:53.592 fi
00:02:53.592 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:53.592
00:02:53.859 [Pipeline] }
00:02:53.878 [Pipeline] // withEnv
00:02:53.885 [Pipeline] setCustomBuildProperty
00:02:53.898 [Pipeline] stage
00:02:53.900 [Pipeline] { (Tests)
00:02:53.916 [Pipeline] sh
00:02:54.191 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:54.462 [Pipeline] sh
00:02:54.816 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:54.834 [Pipeline] timeout
00:02:54.834 Timeout set to expire in 40 min
00:02:54.837 [Pipeline] {
00:02:54.854 [Pipeline] sh
00:02:55.133 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:55.702 HEAD is now at 3c7f5112b go/rpc: Implementation of wrapper for go-rpc client
00:02:55.716 [Pipeline] sh
00:02:55.997 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:56.266 [Pipeline] sh
00:02:56.538 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:56.812 [Pipeline] sh
00:02:57.091 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:02:57.092 ++ readlink -f spdk_repo
00:02:57.092 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:57.092 + [[ -n /home/vagrant/spdk_repo ]]
00:02:57.092 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:57.092 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:57.092 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:57.092 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:57.092 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:57.092 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:02:57.092 + cd /home/vagrant/spdk_repo
00:02:57.092 + source /etc/os-release
00:02:57.092 ++ NAME='Fedora Linux'
00:02:57.092 ++ VERSION='38 (Cloud Edition)'
00:02:57.092 ++ ID=fedora
00:02:57.092 ++ VERSION_ID=38
00:02:57.092 ++ VERSION_CODENAME=
00:02:57.092 ++ PLATFORM_ID=platform:f38
00:02:57.092 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:57.092 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:57.092 ++ LOGO=fedora-logo-icon
00:02:57.092 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:57.092 ++ HOME_URL=https://fedoraproject.org/
00:02:57.092 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:57.092 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:57.092 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:57.092 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:57.092 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:57.092 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:57.092 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:57.092 ++ SUPPORT_END=2024-05-14
00:02:57.092 ++ VARIANT='Cloud Edition'
00:02:57.092 ++ VARIANT_ID=cloud
00:02:57.092 + uname -a
00:02:57.350 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:57.350 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:57.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:57.609 Hugepages
00:02:57.609 node hugesize free / total
00:02:57.609 node0 1048576kB 0 / 0
00:02:57.609 node0 2048kB 0 / 0
00:02:57.609
00:02:57.609 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:57.867 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:57.867 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:57.867 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:57.867 + rm -f /tmp/spdk-ld-path
00:02:57.867 + source autorun-spdk.conf
00:02:57.867 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:57.867 ++ SPDK_TEST_NVMF=1
00:02:57.867 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:57.867 ++ SPDK_TEST_USDT=1
00:02:57.867 ++ SPDK_TEST_NVMF_MDNS=1
00:02:57.867 ++ SPDK_RUN_UBSAN=1
00:02:57.867 ++ NET_TYPE=virt
00:02:57.867 ++ SPDK_JSONRPC_GO_CLIENT=1
00:02:57.867 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:57.867 ++ RUN_NIGHTLY=0
00:02:57.867 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:57.867 + [[ -n '' ]]
00:02:57.867 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:57.867 + for M in /var/spdk/build-*-manifest.txt
00:02:57.867 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:57.867 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:57.867 + for M in /var/spdk/build-*-manifest.txt
00:02:57.867 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:57.867 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:57.867 ++ uname
00:02:57.867 + [[ Linux == \L\i\n\u\x ]]
00:02:57.867 + sudo dmesg -T
00:02:57.867 + sudo dmesg --clear
00:02:57.867 + dmesg_pid=5159
00:02:57.867 + sudo dmesg -Tw
00:02:57.867 + [[ Fedora Linux == FreeBSD ]]
00:02:57.867 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:57.867 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:57.867 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:57.867 + [[ -x /usr/src/fio-static/fio ]]
00:02:57.867 + export FIO_BIN=/usr/src/fio-static/fio
00:02:57.867 + FIO_BIN=/usr/src/fio-static/fio
00:02:57.867 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:57.867 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:57.868 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:57.868 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:57.868 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:57.868 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:57.868 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:57.868 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:57.868 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:57.868 Test configuration:
00:02:57.868 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:57.868 SPDK_TEST_NVMF=1
00:02:57.868 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:57.868 SPDK_TEST_USDT=1
00:02:57.868 SPDK_TEST_NVMF_MDNS=1
00:02:57.868 SPDK_RUN_UBSAN=1
00:02:57.868 NET_TYPE=virt
00:02:57.868 SPDK_JSONRPC_GO_CLIENT=1
00:02:57.868 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:58.126 RUN_NIGHTLY=0
00:02:58.126 10:13:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:58.126 10:13:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:58.126 10:13:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:58.126 10:13:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:58.126 10:13:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:58.126 10:13:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:58.126 10:13:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:58.126 10:13:23 -- paths/export.sh@5 -- $ export PATH
00:02:58.126 10:13:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:58.127 10:13:23 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:58.127 10:13:23 -- common/autobuild_common.sh@437 -- $ date +%s
00:02:58.127 10:13:23 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718014403.XXXXXX
00:02:58.127 10:13:23 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718014403.4Xs5Cv
00:02:58.127 10:13:23 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:02:58.127 10:13:23 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:02:58.127 10:13:23 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:58.127 10:13:23 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:58.127 10:13:23 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:58.127 10:13:23 -- common/autobuild_common.sh@453 -- $ get_config_params
00:02:58.127 10:13:23 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:02:58.127 10:13:23 -- common/autotest_common.sh@10 -- $ set +x
00:02:58.127 10:13:24 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang'
00:02:58.127 10:13:24 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:02:58.127 10:13:24 -- pm/common@17 -- $ local monitor
00:02:58.127 10:13:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:58.127 10:13:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:58.127 10:13:24 -- pm/common@25 -- $ sleep 1
00:02:58.127 10:13:24 -- pm/common@21 -- $ date +%s
00:02:58.127 10:13:24 -- pm/common@21 -- $ date +%s
00:02:58.127 10:13:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1718014404
00:02:58.127 10:13:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1718014404
00:02:58.127 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1718014404_collect-vmstat.pm.log
00:02:58.127 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1718014404_collect-cpu-load.pm.log
00:02:58.127 Traceback (most recent call last):
00:02:58.127 File "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", line 24, in <module>
00:02:58.127 import spdk.rpc as rpc # noqa
00:02:58.127 ^^^^^^^^^^^^^^^^^^^^^^
00:02:58.127 File "/home/vagrant/spdk_repo/spdk/python/spdk/rpc/__init__.py", line 13, in <module>
00:02:58.127 from . import bdev
00:02:58.127 File "/home/vagrant/spdk_repo/spdk/python/spdk/rpc/bdev.py", line 6, in <module>
00:02:58.127 from spdk.rpc.rpc import *
00:02:58.127 ModuleNotFoundError: No module named 'spdk.rpc.rpc'
00:02:59.122 10:13:25 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:02:59.122 10:13:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:59.122 10:13:25 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:59.122 10:13:25 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:59.122 10:13:25 -- spdk/autobuild.sh@16 -- $ date -u
00:02:59.122 Mon Jun 10 10:13:25 AM UTC 2024
00:02:59.122 10:13:25 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:59.122 v24.09-pre-64-g3c7f5112b
00:02:59.122 10:13:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:59.122 10:13:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:59.122 10:13:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:59.122 10:13:25 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:02:59.122 10:13:25 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:02:59.122 10:13:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:59.122 ************************************
00:02:59.122 START TEST ubsan
00:02:59.122 ************************************
00:02:59.122 using ubsan
00:02:59.122 10:13:25 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan'
00:02:59.122
00:02:59.122 real 0m0.000s
00:02:59.122 user 0m0.000s
00:02:59.122 sys 0m0.000s
00:02:59.122 10:13:25 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:02:59.122 ************************************
00:02:59.122 END TEST ubsan
00:02:59.122 ************************************
00:02:59.122 10:13:25 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:59.122 10:13:25 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:59.122 10:13:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:59.122 10:13:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:59.122 10:13:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:59.122 10:13:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:59.122 10:13:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:59.122 10:13:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:59.122 10:13:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:59.122 10:13:25 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
00:02:59.122 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:59.122 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:59.690 Using 'verbs' RDMA provider
00:03:15.585 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:27.790 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:27.790 go version go1.21.1 linux/amd64
00:03:27.790 Creating mk/config.mk...done.
00:03:27.790 Creating mk/cc.flags.mk...done.
00:03:27.790 Type 'make' to build.
00:03:27.790 10:13:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:03:27.790 10:13:53 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:03:27.790 10:13:53 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:03:27.790 10:13:53 -- common/autotest_common.sh@10 -- $ set +x
00:03:27.790 ************************************
00:03:27.790 START TEST make
00:03:27.790 ************************************
00:03:27.790 10:13:53 make -- common/autotest_common.sh@1124 -- $ make -j10
00:03:27.790 go: downloading golang.org/x/text v0.14.0
00:03:38.412 2024/06/10 10:14:02 error when reading a file at path: /home/vagrant/spdk_repo/spdk/schema/rpc.json, err: open /home/vagrant/spdk_repo/spdk/schema/rpc.json: no such file or directory
00:03:38.412 make[1]: *** [Makefile:27: structs] Error 1
00:03:38.412 make: *** [/home/vagrant/spdk_repo/spdk/mk/spdk.subdirs.mk:16: go/rpc] Error 2
00:03:38.412 make: *** Waiting for unfinished jobs....
00:03:56.501 10:14:20 make -- common/autotest_common.sh@1124 -- $ trap - ERR
00:03:56.501 10:14:20 make -- common/autotest_common.sh@1124 -- $ print_backtrace
00:03:56.501 10:14:20 make -- common/autotest_common.sh@1152 -- $ [[ ehxBET =~ e ]]
00:03:56.501 10:14:20 make -- common/autotest_common.sh@1154 -- $ args=('-j10' 'make' 'make' '/home/vagrant/spdk_repo/autorun-spdk.conf')
00:03:56.501 10:14:20 make -- common/autotest_common.sh@1154 -- $ local args
00:03:56.501 10:14:20 make -- common/autotest_common.sh@1156 -- $ xtrace_disable
00:03:56.501 10:14:20 make -- common/autotest_common.sh@10 -- $ set +x
00:03:56.501 ========== Backtrace start: ==========
00:03:56.501
00:03:56.501 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1124 -> run_test(["make"],["make"],["-j10"])
00:03:56.501 ...
00:03:56.501   1119	timing_enter $test_name
00:03:56.501   1120	echo "************************************"
00:03:56.501   1121	echo "START TEST $test_name"
00:03:56.501   1122	echo "************************************"
00:03:56.501   1123	xtrace_restore
00:03:56.501   1124	time "$@"
00:03:56.501   1125	xtrace_disable
00:03:56.501   1126	echo "************************************"
00:03:56.501   1127	echo "END TEST $test_name"
00:03:56.501   1128	echo "************************************"
00:03:56.501   1129	timing_exit $test_name
00:03:56.501 ...
00:03:56.501 in /home/vagrant/spdk_repo/spdk/autobuild.sh:69 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"])
00:03:56.501 ...
00:03:56.501     64	$rootdir/configure $config_params
00:03:56.501     65	else
00:03:56.501     66	# if we aren't testing the unittests, build with shared objects.
00:03:56.501     67	$rootdir/configure $config_params --with-shared
00:03:56.501     68	fi
00:03:56.501  => 69	run_test "make" $MAKE $MAKEFLAGS
00:03:56.501     70	fi
00:03:56.501 ...
00:03:56.501
00:03:56.501 ========== Backtrace end ==========
00:03:56.501 10:14:20 make -- common/autotest_common.sh@1193 -- $ return 0
00:03:56.501
00:03:56.501 real	0m26.652s
00:03:56.501 user	2m43.576s
00:03:56.501 sys	0m28.619s
00:03:56.501 10:14:20 make -- common/autotest_common.sh@1 -- $ stop_monitor_resources
00:03:56.501 10:14:20 make -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:56.501 10:14:20 make -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:56.501 10:14:20 make -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:56.501 10:14:20 make -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:03:56.501 10:14:20 make -- pm/common@44 -- $ pid=5196
00:03:56.501 10:14:20 make -- pm/common@50 -- $ kill -TERM 5196
00:03:56.501 10:14:20 make -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:56.501 10:14:20 make -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:03:56.501 10:14:20 make -- pm/common@44 -- $ pid=5197
00:03:56.501 10:14:20 make -- pm/common@50 -- $ kill -TERM 5197
00:03:56.511 [Pipeline] }
00:03:56.525 [Pipeline] // timeout
00:03:56.530 [Pipeline] }
00:03:56.546 [Pipeline] // stage
00:03:56.553 [Pipeline] }
00:03:56.557 ERROR: script returned exit code 2
00:03:56.557 Setting overall build result to FAILURE
00:03:56.575 [Pipeline] // catchError
00:03:56.583 [Pipeline] stage
00:03:56.585 [Pipeline] { (Stop VM)
00:03:56.600 [Pipeline] sh
00:03:56.878 + vagrant halt
00:04:01.066 ==> default: Halting domain...
00:04:07.674 [Pipeline] sh
00:04:07.955 + vagrant destroy -f
00:04:12.142 ==> default: Removing domain...
00:04:12.155 [Pipeline] sh
00:04:12.433 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:04:12.443 [Pipeline] }
00:04:12.462 [Pipeline] // stage
00:04:12.468 [Pipeline] }
00:04:12.487 [Pipeline] // dir
00:04:12.491 [Pipeline] }
00:04:12.505 [Pipeline] // wrap
00:04:12.512 [Pipeline] }
00:04:12.526 [Pipeline] // catchError
00:04:12.536 [Pipeline] stage
00:04:12.538 [Pipeline] { (Epilogue)
00:04:12.550 [Pipeline] sh
00:04:12.833 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:04:12.845 [Pipeline] catchError
00:04:12.847 [Pipeline] {
00:04:12.862 [Pipeline] sh
00:04:13.186 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:04:13.186 Artifacts sizes are good
00:04:13.196 [Pipeline] }
00:04:13.214 [Pipeline] // catchError
00:04:13.225 [Pipeline] archiveArtifacts
00:04:13.232 Archiving artifacts
00:04:13.262 [Pipeline] cleanWs
00:04:13.272 [WS-CLEANUP] Deleting project workspace...
00:04:13.272 [WS-CLEANUP] Deferred wipeout is used...
00:04:13.277 [WS-CLEANUP] done
00:04:13.279 [Pipeline] }
00:04:13.296 [Pipeline] // stage
00:04:13.302 [Pipeline] }
00:04:13.318 [Pipeline] // node
00:04:13.323 [Pipeline] End of Pipeline
00:04:13.348 Finished: FAILURE