00:00:00.002 Started by upstream project "autotest-nightly" build number 3339 00:00:00.002 originally caused by: 00:00:00.003 Started by upstream project "nightly-trigger" build number 2733 00:00:00.003 originally caused by: 00:00:00.003 Started by timer 00:00:00.114 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.115 The recommended git tool is: git 00:00:00.115 using credential 00000000-0000-0000-0000-000000000002 00:00:00.116 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.143 Fetching changes from the remote Git repository 00:00:00.144 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.163 Using shallow fetch with depth 1 00:00:00.163 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.163 > git --version # timeout=10 00:00:00.183 > git --version # 'git version 2.39.2' 00:00:00.183 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.183 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.183 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.985 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.995 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.006 Checking out Revision 10b73a6b8d61c05f3981f9d6fab712fcdadeb236 (FETCH_HEAD) 00:00:06.006 > git config core.sparsecheckout # timeout=10 00:00:06.016 > git read-tree -mu HEAD # timeout=10 00:00:06.031 > git checkout -f 10b73a6b8d61c05f3981f9d6fab712fcdadeb236 # timeout=5 00:00:06.048 Commit message: "jenkins/check-jenkins-labels: add ExtraStorage label" 00:00:06.048 > git rev-list --no-walk 10b73a6b8d61c05f3981f9d6fab712fcdadeb236 # timeout=10 00:00:06.122 [Pipeline] Start of Pipeline 00:00:06.130 [Pipeline] library 00:00:06.131 Loading library shm_lib@master 00:00:06.131 Library shm_lib@master is cached. Copying from home. 00:00:06.143 [Pipeline] node 00:00:06.151 Running on VM-host-SM4 in /var/jenkins/workspace/freebsd-vg-autotest 00:00:06.152 [Pipeline] { 00:00:06.162 [Pipeline] catchError 00:00:06.163 [Pipeline] { 00:00:06.174 [Pipeline] wrap 00:00:06.180 [Pipeline] { 00:00:06.185 [Pipeline] stage 00:00:06.186 [Pipeline] { (Prologue) 00:00:06.199 [Pipeline] echo 00:00:06.199 Node: VM-host-SM4 00:00:06.204 [Pipeline] cleanWs 00:00:06.211 [WS-CLEANUP] Deleting project workspace... 00:00:06.211 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.216 [WS-CLEANUP] done 00:00:06.364 [Pipeline] setCustomBuildProperty 00:00:06.427 [Pipeline] nodesByLabel 00:00:06.428 Found a total of 2 nodes with the 'sorcerer' label 00:00:06.435 [Pipeline] httpRequest 00:00:06.438 HttpMethod: GET 00:00:06.439 URL: http://10.211.11.40/jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:06.439 Sending request to url: http://10.211.11.40/jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:06.465 Response Code: HTTP/1.1 200 OK 00:00:06.465 Success: Status code 200 is in the accepted range: 200,404 00:00:06.466 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:31.888 [Pipeline] sh 00:00:32.171 + tar --no-same-owner -xf jbp_10b73a6b8d61c05f3981f9d6fab712fcdadeb236.tar.gz 00:00:32.199 [Pipeline] httpRequest 00:00:32.203 HttpMethod: GET 00:00:32.204 URL: http://10.211.11.40/spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:00:32.207 Sending request to url: http://10.211.11.40/spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:00:32.223 Response Code: HTTP/1.1 200 OK 00:00:32.223 Success: Status code 200 is in the accepted range: 200,404 00:00:32.224 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:01:09.749 [Pipeline] sh 00:01:10.039 + tar --no-same-owner -xf spdk_aa824ae66823f5ea665c4713c1fa0c6963b5c3b2.tar.gz 00:01:12.610 [Pipeline] sh 00:01:12.888 + git -C spdk log --oneline -n5 00:01:12.888 aa824ae66 bdevperf: remove max io size limit for verify 00:01:12.888 161ef3f54 scripts/perf: Rename vhost_*master_core to vhost_*main_core 00:01:12.888 8bba6ed63 fuzz/llvm_vfio_fuzz: Adjust array index to avoid overflow 00:01:12.888 387dbedc4 env_dpdk: fix build with OpenSSL < 3.0.0 00:01:12.888 2b5de63c1 include: ensure ENOKEY is defined on FreeBSD 00:01:12.902 [Pipeline] writeFile 00:01:12.915 [Pipeline] sh 00:01:13.197 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:13.208 [Pipeline] sh 00:01:13.487 + cat autorun-spdk.conf 00:01:13.487 RUN_NIGHTLY=1 00:01:13.487 SPDK_TEST_UNITTEST=1 00:01:13.487 SPDK_RUN_VALGRIND=0 00:01:13.487 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.487 SPDK_TEST_NVME=1 00:01:13.487 SPDK_TEST_BLOCKDEV=1 00:01:13.494 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:13.498 [Pipeline] } 00:01:13.513 [Pipeline] // stage 00:01:13.527 [Pipeline] stage 00:01:13.528 [Pipeline] { (Run VM) 00:01:13.546 [Pipeline] sh 00:01:13.855 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:13.855 + echo 'Start stage prepare_nvme.sh' 00:01:13.855 Start stage prepare_nvme.sh 00:01:13.855 + [[ -n 8 ]] 00:01:13.855 + disk_prefix=ex8 00:01:13.855 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest ]] 00:01:13.855 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf ]] 00:01:13.855 + source /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf 00:01:13.855 ++ RUN_NIGHTLY=1 00:01:13.855 ++ SPDK_TEST_UNITTEST=1 00:01:13.855 ++ SPDK_RUN_VALGRIND=0 00:01:13.855 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.855 ++ SPDK_TEST_NVME=1 00:01:13.855 ++ SPDK_TEST_BLOCKDEV=1 00:01:13.855 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:13.855 + cd /var/jenkins/workspace/freebsd-vg-autotest 00:01:13.855 + nvme_files=() 00:01:13.855 + declare -A nvme_files 00:01:13.855 + backend_dir=/var/lib/libvirt/images/backends 00:01:13.855 + nvme_files['nvme.img']=5G 00:01:13.855 + nvme_files['nvme-cmb.img']=5G 00:01:13.855 + nvme_files['nvme-multi0.img']=4G 00:01:13.855 + 
nvme_files['nvme-multi1.img']=4G 00:01:13.855 + nvme_files['nvme-multi2.img']=4G 00:01:13.855 + nvme_files['nvme-openstack.img']=8G 00:01:13.855 + nvme_files['nvme-zns.img']=5G 00:01:13.855 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:13.855 + (( SPDK_TEST_FTL == 1 )) 00:01:13.855 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:13.855 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:13.855 + for nvme in "${!nvme_files[@]}" 00:01:13.855 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:01:13.855 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:13.855 + for nvme in "${!nvme_files[@]}" 00:01:13.855 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:01:13.855 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:13.855 + for nvme in "${!nvme_files[@]}" 00:01:13.855 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:01:13.855 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:13.855 + for nvme in "${!nvme_files[@]}" 00:01:13.855 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:01:14.114 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.114 + for nvme in "${!nvme_files[@]}" 00:01:14.114 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:01:14.114 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:14.114 + for nvme in "${!nvme_files[@]}" 00:01:14.114 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:01:14.114 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:14.114 + for nvme in "${!nvme_files[@]}" 00:01:14.114 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:01:14.373 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.373 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:01:14.373 + echo 'End stage prepare_nvme.sh' 00:01:14.373 End stage prepare_nvme.sh 00:01:14.383 [Pipeline] sh 00:01:14.663 + DISTRO=freebsd13 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:14.663 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme.img -H -a -v -f freebsd13 00:01:14.663 00:01:14.663 DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant 00:01:14.663 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk 00:01:14.663 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest 00:01:14.663 HELP=0 00:01:14.663 DRY_RUN=0 00:01:14.663 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme.img, 00:01:14.664 NVME_DISKS_TYPE=nvme, 00:01:14.664 NVME_AUTO_CREATE=0 00:01:14.664 NVME_DISKS_NAMESPACES=, 00:01:14.664 NVME_CMB=, 00:01:14.664 NVME_PMR=, 00:01:14.664 NVME_ZNS=, 00:01:14.664 NVME_MS=, 00:01:14.664 NVME_FDP=, 00:01:14.664 
SPDK_VAGRANT_DISTRO=freebsd13 00:01:14.664 SPDK_VAGRANT_VMCPU=10 00:01:14.664 SPDK_VAGRANT_VMRAM=12288 00:01:14.664 SPDK_VAGRANT_PROVIDER=libvirt 00:01:14.664 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:14.664 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:14.664 SPDK_OPENSTACK_NETWORK=0 00:01:14.664 VAGRANT_PACKAGE_BOX=0 00:01:14.664 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:14.664 FORCE_DISTRO=true 00:01:14.664 VAGRANT_BOX_VERSION= 00:01:14.664 EXTRA_VAGRANTFILES= 00:01:14.664 NIC_MODEL=e1000 00:01:14.664 00:01:14.664 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt' 00:01:14.664 /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt /var/jenkins/workspace/freebsd-vg-autotest 00:01:17.203 Bringing machine 'default' up with 'libvirt' provider... 00:01:17.772 ==> default: Creating image (snapshot of base box volume). 00:01:17.772 ==> default: Creating domain with the following settings... 00:01:17.772 ==> default: -- Name: freebsd13-13.2-RELEASE-1707898352-2154_default_1707937314_72d283228fc62e92c746 00:01:17.772 ==> default: -- Domain type: kvm 00:01:17.772 ==> default: -- Cpus: 10 00:01:17.772 ==> default: -- Feature: acpi 00:01:17.772 ==> default: -- Feature: apic 00:01:17.772 ==> default: -- Feature: pae 00:01:17.772 ==> default: -- Memory: 12288M 00:01:17.772 ==> default: -- Memory Backing: hugepages: 00:01:17.772 ==> default: -- Management MAC: 00:01:17.772 ==> default: -- Loader: 00:01:17.772 ==> default: -- Nvram: 00:01:17.772 ==> default: -- Base box: spdk/freebsd13 00:01:17.772 ==> default: -- Storage pool: default 00:01:17.772 ==> default: -- Image: /var/lib/libvirt/images/freebsd13-13.2-RELEASE-1707898352-2154_default_1707937314_72d283228fc62e92c746.img (32G) 00:01:17.772 ==> default: -- Volume Cache: default 00:01:17.772 ==> default: -- Kernel: 00:01:17.772 ==> default: -- Initrd: 00:01:17.772 ==> default: -- Graphics Type: vnc 00:01:17.772 ==> default: -- Graphics Port: -1 00:01:17.772 ==> default: -- Graphics IP: 127.0.0.1 00:01:17.772 ==> default: -- Graphics Password: Not defined 00:01:17.772 ==> default: -- Video Type: cirrus 00:01:17.772 ==> default: -- Video VRAM: 9216 00:01:17.772 ==> default: -- Sound Type: 00:01:17.772 ==> default: -- Keymap: en-us 00:01:17.772 ==> default: -- TPM Path: 00:01:17.772 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:17.772 ==> default: -- Command line args: 00:01:17.772 ==> default: -> value=-device, 00:01:17.772 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:17.772 ==> default: -> value=-drive, 00:01:17.772 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-0-drive0, 00:01:17.772 ==> default: -> value=-device, 00:01:17.772 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:17.772 ==> default: Creating shared folders metadata... 00:01:17.772 ==> default: Starting domain. 00:01:20.318 ==> default: Waiting for domain to get an IP address... 00:01:46.928 ==> default: Waiting for SSH to become available... 00:01:59.141 ==> default: Configuring and enabling network interfaces... 00:02:01.674 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:11.663 ==> default: Mounting SSHFS shared folder... 
00:02:12.602 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt/output => /home/vagrant/spdk_repo/output 00:02:12.602 ==> default: Checking Mount.. 00:02:13.171 ==> default: Folder Successfully Mounted! 00:02:13.171 ==> default: Running provisioner: file... 00:02:13.741 default: ~/.gitconfig => .gitconfig 00:02:14.001 00:02:14.001 SUCCESS! 00:02:14.001 00:02:14.001 cd to /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt and type "vagrant ssh" to use. 00:02:14.001 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:14.001 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt" to destroy all trace of vm. 00:02:14.001 00:02:14.011 [Pipeline] } 00:02:14.032 [Pipeline] // stage 00:02:14.041 [Pipeline] dir 00:02:14.041 Running in /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt 00:02:14.043 [Pipeline] { 00:02:14.056 [Pipeline] catchError 00:02:14.058 [Pipeline] { 00:02:14.072 [Pipeline] sh 00:02:14.353 + vagrant ssh-config --host vagrant 00:02:14.353 + sed -ne /^Host/,$p 00:02:14.353 + tee ssh_conf 00:02:17.642 Host vagrant 00:02:17.642 HostName 192.168.121.75 00:02:17.642 User vagrant 00:02:17.642 Port 22 00:02:17.642 UserKnownHostsFile /dev/null 00:02:17.642 StrictHostKeyChecking no 00:02:17.642 PasswordAuthentication no 00:02:17.642 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd13/13.2-RELEASE-1707898352-2154/libvirt/freebsd13 00:02:17.642 IdentitiesOnly yes 00:02:17.642 LogLevel FATAL 00:02:17.642 ForwardAgent yes 00:02:17.642 ForwardX11 yes 00:02:17.642 00:02:17.656 [Pipeline] withEnv 00:02:17.658 [Pipeline] { 00:02:17.674 [Pipeline] sh 00:02:17.956 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:17.956 source /etc/os-release 00:02:17.956 [[ -e /image.version ]] && img=$(< /image.version) 00:02:17.956 # Minimal, systemd-like check. 00:02:17.956 if [[ -e /.dockerenv ]]; then 00:02:17.956 # Clear garbage from the node's name: 00:02:17.956 # agt-er_autotest_547-896 -> autotest_547-896 00:02:17.956 agent=${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:17.956 if mountpoint -q /etc/hostname; then 00:02:17.956 # We can assume this is a mount from a host where container is running, 00:02:17.956 # so fetch its hostname to easily identify the target swarm worker. 
00:02:17.956 container="$(< /etc/hostname) ($agent)" 00:02:17.956 else 00:02:17.956 # Fallback 00:02:17.956 container=$agent 00:02:17.956 fi 00:02:17.956 fi 00:02:17.956 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:17.956 00:02:17.967 [Pipeline] } 00:02:17.989 [Pipeline] // withEnv 00:02:17.997 [Pipeline] setCustomBuildProperty 00:02:18.010 [Pipeline] stage 00:02:18.013 [Pipeline] { (Tests) 00:02:18.031 [Pipeline] sh 00:02:18.316 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:18.341 [Pipeline] timeout 00:02:18.341 Timeout set to expire in 1 hr 0 min 00:02:18.343 [Pipeline] { 00:02:18.358 [Pipeline] sh 00:02:18.639 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:19.208 HEAD is now at aa824ae66 bdevperf: remove max io size limit for verify 00:02:19.222 [Pipeline] sh 00:02:19.503 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:19.518 [Pipeline] sh 00:02:19.799 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:19.815 [Pipeline] sh 00:02:20.096 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang ./autoruner.sh spdk_repo 00:02:20.097 ++ readlink -f spdk_repo 00:02:20.097 + DIR_ROOT=/usr/home/vagrant/spdk_repo 00:02:20.097 + [[ -n /usr/home/vagrant/spdk_repo ]] 00:02:20.097 + DIR_SPDK=/usr/home/vagrant/spdk_repo/spdk 00:02:20.097 + DIR_OUTPUT=/usr/home/vagrant/spdk_repo/output 00:02:20.097 + [[ -d /usr/home/vagrant/spdk_repo/spdk ]] 00:02:20.097 + [[ ! -d /usr/home/vagrant/spdk_repo/output ]] 00:02:20.097 + [[ -d /usr/home/vagrant/spdk_repo/output ]] 00:02:20.097 + cd /usr/home/vagrant/spdk_repo 00:02:20.097 + source /etc/os-release 00:02:20.097 ++ NAME=FreeBSD 00:02:20.097 ++ VERSION=13.2-RELEASE 00:02:20.097 ++ VERSION_ID=13.2 00:02:20.097 ++ ID=freebsd 00:02:20.097 ++ ANSI_COLOR='0;31' 00:02:20.097 ++ PRETTY_NAME='FreeBSD 13.2-RELEASE' 00:02:20.097 ++ CPE_NAME=cpe:/o:freebsd:freebsd:13.2 00:02:20.097 ++ HOME_URL=https://FreeBSD.org/ 00:02:20.097 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/ 00:02:20.097 + uname -a 00:02:20.097 FreeBSD freebsd-cloud-1707898352-2154.local 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64 00:02:20.097 + sudo /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:20.356 Contigmem (not present) 00:02:20.356 Buffer Size: not set 00:02:20.356 Num Buffers: not set 00:02:20.356 00:02:20.356 00:02:20.356 Type BDF Vendor Device Driver 00:02:20.356 NVMe 0:0:6:0 0x1b36 0x0010 nvme0 00:02:20.356 + rm -f /tmp/spdk-ld-path 00:02:20.356 + source autorun-spdk.conf 00:02:20.356 ++ RUN_NIGHTLY=1 00:02:20.356 ++ SPDK_TEST_UNITTEST=1 00:02:20.356 ++ SPDK_RUN_VALGRIND=0 00:02:20.356 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.356 ++ SPDK_TEST_NVME=1 00:02:20.356 ++ SPDK_TEST_BLOCKDEV=1 00:02:20.356 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:20.357 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:20.357 + [[ -n '' ]] 00:02:20.357 + sudo git config --global --add safe.directory /usr/home/vagrant/spdk_repo/spdk 00:02:20.357 + for M in /var/spdk/build-*-manifest.txt 00:02:20.357 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:20.357 + cp /var/spdk/build-pkg-manifest.txt /usr/home/vagrant/spdk_repo/output/ 00:02:20.357 + for M in /var/spdk/build-*-manifest.txt 00:02:20.357 + [[ -f /var/spdk/build-repo-manifest.txt 
]] 00:02:20.357 + cp /var/spdk/build-repo-manifest.txt /usr/home/vagrant/spdk_repo/output/ 00:02:20.357 ++ uname 00:02:20.357 + [[ FreeBSD == \L\i\n\u\x ]] 00:02:20.357 + dmesg_pid=1267 00:02:20.357 + tail -F /var/log/messages 00:02:20.357 + [[ FreeBSD == FreeBSD ]] 00:02:20.357 + export LC_ALL=C LC_CTYPE=C 00:02:20.357 + LC_ALL=C 00:02:20.357 + LC_CTYPE=C 00:02:20.357 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:20.357 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:20.357 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:20.357 + [[ -x /usr/src/fio-static/fio ]] 00:02:20.357 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:20.357 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:20.357 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:20.357 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:20.357 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:20.357 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:20.357 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:20.357 + spdk/autorun.sh /usr/home/vagrant/spdk_repo/autorun-spdk.conf 00:02:20.357 Test configuration: 00:02:20.357 RUN_NIGHTLY=1 00:02:20.357 SPDK_TEST_UNITTEST=1 00:02:20.357 SPDK_RUN_VALGRIND=0 00:02:20.357 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.357 SPDK_TEST_NVME=1 00:02:20.357 SPDK_TEST_BLOCKDEV=1 00:02:20.357 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 19:02:57 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:20.357 19:02:57 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:20.357 19:02:57 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:20.357 19:02:57 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:20.357 19:02:57 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:20.357 19:02:57 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:20.357 19:02:57 -- paths/export.sh@4 -- $ export PATH 00:02:20.357 19:02:57 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:20.357 19:02:57 -- common/autobuild_common.sh@434 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:02:20.616 19:02:57 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:20.616 19:02:57 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1707937377.XXXXXX 00:02:20.616 19:02:57 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1707937377.XXXXXX.YQmlXlFk 00:02:20.616 19:02:57 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:20.616 19:02:57 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:02:20.616 19:02:57 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:02:20.616 19:02:57 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:20.616 19:02:57 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 
00:02:20.616 19:02:57 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:20.616 19:02:57 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:02:20.616 19:02:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.616 19:02:57 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:02:20.616 19:02:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:20.616 19:02:57 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:20.616 19:02:57 -- spdk/autobuild.sh@13 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:02:20.616 19:02:57 -- spdk/autobuild.sh@16 -- $ date -u 00:02:20.616 Wed Feb 14 19:02:57 UTC 2024 00:02:20.616 19:02:57 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:20.616 v24.05-pre-81-gaa824ae66 00:02:20.616 19:02:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:20.616 19:02:57 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:02:20.616 19:02:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:20.616 19:02:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:20.616 19:02:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:20.616 19:02:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:20.616 19:02:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:20.616 19:02:57 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:20.616 19:02:57 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:20.616 19:02:57 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build 00:02:20.616 19:02:57 -- common/autotest_common.sh@1075 -- $ '[' 2 -le 1 ']' 00:02:20.616 19:02:57 -- common/autotest_common.sh@1081 -- $ xtrace_disable 00:02:20.616 19:02:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.616 ************************************ 00:02:20.616 START TEST unittest_build 00:02:20.616 ************************************ 00:02:20.616 19:02:57 -- common/autotest_common.sh@1102 -- $ _unittest_build 00:02:20.616 19:02:57 -- common/autobuild_common.sh@402 -- $ /usr/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared 00:02:21.553 Notice: Vhost, rte_vhost library, virtio, and fuse 00:02:21.553 are only supported on Linux. Turning off default feature. 00:02:21.553 Using default SPDK env in /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:21.553 Using default DPDK in /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:02:22.488 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:02:22.488 Using 'verbs' RDMA provider 00:02:34.719 Configuring ISA-L (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:44.692 Configuring ISA-L-crypto (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:44.692 Creating mk/config.mk...done. 00:02:44.692 Creating mk/cc.flags.mk...done. 00:02:44.692 Type 'gmake' to build. 00:02:44.692 19:03:22 -- common/autobuild_common.sh@403 -- $ gmake -j10 00:02:44.951 gmake[1]: Nothing to be done for 'all'. 
00:02:49.161 ps: stdin: not a terminal 00:02:53.354 The Meson build system 00:02:53.354 Version: 1.3.1 00:02:53.354 Source dir: /usr/home/vagrant/spdk_repo/spdk/dpdk 00:02:53.354 Build dir: /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:53.354 Build type: native build 00:02:53.354 Program cat found: YES (/bin/cat) 00:02:53.354 Project name: DPDK 00:02:53.354 Project version: 23.11.0 00:02:53.354 C compiler for the host machine: /usr/bin/clang (clang 14.0.5 "FreeBSD clang version 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386ae247c)") 00:02:53.354 C linker for the host machine: /usr/bin/clang ld.lld 14.0.5 00:02:53.354 Host machine cpu family: x86_64 00:02:53.354 Host machine cpu: x86_64 00:02:53.354 Message: ## Building in Developer Mode ## 00:02:53.354 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:02:53.354 Program check-symbols.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:53.354 Program options-ibverbs-static.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:53.354 Program python3 found: YES (/usr/local/bin/python3.9) 00:02:53.354 Program cat found: YES (/bin/cat) 00:02:53.354 Compiler for C supports arguments -march=native: YES 00:02:53.354 Checking for size of "void *" : 8 00:02:53.354 Checking for size of "void *" : 8 (cached) 00:02:53.354 Library m found: YES 00:02:53.354 Library numa found: NO 00:02:53.354 Library fdt found: NO 00:02:53.354 Library execinfo found: YES 00:02:53.354 Has header "execinfo.h" : YES 00:02:53.354 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.0.3 00:02:53.354 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:53.354 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:53.354 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:53.354 Run-time dependency openssl found: YES 3.0.13 00:02:53.354 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:53.354 Library pcap found: YES 00:02:53.354 Has header "pcap.h" with dependency -lpcap: YES 00:02:53.354 Compiler for C supports arguments -Wcast-qual: YES 00:02:53.354 Compiler for C supports arguments -Wdeprecated: YES 00:02:53.354 Compiler for C supports arguments -Wformat: YES 00:02:53.355 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:53.355 Compiler for C supports arguments -Wformat-security: YES 00:02:53.355 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:53.355 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:53.355 Compiler for C supports arguments -Wnested-externs: YES 00:02:53.355 Compiler for C supports arguments -Wold-style-definition: YES 00:02:53.355 Compiler for C supports arguments -Wpointer-arith: YES 00:02:53.355 Compiler for C supports arguments -Wsign-compare: YES 00:02:53.355 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:53.355 Compiler for C supports arguments -Wundef: YES 00:02:53.355 Compiler for C supports arguments -Wwrite-strings: YES 00:02:53.355 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:53.355 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:53.355 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:53.355 Compiler for C supports arguments -mavx512f: YES 00:02:53.355 Checking if "AVX512 checking" compiles: YES 00:02:53.355 Fetching value of define "__SSE4_2__" : 1 00:02:53.355 Fetching value of define "__AES__" : 1 00:02:53.355 Fetching value of define 
"__AVX__" : 1 00:02:53.355 Fetching value of define "__AVX2__" : 1 00:02:53.355 Fetching value of define "__AVX512BW__" : 1 00:02:53.355 Fetching value of define "__AVX512CD__" : 1 00:02:53.355 Fetching value of define "__AVX512DQ__" : 1 00:02:53.355 Fetching value of define "__AVX512F__" : 1 00:02:53.355 Fetching value of define "__AVX512VL__" : 1 00:02:53.355 Fetching value of define "__PCLMUL__" : 1 00:02:53.355 Fetching value of define "__RDRND__" : 1 00:02:53.355 Fetching value of define "__RDSEED__" : 1 00:02:53.355 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:53.355 Fetching value of define "__znver1__" : (undefined) 00:02:53.355 Fetching value of define "__znver2__" : (undefined) 00:02:53.355 Fetching value of define "__znver3__" : (undefined) 00:02:53.355 Fetching value of define "__znver4__" : (undefined) 00:02:53.355 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:53.355 Message: lib/log: Defining dependency "log" 00:02:53.355 Message: lib/kvargs: Defining dependency "kvargs" 00:02:53.355 Message: lib/telemetry: Defining dependency "telemetry" 00:02:53.355 Checking if "Detect argument count for CPU_OR" compiles: YES 00:02:53.355 Checking for function "getentropy" : YES 00:02:53.355 Message: lib/eal: Defining dependency "eal" 00:02:53.355 Message: lib/ring: Defining dependency "ring" 00:02:53.355 Message: lib/rcu: Defining dependency "rcu" 00:02:53.355 Message: lib/mempool: Defining dependency "mempool" 00:02:53.355 Message: lib/mbuf: Defining dependency "mbuf" 00:02:53.355 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:53.355 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:53.355 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:53.355 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:53.355 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:53.355 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:53.355 Compiler for C supports arguments -mpclmul: YES 00:02:53.355 Compiler for C supports arguments -maes: YES 00:02:53.355 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:53.355 Compiler for C supports arguments -mavx512bw: YES 00:02:53.355 Compiler for C supports arguments -mavx512dq: YES 00:02:53.355 Compiler for C supports arguments -mavx512vl: YES 00:02:53.355 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:53.355 Compiler for C supports arguments -mavx2: YES 00:02:53.355 Compiler for C supports arguments -mavx: YES 00:02:53.355 Message: lib/net: Defining dependency "net" 00:02:53.355 Message: lib/meter: Defining dependency "meter" 00:02:53.355 Message: lib/ethdev: Defining dependency "ethdev" 00:02:53.355 Message: lib/pci: Defining dependency "pci" 00:02:53.355 Message: lib/cmdline: Defining dependency "cmdline" 00:02:53.355 Message: lib/hash: Defining dependency "hash" 00:02:53.355 Message: lib/timer: Defining dependency "timer" 00:02:53.355 Message: lib/compressdev: Defining dependency "compressdev" 00:02:53.355 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:53.355 Message: lib/dmadev: Defining dependency "dmadev" 00:02:53.355 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:53.355 Message: lib/reorder: Defining dependency "reorder" 00:02:53.355 Message: lib/security: Defining dependency "security" 00:02:53.355 Has header "linux/userfaultfd.h" : NO 00:02:53.355 Has header "linux/vduse.h" : NO 00:02:53.355 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:53.355 Message: drivers/bus/pci: Defining 
dependency "bus_pci" 00:02:53.355 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:53.355 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:53.355 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:53.355 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:53.355 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:53.355 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:02:53.355 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:53.355 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:53.355 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:53.355 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:53.355 Configuring doxy-api-html.conf using configuration 00:02:53.355 Configuring doxy-api-man.conf using configuration 00:02:53.355 Program mandb found: NO 00:02:53.355 Program sphinx-build found: NO 00:02:53.355 Configuring rte_build_config.h using configuration 00:02:53.355 Message: 00:02:53.355 ================= 00:02:53.355 Applications Enabled 00:02:53.355 ================= 00:02:53.355 00:02:53.355 apps: 00:02:53.355 00:02:53.355 00:02:53.355 Message: 00:02:53.355 ================= 00:02:53.355 Libraries Enabled 00:02:53.355 ================= 00:02:53.355 00:02:53.355 libs: 00:02:53.355 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:53.355 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:53.355 cryptodev, dmadev, reorder, security, 00:02:53.355 00:02:53.355 Message: 00:02:53.355 =============== 00:02:53.355 Drivers Enabled 00:02:53.355 =============== 00:02:53.355 00:02:53.355 common: 00:02:53.355 00:02:53.355 bus: 00:02:53.355 pci, vdev, 00:02:53.355 mempool: 00:02:53.355 ring, 00:02:53.355 dma: 00:02:53.355 00:02:53.355 net: 00:02:53.355 00:02:53.355 crypto: 00:02:53.355 00:02:53.355 compress: 00:02:53.355 00:02:53.355 00:02:53.355 Message: 00:02:53.355 ================= 00:02:53.355 Content Skipped 00:02:53.355 ================= 00:02:53.355 00:02:53.355 apps: 00:02:53.355 dumpcap: explicitly disabled via build config 00:02:53.355 graph: explicitly disabled via build config 00:02:53.355 pdump: explicitly disabled via build config 00:02:53.355 proc-info: explicitly disabled via build config 00:02:53.355 test-acl: explicitly disabled via build config 00:02:53.355 test-bbdev: explicitly disabled via build config 00:02:53.355 test-cmdline: explicitly disabled via build config 00:02:53.355 test-compress-perf: explicitly disabled via build config 00:02:53.355 test-crypto-perf: explicitly disabled via build config 00:02:53.355 test-dma-perf: explicitly disabled via build config 00:02:53.355 test-eventdev: explicitly disabled via build config 00:02:53.355 test-fib: explicitly disabled via build config 00:02:53.355 test-flow-perf: explicitly disabled via build config 00:02:53.355 test-gpudev: explicitly disabled via build config 00:02:53.355 test-mldev: explicitly disabled via build config 00:02:53.355 test-pipeline: explicitly disabled via build config 00:02:53.355 test-pmd: explicitly disabled via build config 00:02:53.355 test-regex: explicitly disabled via build config 00:02:53.355 test-sad: explicitly disabled via build config 00:02:53.355 test-security-perf: explicitly disabled via build config 00:02:53.355 00:02:53.355 libs: 00:02:53.355 metrics: explicitly disabled via build config 00:02:53.355 acl: explicitly disabled via 
build config 00:02:53.355 bbdev: explicitly disabled via build config 00:02:53.355 bitratestats: explicitly disabled via build config 00:02:53.355 bpf: explicitly disabled via build config 00:02:53.355 cfgfile: explicitly disabled via build config 00:02:53.355 distributor: explicitly disabled via build config 00:02:53.355 efd: explicitly disabled via build config 00:02:53.355 eventdev: explicitly disabled via build config 00:02:53.355 dispatcher: explicitly disabled via build config 00:02:53.355 gpudev: explicitly disabled via build config 00:02:53.355 gro: explicitly disabled via build config 00:02:53.355 gso: explicitly disabled via build config 00:02:53.355 ip_frag: explicitly disabled via build config 00:02:53.355 jobstats: explicitly disabled via build config 00:02:53.355 latencystats: explicitly disabled via build config 00:02:53.355 lpm: explicitly disabled via build config 00:02:53.355 member: explicitly disabled via build config 00:02:53.355 pcapng: explicitly disabled via build config 00:02:53.355 power: only supported on Linux 00:02:53.355 rawdev: explicitly disabled via build config 00:02:53.355 regexdev: explicitly disabled via build config 00:02:53.355 mldev: explicitly disabled via build config 00:02:53.355 rib: explicitly disabled via build config 00:02:53.355 sched: explicitly disabled via build config 00:02:53.355 stack: explicitly disabled via build config 00:02:53.355 vhost: only supported on Linux 00:02:53.355 ipsec: explicitly disabled via build config 00:02:53.355 pdcp: explicitly disabled via build config 00:02:53.355 fib: explicitly disabled via build config 00:02:53.355 port: explicitly disabled via build config 00:02:53.355 pdump: explicitly disabled via build config 00:02:53.355 table: explicitly disabled via build config 00:02:53.355 pipeline: explicitly disabled via build config 00:02:53.355 graph: explicitly disabled via build config 00:02:53.355 node: explicitly disabled via build config 00:02:53.355 00:02:53.355 drivers: 00:02:53.355 common/cpt: not in enabled drivers build config 00:02:53.355 common/dpaax: not in enabled drivers build config 00:02:53.355 common/iavf: not in enabled drivers build config 00:02:53.355 common/idpf: not in enabled drivers build config 00:02:53.355 common/mvep: not in enabled drivers build config 00:02:53.356 common/octeontx: not in enabled drivers build config 00:02:53.356 bus/auxiliary: not in enabled drivers build config 00:02:53.356 bus/cdx: not in enabled drivers build config 00:02:53.356 bus/dpaa: not in enabled drivers build config 00:02:53.356 bus/fslmc: not in enabled drivers build config 00:02:53.356 bus/ifpga: not in enabled drivers build config 00:02:53.356 bus/platform: not in enabled drivers build config 00:02:53.356 bus/vmbus: not in enabled drivers build config 00:02:53.356 common/cnxk: not in enabled drivers build config 00:02:53.356 common/mlx5: not in enabled drivers build config 00:02:53.356 common/nfp: not in enabled drivers build config 00:02:53.356 common/qat: not in enabled drivers build config 00:02:53.356 common/sfc_efx: not in enabled drivers build config 00:02:53.356 mempool/bucket: not in enabled drivers build config 00:02:53.356 mempool/cnxk: not in enabled drivers build config 00:02:53.356 mempool/dpaa: not in enabled drivers build config 00:02:53.356 mempool/dpaa2: not in enabled drivers build config 00:02:53.356 mempool/octeontx: not in enabled drivers build config 00:02:53.356 mempool/stack: not in enabled drivers build config 00:02:53.356 dma/cnxk: not in enabled drivers build config 
00:02:53.356 dma/dpaa: not in enabled drivers build config 00:02:53.356 dma/dpaa2: not in enabled drivers build config 00:02:53.356 dma/hisilicon: not in enabled drivers build config 00:02:53.356 dma/idxd: not in enabled drivers build config 00:02:53.356 dma/ioat: not in enabled drivers build config 00:02:53.356 dma/skeleton: not in enabled drivers build config 00:02:53.356 net/af_packet: not in enabled drivers build config 00:02:53.356 net/af_xdp: not in enabled drivers build config 00:02:53.356 net/ark: not in enabled drivers build config 00:02:53.356 net/atlantic: not in enabled drivers build config 00:02:53.356 net/avp: not in enabled drivers build config 00:02:53.356 net/axgbe: not in enabled drivers build config 00:02:53.356 net/bnx2x: not in enabled drivers build config 00:02:53.356 net/bnxt: not in enabled drivers build config 00:02:53.356 net/bonding: not in enabled drivers build config 00:02:53.356 net/cnxk: not in enabled drivers build config 00:02:53.356 net/cpfl: not in enabled drivers build config 00:02:53.356 net/cxgbe: not in enabled drivers build config 00:02:53.356 net/dpaa: not in enabled drivers build config 00:02:53.356 net/dpaa2: not in enabled drivers build config 00:02:53.356 net/e1000: not in enabled drivers build config 00:02:53.356 net/ena: not in enabled drivers build config 00:02:53.356 net/enetc: not in enabled drivers build config 00:02:53.356 net/enetfec: not in enabled drivers build config 00:02:53.356 net/enic: not in enabled drivers build config 00:02:53.356 net/failsafe: not in enabled drivers build config 00:02:53.356 net/fm10k: not in enabled drivers build config 00:02:53.356 net/gve: not in enabled drivers build config 00:02:53.356 net/hinic: not in enabled drivers build config 00:02:53.356 net/hns3: not in enabled drivers build config 00:02:53.356 net/i40e: not in enabled drivers build config 00:02:53.356 net/iavf: not in enabled drivers build config 00:02:53.356 net/ice: not in enabled drivers build config 00:02:53.356 net/idpf: not in enabled drivers build config 00:02:53.356 net/igc: not in enabled drivers build config 00:02:53.356 net/ionic: not in enabled drivers build config 00:02:53.356 net/ipn3ke: not in enabled drivers build config 00:02:53.356 net/ixgbe: not in enabled drivers build config 00:02:53.356 net/mana: not in enabled drivers build config 00:02:53.356 net/memif: not in enabled drivers build config 00:02:53.356 net/mlx4: not in enabled drivers build config 00:02:53.356 net/mlx5: not in enabled drivers build config 00:02:53.356 net/mvneta: not in enabled drivers build config 00:02:53.356 net/mvpp2: not in enabled drivers build config 00:02:53.356 net/netvsc: not in enabled drivers build config 00:02:53.356 net/nfb: not in enabled drivers build config 00:02:53.356 net/nfp: not in enabled drivers build config 00:02:53.356 net/ngbe: not in enabled drivers build config 00:02:53.356 net/null: not in enabled drivers build config 00:02:53.356 net/octeontx: not in enabled drivers build config 00:02:53.356 net/octeon_ep: not in enabled drivers build config 00:02:53.356 net/pcap: not in enabled drivers build config 00:02:53.356 net/pfe: not in enabled drivers build config 00:02:53.356 net/qede: not in enabled drivers build config 00:02:53.356 net/ring: not in enabled drivers build config 00:02:53.356 net/sfc: not in enabled drivers build config 00:02:53.356 net/softnic: not in enabled drivers build config 00:02:53.356 net/tap: not in enabled drivers build config 00:02:53.356 net/thunderx: not in enabled drivers build config 00:02:53.356 
net/txgbe: not in enabled drivers build config 00:02:53.356 net/vdev_netvsc: not in enabled drivers build config 00:02:53.356 net/vhost: not in enabled drivers build config 00:02:53.356 net/virtio: not in enabled drivers build config 00:02:53.356 net/vmxnet3: not in enabled drivers build config 00:02:53.356 raw/*: missing internal dependency, "rawdev" 00:02:53.356 crypto/armv8: not in enabled drivers build config 00:02:53.356 crypto/bcmfs: not in enabled drivers build config 00:02:53.356 crypto/caam_jr: not in enabled drivers build config 00:02:53.356 crypto/ccp: not in enabled drivers build config 00:02:53.356 crypto/cnxk: not in enabled drivers build config 00:02:53.356 crypto/dpaa_sec: not in enabled drivers build config 00:02:53.356 crypto/dpaa2_sec: not in enabled drivers build config 00:02:53.356 crypto/ipsec_mb: not in enabled drivers build config 00:02:53.356 crypto/mlx5: not in enabled drivers build config 00:02:53.356 crypto/mvsam: not in enabled drivers build config 00:02:53.356 crypto/nitrox: not in enabled drivers build config 00:02:53.356 crypto/null: not in enabled drivers build config 00:02:53.356 crypto/octeontx: not in enabled drivers build config 00:02:53.356 crypto/openssl: not in enabled drivers build config 00:02:53.356 crypto/scheduler: not in enabled drivers build config 00:02:53.356 crypto/uadk: not in enabled drivers build config 00:02:53.356 crypto/virtio: not in enabled drivers build config 00:02:53.356 compress/isal: not in enabled drivers build config 00:02:53.356 compress/mlx5: not in enabled drivers build config 00:02:53.356 compress/octeontx: not in enabled drivers build config 00:02:53.356 compress/zlib: not in enabled drivers build config 00:02:53.356 regex/*: missing internal dependency, "regexdev" 00:02:53.356 ml/*: missing internal dependency, "mldev" 00:02:53.356 vdpa/*: missing internal dependency, "vhost" 00:02:53.356 event/*: missing internal dependency, "eventdev" 00:02:53.356 baseband/*: missing internal dependency, "bbdev" 00:02:53.356 gpu/*: missing internal dependency, "gpudev" 00:02:53.356 00:02:53.356 00:02:53.356 Build targets in project: 81 00:02:53.356 00:02:53.356 DPDK 23.11.0 00:02:53.356 00:02:53.356 User defined options 00:02:53.356 buildtype : debug 00:02:53.356 default_library : static 00:02:53.356 libdir : lib 00:02:53.356 prefix : / 00:02:53.356 c_args : -fPIC -Werror 00:02:53.356 c_link_args : 00:02:53.356 cpu_instruction_set: native 00:02:53.356 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:53.356 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:53.356 enable_docs : false 00:02:53.356 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:53.356 enable_kmods : true 00:02:53.356 tests : false 00:02:53.356 00:02:53.356 Found ninja-1.11.1 at /usr/local/bin/ninja 00:02:53.616 ninja: Entering directory `/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:53.616 [1/231] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:02:53.875 [2/231] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:53.875 [3/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:53.875 [4/231] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:53.875 [5/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:54.134 [6/231] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:54.134 [7/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:54.134 [8/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:54.134 [9/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:54.134 [10/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:54.134 [11/231] Linking static target lib/librte_log.a 00:02:54.134 [12/231] Linking static target lib/librte_kvargs.a 00:02:54.134 [13/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:54.393 [14/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:54.393 [15/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:54.393 [16/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:54.393 [17/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:54.393 [18/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:54.393 [19/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:54.393 [20/231] Linking static target lib/librte_telemetry.a 00:02:54.393 [21/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:54.652 [22/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:54.652 [23/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:54.652 [24/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:54.652 [25/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:54.652 [26/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:54.911 [27/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:54.911 [28/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:54.911 [29/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:54.911 [30/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:54.911 [31/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:54.911 [32/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:54.911 [33/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:54.911 [34/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:54.911 [35/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:55.170 [36/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:55.170 [37/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:55.170 [38/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:55.170 [39/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:55.430 [40/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:55.430 [41/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:55.430 [42/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:55.430 [43/231] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:55.430 [44/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:55.689 [45/231] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:55.689 [46/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:55.689 [47/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:55.689 [48/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:55.689 [49/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:55.689 [50/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:55.689 [51/231] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:55.689 [52/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:02:55.948 [53/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:55.949 [54/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:02:55.949 [55/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:55.949 [56/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:55.949 [57/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:55.949 [58/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:55.949 [59/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:02:55.949 [60/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:02:56.212 [61/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:56.212 [62/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:02:56.212 [63/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:02:56.212 [64/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:02:56.212 [65/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:56.212 [66/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:02:56.212 [67/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:56.212 [68/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:02:56.471 [69/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:02:56.471 [70/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:56.471 [71/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:02:56.730 [72/231] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:56.730 [73/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:56.730 [74/231] Linking static target lib/librte_ring.a 00:02:56.730 [75/231] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:56.988 [76/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:56.988 [77/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:56.988 [78/231] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:56.988 [79/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:56.988 [80/231] Linking static target lib/librte_rcu.a 00:02:56.988 [81/231] Linking static target lib/librte_eal.a 00:02:56.988 [82/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:56.988 [83/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:56.988 [84/231] Linking static target lib/librte_mempool.a 00:02:57.247 
[85/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:57.247 [86/231] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:57.247 [87/231] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:57.247 [88/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:57.247 [89/231] Linking static target lib/librte_mbuf.a 00:02:57.505 [90/231] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:57.505 [91/231] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:57.505 [92/231] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:57.505 [93/231] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:57.763 [94/231] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:57.763 [95/231] Linking static target lib/librte_net.a 00:02:57.763 [96/231] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:57.763 [97/231] Linking static target lib/librte_meter.a 00:02:57.763 [98/231] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.022 [99/231] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.022 [100/231] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.022 [101/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:58.022 [102/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:58.281 [103/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:58.281 [104/231] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.281 [105/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:58.540 [106/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:58.540 [107/231] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.540 [108/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:58.798 [109/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:59.056 [110/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:59.056 [111/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:59.056 [112/231] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.056 [113/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:59.056 [114/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:59.056 [115/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:59.056 [116/231] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:59.056 [117/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:59.056 [118/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:59.056 [119/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:59.056 [120/231] Linking static target lib/librte_pci.a 00:02:59.314 [121/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:59.314 [122/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:59.314 [123/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:59.314 [124/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:59.314 [125/231] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:59.314 [126/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:59.314 [127/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:59.314 [128/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:59.314 [129/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:59.314 [130/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:59.572 [131/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:59.572 [132/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:59.572 [133/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:59.572 [134/231] Linking static target lib/librte_ethdev.a 00:02:59.572 [135/231] Linking static target lib/librte_cmdline.a 00:02:59.572 [136/231] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.572 [137/231] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.572 [138/231] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:59.572 [139/231] Linking target lib/librte_log.so.24.0 00:02:59.830 [140/231] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:59.830 [141/231] Linking static target lib/librte_timer.a 00:02:59.830 [142/231] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:59.830 [143/231] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:59.830 [144/231] Linking target lib/librte_kvargs.so.24.0 00:02:59.830 [145/231] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:59.830 [146/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:59.830 [147/231] Linking static target lib/librte_hash.a 00:02:59.830 [148/231] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.088 [149/231] Linking target lib/librte_telemetry.so.24.0 00:03:00.088 [150/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:00.088 [151/231] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.088 [152/231] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:00.088 [153/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:00.088 [154/231] Linking static target lib/librte_compressdev.a 00:03:00.088 [155/231] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:00.088 [156/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:00.346 [157/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:00.346 [158/231] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.346 [159/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:00.346 [160/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:00.346 [161/231] Linking static target lib/librte_dmadev.a 00:03:00.346 [162/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:00.604 [163/231] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:00.604 [164/231] Linking static target lib/librte_reorder.a 00:03:00.604 [165/231] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:03:00.604 [166/231] Linking static target lib/librte_security.a 00:03:00.604 [167/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:00.862 [168/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:00.862 [169/231] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.862 [170/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:00.862 [171/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:00.862 [172/231] Linking static target lib/librte_cryptodev.a 00:03:00.862 [173/231] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.862 [174/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:03:00.862 [175/231] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:00.862 [176/231] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.862 [177/231] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.862 [178/231] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.862 [179/231] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.120 [180/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:01.120 [181/231] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:01.120 [182/231] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:01.120 [183/231] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.120 [184/231] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.120 [185/231] Linking static target drivers/librte_bus_pci.a 00:03:01.378 [186/231] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:01.378 [187/231] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:01.378 [188/231] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:01.378 [189/231] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.378 [190/231] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.378 [191/231] Linking static target drivers/librte_bus_vdev.a 00:03:01.378 [192/231] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.378 [193/231] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:01.636 [194/231] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.636 [195/231] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.636 [196/231] Linking static target drivers/librte_mempool_ring.a 00:03:01.636 [197/231] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.636 [198/231] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.202 [199/231] Generating kernel/freebsd/contigmem with a custom command 00:03:02.202 machine -> /usr/src/sys/amd64/include 00:03:02.202 x86 -> /usr/src/sys/x86/include 00:03:02.202 awk -f /usr/src/sys/tools/makeobjops.awk 
/usr/src/sys/kern/device_if.m -h 00:03:02.202 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:03:02.202 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:03:02.202 touch opt_global.h 00:03:02.202 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:03:02.202 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:03:02.202 :> export_syms 00:03:02.202 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:03:02.202 objcopy --strip-debug contigmem.ko 00:03:02.460 [200/231] Generating kernel/freebsd/nic_uio with a custom command 00:03:02.460 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:03:02.460 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:03:02.460 :> export_syms 00:03:02.460 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:03:02.460 objcopy --strip-debug nic_uio.ko 00:03:05.749 [201/231] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.048 [202/231] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.048 [203/231] Linking target lib/librte_eal.so.24.0 00:03:09.048 [204/231] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:09.048 [205/231] Linking target lib/librte_timer.so.24.0 00:03:09.048 [206/231] Linking target lib/librte_ring.so.24.0 00:03:09.048 [207/231] Linking target lib/librte_meter.so.24.0 00:03:09.048 [208/231] Linking target lib/librte_pci.so.24.0 00:03:09.048 [209/231] Linking target lib/librte_dmadev.so.24.0 00:03:09.048 [210/231] Linking target drivers/librte_bus_vdev.so.24.0 00:03:09.305 [211/231] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:09.305 [212/231] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:09.305 [213/231] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:09.305 [214/231] Linking target drivers/librte_bus_pci.so.24.0 00:03:09.305 [215/231] Linking target lib/librte_rcu.so.24.0 00:03:09.305 [216/231] Linking target lib/librte_mempool.so.24.0 00:03:09.305 [217/231] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:09.563 [218/231] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:09.563 [219/231] Linking target drivers/librte_mempool_ring.so.24.0 00:03:09.563 [220/231] Linking target lib/librte_mbuf.so.24.0 00:03:09.563 [221/231] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:09.563 [222/231] Linking target lib/librte_compressdev.so.24.0 00:03:09.563 [223/231] Linking target lib/librte_cryptodev.so.24.0 00:03:09.563 [224/231] Linking target lib/librte_reorder.so.24.0 00:03:09.563 [225/231] Linking target lib/librte_net.so.24.0 00:03:09.822 [226/231] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:09.822 [227/231] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:09.822 [228/231] Linking target 
lib/librte_cmdline.so.24.0 00:03:09.822 [229/231] Linking target lib/librte_security.so.24.0 00:03:09.822 [230/231] Linking target lib/librte_hash.so.24.0 00:03:09.822 [231/231] Linking target lib/librte_ethdev.so.24.0 00:03:09.822 INFO: autodetecting backend as ninja 00:03:09.822 INFO: calculating backend command to run: /usr/local/bin/ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:10.755 CC lib/ut/ut.o 00:03:10.755 CC lib/ut_mock/mock.o 00:03:10.755 CC lib/log/log_flags.o 00:03:10.755 CC lib/log/log.o 00:03:10.755 CC lib/log/log_deprecated.o 00:03:10.755 LIB libspdk_ut_mock.a 00:03:10.755 LIB libspdk_log.a 00:03:10.755 LIB libspdk_ut.a 00:03:11.013 CC lib/dma/dma.o 00:03:11.013 CC lib/ioat/ioat.o 00:03:11.013 CC lib/util/base64.o 00:03:11.013 CC lib/util/bit_array.o 00:03:11.013 CC lib/util/cpuset.o 00:03:11.013 CXX lib/trace_parser/trace.o 00:03:11.013 CC lib/util/crc16.o 00:03:11.013 CC lib/util/crc32.o 00:03:11.013 CC lib/util/crc32c.o 00:03:11.013 CC lib/util/crc32_ieee.o 00:03:11.013 CC lib/util/crc64.o 00:03:11.013 CC lib/util/dif.o 00:03:11.013 CC lib/util/fd.o 00:03:11.013 CC lib/util/file.o 00:03:11.013 CC lib/util/hexlify.o 00:03:11.013 CC lib/util/iov.o 00:03:11.013 CC lib/util/math.o 00:03:11.013 CC lib/util/pipe.o 00:03:11.013 LIB libspdk_ioat.a 00:03:11.013 LIB libspdk_dma.a 00:03:11.013 CC lib/util/strerror_tls.o 00:03:11.013 CC lib/util/string.o 00:03:11.271 CC lib/util/uuid.o 00:03:11.271 CC lib/util/fd_group.o 00:03:11.271 CC lib/util/xor.o 00:03:11.271 CC lib/util/zipf.o 00:03:11.271 LIB libspdk_util.a 00:03:11.529 CC lib/json/json_parse.o 00:03:11.529 CC lib/json/json_util.o 00:03:11.529 CC lib/json/json_write.o 00:03:11.529 CC lib/vmd/led.o 00:03:11.529 CC lib/vmd/vmd.o 00:03:11.529 CC lib/idxd/idxd.o 00:03:11.529 CC lib/conf/conf.o 00:03:11.529 CC lib/env_dpdk/env.o 00:03:11.529 CC lib/rdma/common.o 00:03:11.529 CC lib/idxd/idxd_user.o 00:03:11.529 CC lib/rdma/rdma_verbs.o 00:03:11.529 LIB libspdk_conf.a 00:03:11.529 CC lib/env_dpdk/memory.o 00:03:11.529 CC lib/env_dpdk/pci.o 00:03:11.529 CC lib/env_dpdk/init.o 00:03:11.529 LIB libspdk_vmd.a 00:03:11.529 LIB libspdk_json.a 00:03:11.529 CC lib/env_dpdk/threads.o 00:03:11.529 CC lib/env_dpdk/pci_ioat.o 00:03:11.529 LIB libspdk_idxd.a 00:03:11.529 LIB libspdk_rdma.a 00:03:11.529 CC lib/env_dpdk/pci_virtio.o 00:03:11.529 CC lib/env_dpdk/pci_vmd.o 00:03:11.529 CC lib/env_dpdk/pci_idxd.o 00:03:11.529 CC lib/env_dpdk/pci_event.o 00:03:11.529 CC lib/jsonrpc/jsonrpc_server.o 00:03:11.529 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:11.787 CC lib/jsonrpc/jsonrpc_client.o 00:03:11.787 CC lib/env_dpdk/sigbus_handler.o 00:03:11.787 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:11.787 CC lib/env_dpdk/pci_dpdk.o 00:03:11.787 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:11.787 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:11.787 LIB libspdk_jsonrpc.a 00:03:11.787 CC lib/rpc/rpc.o 00:03:12.045 LIB libspdk_trace_parser.a 00:03:12.045 LIB libspdk_rpc.a 00:03:12.045 LIB libspdk_env_dpdk.a 00:03:12.045 CC lib/trace/trace.o 00:03:12.045 CC lib/trace/trace_flags.o 00:03:12.045 CC lib/trace/trace_rpc.o 00:03:12.045 CC lib/notify/notify.o 00:03:12.045 CC lib/sock/sock.o 00:03:12.045 CC lib/notify/notify_rpc.o 00:03:12.045 CC lib/sock/sock_rpc.o 00:03:12.045 LIB libspdk_notify.a 00:03:12.045 LIB libspdk_trace.a 00:03:12.303 LIB libspdk_sock.a 00:03:12.303 CC lib/thread/iobuf.o 00:03:12.303 CC lib/thread/thread.o 00:03:12.304 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:12.304 CC lib/nvme/nvme_ctrlr.o 00:03:12.304 CC lib/nvme/nvme_fabric.o 00:03:12.304 CC 
lib/nvme/nvme_ns_cmd.o 00:03:12.304 CC lib/nvme/nvme_ns.o 00:03:12.304 CC lib/nvme/nvme_pcie_common.o 00:03:12.304 CC lib/nvme/nvme_pcie.o 00:03:12.304 CC lib/nvme/nvme_qpair.o 00:03:12.304 CC lib/nvme/nvme.o 00:03:12.562 LIB libspdk_thread.a 00:03:12.562 CC lib/nvme/nvme_quirks.o 00:03:12.821 CC lib/nvme/nvme_transport.o 00:03:12.821 CC lib/nvme/nvme_discovery.o 00:03:12.821 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:12.821 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:12.821 CC lib/accel/accel.o 00:03:12.821 CC lib/blob/blobstore.o 00:03:12.821 CC lib/init/json_config.o 00:03:12.821 CC lib/init/subsystem.o 00:03:12.821 CC lib/blob/request.o 00:03:12.821 CC lib/init/subsystem_rpc.o 00:03:12.821 CC lib/blob/zeroes.o 00:03:12.821 CC lib/init/rpc.o 00:03:12.821 CC lib/blob/blob_bs_dev.o 00:03:13.079 CC lib/accel/accel_rpc.o 00:03:13.079 CC lib/accel/accel_sw.o 00:03:13.079 CC lib/nvme/nvme_tcp.o 00:03:13.079 LIB libspdk_init.a 00:03:13.079 CC lib/nvme/nvme_opal.o 00:03:13.079 CC lib/nvme/nvme_io_msg.o 00:03:13.079 CC lib/nvme/nvme_poll_group.o 00:03:13.079 CC lib/nvme/nvme_zns.o 00:03:13.079 LIB libspdk_accel.a 00:03:13.079 CC lib/nvme/nvme_cuse.o 00:03:13.079 CC lib/event/app.o 00:03:13.079 CC lib/event/reactor.o 00:03:13.079 CC lib/event/log_rpc.o 00:03:13.079 CC lib/event/app_rpc.o 00:03:13.079 CC lib/nvme/nvme_rdma.o 00:03:13.079 CC lib/event/scheduler_static.o 00:03:13.338 LIB libspdk_blob.a 00:03:13.338 LIB libspdk_event.a 00:03:13.338 CC lib/bdev/bdev.o 00:03:13.338 CC lib/bdev/bdev_rpc.o 00:03:13.338 CC lib/bdev/bdev_zone.o 00:03:13.338 CC lib/blobfs/blobfs.o 00:03:13.338 CC lib/lvol/lvol.o 00:03:13.338 CC lib/blobfs/tree.o 00:03:13.338 CC lib/bdev/part.o 00:03:13.338 CC lib/bdev/scsi_nvme.o 00:03:13.597 LIB libspdk_blobfs.a 00:03:13.597 LIB libspdk_lvol.a 00:03:13.597 LIB libspdk_nvme.a 00:03:13.855 LIB libspdk_bdev.a 00:03:13.855 CC lib/scsi/lun.o 00:03:13.855 CC lib/scsi/dev.o 00:03:13.855 CC lib/scsi/scsi.o 00:03:13.855 CC lib/scsi/port.o 00:03:13.855 CC lib/scsi/scsi_bdev.o 00:03:13.855 CC lib/scsi/scsi_pr.o 00:03:13.855 CC lib/scsi/scsi_rpc.o 00:03:13.855 CC lib/scsi/task.o 00:03:13.855 CC lib/nvmf/ctrlr_discovery.o 00:03:13.855 CC lib/nvmf/ctrlr.o 00:03:13.855 CC lib/nvmf/ctrlr_bdev.o 00:03:13.855 CC lib/nvmf/subsystem.o 00:03:13.855 CC lib/nvmf/nvmf.o 00:03:13.855 CC lib/nvmf/nvmf_rpc.o 00:03:13.855 CC lib/nvmf/transport.o 00:03:13.855 CC lib/nvmf/tcp.o 00:03:14.114 CC lib/nvmf/rdma.o 00:03:14.115 LIB libspdk_scsi.a 00:03:14.115 CC lib/iscsi/conn.o 00:03:14.115 CC lib/iscsi/init_grp.o 00:03:14.115 CC lib/iscsi/md5.o 00:03:14.115 CC lib/iscsi/iscsi.o 00:03:14.115 CC lib/iscsi/param.o 00:03:14.115 CC lib/iscsi/portal_grp.o 00:03:14.115 CC lib/iscsi/tgt_node.o 00:03:14.115 CC lib/iscsi/iscsi_subsystem.o 00:03:14.115 CC lib/iscsi/iscsi_rpc.o 00:03:14.115 CC lib/iscsi/task.o 00:03:14.378 LIB libspdk_nvmf.a 00:03:14.378 LIB libspdk_iscsi.a 00:03:14.657 CC module/env_dpdk/env_dpdk_rpc.o 00:03:14.657 CC module/accel/ioat/accel_ioat.o 00:03:14.657 CC module/accel/dsa/accel_dsa.o 00:03:14.657 CC module/blob/bdev/blob_bdev.o 00:03:14.657 CC module/accel/ioat/accel_ioat_rpc.o 00:03:14.657 CC module/accel/iaa/accel_iaa.o 00:03:14.657 CC module/accel/dsa/accel_dsa_rpc.o 00:03:14.657 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:14.657 CC module/sock/posix/posix.o 00:03:14.657 CC module/accel/error/accel_error.o 00:03:14.657 LIB libspdk_env_dpdk_rpc.a 00:03:14.657 CC module/accel/iaa/accel_iaa_rpc.o 00:03:14.657 CC module/accel/error/accel_error_rpc.o 00:03:14.657 LIB 
libspdk_scheduler_dynamic.a 00:03:14.657 LIB libspdk_accel_ioat.a 00:03:14.657 LIB libspdk_accel_dsa.a 00:03:14.657 LIB libspdk_blob_bdev.a 00:03:14.657 LIB libspdk_accel_iaa.a 00:03:14.657 LIB libspdk_accel_error.a 00:03:14.916 CC module/bdev/gpt/gpt.o 00:03:14.916 CC module/blobfs/bdev/blobfs_bdev.o 00:03:14.916 CC module/bdev/delay/vbdev_delay.o 00:03:14.916 CC module/bdev/passthru/vbdev_passthru.o 00:03:14.916 LIB libspdk_sock_posix.a 00:03:14.916 CC module/bdev/error/vbdev_error.o 00:03:14.916 CC module/bdev/nvme/bdev_nvme.o 00:03:14.916 CC module/bdev/null/bdev_null.o 00:03:14.916 CC module/bdev/lvol/vbdev_lvol.o 00:03:14.916 CC module/bdev/malloc/bdev_malloc.o 00:03:14.916 CC module/bdev/error/vbdev_error_rpc.o 00:03:14.916 CC module/bdev/gpt/vbdev_gpt.o 00:03:14.916 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:14.916 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:14.916 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:14.916 LIB libspdk_bdev_error.a 00:03:14.916 CC module/bdev/null/bdev_null_rpc.o 00:03:14.916 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:14.916 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:14.916 CC module/bdev/raid/bdev_raid.o 00:03:14.916 LIB libspdk_blobfs_bdev.a 00:03:14.916 LIB libspdk_bdev_gpt.a 00:03:15.174 LIB libspdk_bdev_passthru.a 00:03:15.174 CC module/bdev/nvme/nvme_rpc.o 00:03:15.174 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:15.174 LIB libspdk_bdev_delay.a 00:03:15.174 CC module/bdev/nvme/bdev_mdns_client.o 00:03:15.174 LIB libspdk_bdev_null.a 00:03:15.174 CC module/bdev/raid/bdev_raid_rpc.o 00:03:15.174 CC module/bdev/split/vbdev_split.o 00:03:15.174 LIB libspdk_bdev_malloc.a 00:03:15.174 CC module/bdev/raid/bdev_raid_sb.o 00:03:15.174 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:15.174 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:15.174 CC module/bdev/raid/raid0.o 00:03:15.174 LIB libspdk_bdev_lvol.a 00:03:15.174 CC module/bdev/aio/bdev_aio.o 00:03:15.174 CC module/bdev/aio/bdev_aio_rpc.o 00:03:15.174 CC module/bdev/split/vbdev_split_rpc.o 00:03:15.174 CC module/bdev/raid/raid1.o 00:03:15.174 CC module/bdev/raid/concat.o 00:03:15.174 LIB libspdk_bdev_zone_block.a 00:03:15.432 LIB libspdk_bdev_nvme.a 00:03:15.432 LIB libspdk_bdev_split.a 00:03:15.432 LIB libspdk_bdev_raid.a 00:03:15.432 LIB libspdk_bdev_aio.a 00:03:15.690 CC module/event/subsystems/vmd/vmd.o 00:03:15.690 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:15.690 CC module/event/subsystems/sock/sock.o 00:03:15.690 CC module/event/subsystems/scheduler/scheduler.o 00:03:15.690 CC module/event/subsystems/iobuf/iobuf.o 00:03:15.690 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:15.690 LIB libspdk_event_sock.a 00:03:15.690 LIB libspdk_event_vmd.a 00:03:15.690 LIB libspdk_event_scheduler.a 00:03:15.690 LIB libspdk_event_iobuf.a 00:03:15.690 CC module/event/subsystems/accel/accel.o 00:03:15.948 LIB libspdk_event_accel.a 00:03:15.948 CC module/event/subsystems/bdev/bdev.o 00:03:16.207 LIB libspdk_event_bdev.a 00:03:16.207 CC module/event/subsystems/scsi/scsi.o 00:03:16.207 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:16.207 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:16.207 LIB libspdk_event_scsi.a 00:03:16.207 LIB libspdk_event_nvmf.a 00:03:16.467 CC module/event/subsystems/iscsi/iscsi.o 00:03:16.467 LIB libspdk_event_iscsi.a 00:03:16.725 CC app/trace_record/trace_record.o 00:03:16.725 CC app/spdk_nvme_identify/identify.o 00:03:16.725 CXX app/trace/trace.o 00:03:16.725 CC app/spdk_lspci/spdk_lspci.o 00:03:16.725 CC app/spdk_nvme_perf/perf.o 00:03:16.725 CC 
app/spdk_tgt/spdk_tgt.o 00:03:16.725 CC app/nvmf_tgt/nvmf_main.o 00:03:16.725 CC app/iscsi_tgt/iscsi_tgt.o 00:03:16.725 CC examples/accel/perf/accel_perf.o 00:03:16.725 LINK spdk_trace_record 00:03:16.725 LINK spdk_lspci 00:03:16.725 CC test/accel/dif/dif.o 00:03:16.725 LINK spdk_tgt 00:03:16.983 LINK nvmf_tgt 00:03:16.983 LINK iscsi_tgt 00:03:16.983 LINK spdk_nvme_identify 00:03:16.983 LINK accel_perf 00:03:16.983 LINK spdk_nvme_perf 00:03:16.983 LINK dif 00:03:16.983 CC examples/bdev/hello_world/hello_bdev.o 00:03:16.983 CC test/app/bdev_svc/bdev_svc.o 00:03:16.983 CC examples/bdev/bdevperf/bdevperf.o 00:03:16.983 CC app/spdk_nvme_discover/discovery_aer.o 00:03:16.983 LINK bdev_svc 00:03:16.983 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:17.241 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:17.241 LINK hello_bdev 00:03:17.241 CC examples/blob/hello_world/hello_blob.o 00:03:17.241 CC test/bdev/bdevio/bdevio.o 00:03:17.241 LINK spdk_nvme_discover 00:03:17.242 LINK spdk_trace 00:03:17.242 CC examples/ioat/perf/perf.o 00:03:17.242 LINK nvme_fuzz 00:03:17.242 LINK hello_blob 00:03:17.242 CC app/spdk_top/spdk_top.o 00:03:17.242 LINK ioat_perf 00:03:17.242 LINK bdevperf 00:03:17.242 CC test/app/histogram_perf/histogram_perf.o 00:03:17.242 CC examples/ioat/verify/verify.o 00:03:17.242 LINK bdevio 00:03:17.242 CC examples/nvme/hello_world/hello_world.o 00:03:17.242 LINK histogram_perf 00:03:17.501 LINK verify 00:03:17.501 CC examples/blob/cli/blobcli.o 00:03:17.501 LINK hello_world 00:03:17.501 CC examples/nvme/reconnect/reconnect.o 00:03:17.501 CC test/blobfs/mkfs/mkfs.o 00:03:17.501 CC app/fio/nvme/fio_plugin.o 00:03:17.501 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:17.501 LINK spdk_top 00:03:17.501 CC examples/sock/hello_world/hello_sock.o 00:03:17.501 CC app/fio/bdev/fio_plugin.o 00:03:17.501 LINK iscsi_fuzz 00:03:17.501 TEST_HEADER include/spdk/accel.h 00:03:17.501 TEST_HEADER include/spdk/accel_module.h 00:03:17.501 TEST_HEADER include/spdk/assert.h 00:03:17.501 TEST_HEADER include/spdk/barrier.h 00:03:17.501 LINK reconnect 00:03:17.501 TEST_HEADER include/spdk/base64.h 00:03:17.501 LINK mkfs 00:03:17.501 TEST_HEADER include/spdk/bdev.h 00:03:17.501 TEST_HEADER include/spdk/bdev_module.h 00:03:17.501 TEST_HEADER include/spdk/bdev_zone.h 00:03:17.501 TEST_HEADER include/spdk/bit_array.h 00:03:17.501 TEST_HEADER include/spdk/bit_pool.h 00:03:17.501 TEST_HEADER include/spdk/blob.h 00:03:17.501 TEST_HEADER include/spdk/blob_bdev.h 00:03:17.501 TEST_HEADER include/spdk/blobfs.h 00:03:17.501 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:17.501 TEST_HEADER include/spdk/conf.h 00:03:17.501 TEST_HEADER include/spdk/config.h 00:03:17.501 TEST_HEADER include/spdk/cpuset.h 00:03:17.501 TEST_HEADER include/spdk/crc16.h 00:03:17.501 TEST_HEADER include/spdk/crc32.h 00:03:17.501 LINK blobcli 00:03:17.501 TEST_HEADER include/spdk/crc64.h 00:03:17.501 TEST_HEADER include/spdk/dif.h 00:03:17.501 TEST_HEADER include/spdk/dma.h 00:03:17.501 TEST_HEADER include/spdk/endian.h 00:03:17.501 TEST_HEADER include/spdk/env.h 00:03:17.501 TEST_HEADER include/spdk/env_dpdk.h 00:03:17.501 TEST_HEADER include/spdk/event.h 00:03:17.501 TEST_HEADER include/spdk/fd.h 00:03:17.501 TEST_HEADER include/spdk/fd_group.h 00:03:17.501 TEST_HEADER include/spdk/file.h 00:03:17.501 TEST_HEADER include/spdk/ftl.h 00:03:17.501 TEST_HEADER include/spdk/gpt_spec.h 00:03:17.501 TEST_HEADER include/spdk/hexlify.h 00:03:17.501 LINK hello_sock 00:03:17.501 TEST_HEADER include/spdk/histogram_data.h 00:03:17.501 TEST_HEADER 
include/spdk/idxd.h 00:03:17.501 TEST_HEADER include/spdk/idxd_spec.h 00:03:17.501 TEST_HEADER include/spdk/init.h 00:03:17.501 TEST_HEADER include/spdk/ioat.h 00:03:17.501 TEST_HEADER include/spdk/ioat_spec.h 00:03:17.501 TEST_HEADER include/spdk/iscsi_spec.h 00:03:17.501 CC test/app/jsoncat/jsoncat.o 00:03:17.501 TEST_HEADER include/spdk/json.h 00:03:17.501 TEST_HEADER include/spdk/jsonrpc.h 00:03:17.501 TEST_HEADER include/spdk/likely.h 00:03:17.501 TEST_HEADER include/spdk/log.h 00:03:17.760 TEST_HEADER include/spdk/lvol.h 00:03:17.760 TEST_HEADER include/spdk/memory.h 00:03:17.760 TEST_HEADER include/spdk/mmio.h 00:03:17.760 TEST_HEADER include/spdk/nbd.h 00:03:17.760 TEST_HEADER include/spdk/notify.h 00:03:17.760 TEST_HEADER include/spdk/nvme.h 00:03:17.760 TEST_HEADER include/spdk/nvme_intel.h 00:03:17.760 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:17.760 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:17.760 TEST_HEADER include/spdk/nvme_spec.h 00:03:17.760 fio_plugin.c:1491:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:17.760 TEST_HEADER include/spdk/nvme_zns.h 00:03:17.760 struct spdk_nvme_fdp_ruhs ruhs; 00:03:17.760 ^ 00:03:17.760 TEST_HEADER include/spdk/nvmf.h 00:03:17.760 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:17.760 LINK nvme_manage 00:03:17.760 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:17.760 TEST_HEADER include/spdk/nvmf_spec.h 00:03:17.760 TEST_HEADER include/spdk/nvmf_transport.h 00:03:17.760 TEST_HEADER include/spdk/opal.h 00:03:17.760 TEST_HEADER include/spdk/opal_spec.h 00:03:17.760 TEST_HEADER include/spdk/pci_ids.h 00:03:17.760 TEST_HEADER include/spdk/pipe.h 00:03:17.760 TEST_HEADER include/spdk/queue.h 00:03:17.760 TEST_HEADER include/spdk/reduce.h 00:03:17.760 TEST_HEADER include/spdk/rpc.h 00:03:17.760 TEST_HEADER include/spdk/scheduler.h 00:03:17.760 TEST_HEADER include/spdk/scsi.h 00:03:17.760 TEST_HEADER include/spdk/scsi_spec.h 00:03:17.760 TEST_HEADER include/spdk/sock.h 00:03:17.760 TEST_HEADER include/spdk/stdinc.h 00:03:17.760 TEST_HEADER include/spdk/string.h 00:03:17.760 TEST_HEADER include/spdk/thread.h 00:03:17.760 TEST_HEADER include/spdk/trace.h 00:03:17.760 TEST_HEADER include/spdk/trace_parser.h 00:03:17.760 CC test/app/stub/stub.o 00:03:17.760 TEST_HEADER include/spdk/tree.h 00:03:17.760 TEST_HEADER include/spdk/ublk.h 00:03:17.760 TEST_HEADER include/spdk/util.h 00:03:17.760 TEST_HEADER include/spdk/uuid.h 00:03:17.760 TEST_HEADER include/spdk/version.h 00:03:17.760 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:17.760 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:17.760 LINK jsoncat 00:03:17.760 TEST_HEADER include/spdk/vhost.h 00:03:17.760 TEST_HEADER include/spdk/vmd.h 00:03:17.760 TEST_HEADER include/spdk/xor.h 00:03:17.760 TEST_HEADER include/spdk/zipf.h 00:03:17.760 CXX test/cpp_headers/accel.o 00:03:17.760 1 warning generated. 00:03:17.760 LINK spdk_nvme 00:03:17.760 CC examples/nvme/arbitration/arbitration.o 00:03:17.760 CC examples/nvme/hotplug/hotplug.o 00:03:17.760 LINK spdk_bdev 00:03:17.760 CC test/dma/test_dma/test_dma.o 00:03:17.760 LINK stub 00:03:17.760 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.760 CC examples/vmd/lsvmd/lsvmd.o 00:03:17.760 LINK hotplug 00:03:17.760 CXX test/cpp_headers/accel_module.o 00:03:17.760 CC test/event/event_perf/event_perf.o 00:03:17.760 LINK arbitration 00:03:17.760 gmake[2]: Nothing to be done for 'all'. 
00:03:17.760 CXX test/cpp_headers/assert.o 00:03:18.019 LINK lsvmd 00:03:18.019 CC test/env/vtophys/vtophys.o 00:03:18.019 LINK test_dma 00:03:18.019 LINK event_perf 00:03:18.019 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:18.019 LINK vtophys 00:03:18.019 CC examples/nvmf/nvmf/nvmf.o 00:03:18.019 CXX test/cpp_headers/barrier.o 00:03:18.019 CC examples/util/zipf/zipf.o 00:03:18.019 CC examples/vmd/led/led.o 00:03:18.019 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:18.019 CC test/event/reactor/reactor.o 00:03:18.019 CXX test/cpp_headers/base64.o 00:03:18.019 LINK cmb_copy 00:03:18.019 LINK zipf 00:03:18.019 LINK led 00:03:18.019 LINK reactor 00:03:18.278 LINK nvmf 00:03:18.278 CC examples/thread/thread/thread_ex.o 00:03:18.278 LINK env_dpdk_post_init 00:03:18.278 LINK mem_callbacks 00:03:18.278 CC test/event/reactor_perf/reactor_perf.o 00:03:18.278 CXX test/cpp_headers/bdev.o 00:03:18.278 CC examples/nvme/abort/abort.o 00:03:18.278 CC test/env/memory/memory_ut.o 00:03:18.278 CXX test/cpp_headers/bdev_module.o 00:03:18.278 CC test/env/pci/pci_ut.o 00:03:18.278 LINK reactor_perf 00:03:18.278 CXX test/cpp_headers/bdev_zone.o 00:03:18.278 LINK thread 00:03:18.278 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:18.278 CC test/nvme/aer/aer.o 00:03:18.278 LINK abort 00:03:18.278 LINK pmr_persistence 00:03:18.278 CC test/nvme/reset/reset.o 00:03:18.278 LINK pci_ut 00:03:18.537 CC test/rpc_client/rpc_client_test.o 00:03:18.537 CXX test/cpp_headers/bit_array.o 00:03:18.537 LINK aer 00:03:18.537 CC test/thread/poller_perf/poller_perf.o 00:03:18.537 CC test/thread/lock/spdk_lock.o 00:03:18.537 CC examples/idxd/perf/perf.o 00:03:18.537 LINK reset 00:03:18.537 CC test/nvme/sgl/sgl.o 00:03:18.537 LINK rpc_client_test 00:03:18.537 CXX test/cpp_headers/bit_pool.o 00:03:18.537 LINK poller_perf 00:03:18.537 CXX test/cpp_headers/blob.o 00:03:18.537 CC test/nvme/e2edp/nvme_dp.o 00:03:18.537 LINK idxd_perf 00:03:18.537 LINK sgl 00:03:18.537 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:18.537 CC test/nvme/overhead/overhead.o 00:03:18.795 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:18.795 LINK histogram_ut 00:03:18.795 CC test/nvme/err_injection/err_injection.o 00:03:18.795 LINK memory_ut 00:03:18.795 LINK nvme_dp 00:03:18.795 CXX test/cpp_headers/blob_bdev.o 00:03:18.795 LINK spdk_lock 00:03:18.795 CXX test/cpp_headers/blobfs.o 00:03:18.795 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:18.795 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:18.795 LINK overhead 00:03:18.795 LINK err_injection 00:03:18.795 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:18.795 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:18.795 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:18.795 CXX test/cpp_headers/blobfs_bdev.o 00:03:18.795 LINK tree_ut 00:03:18.795 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:18.795 CC test/nvme/startup/startup.o 00:03:19.054 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:19.054 LINK blob_bdev_ut 00:03:19.054 LINK startup 00:03:19.054 CC test/unit/lib/event/app.c/app_ut.o 00:03:19.054 CXX test/cpp_headers/conf.o 00:03:19.054 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:19.054 LINK dma_ut 00:03:19.054 CC test/nvme/reserve/reserve.o 00:03:19.054 CXX test/cpp_headers/config.o 00:03:19.054 CXX test/cpp_headers/cpuset.o 00:03:19.054 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:19.312 LINK accel_ut 00:03:19.312 LINK app_ut 00:03:19.312 LINK blobfs_async_ut 00:03:19.312 LINK reserve 00:03:19.312 CC 
test/unit/lib/event/reactor.c/reactor_ut.o 00:03:19.312 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:19.312 LINK blobfs_bdev_ut 00:03:19.312 CXX test/cpp_headers/crc16.o 00:03:19.312 LINK blobfs_sync_ut 00:03:19.312 CC test/nvme/simple_copy/simple_copy.o 00:03:19.312 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:19.312 LINK scsi_nvme_ut 00:03:19.312 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:19.312 CC test/nvme/connect_stress/connect_stress.o 00:03:19.312 CXX test/cpp_headers/crc32.o 00:03:19.312 LINK simple_copy 00:03:19.571 LINK ioat_ut 00:03:19.571 LINK reactor_ut 00:03:19.571 LINK connect_stress 00:03:19.571 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:19.571 CXX test/cpp_headers/crc64.o 00:03:19.571 CXX test/cpp_headers/dif.o 00:03:19.571 LINK part_ut 00:03:19.571 CC test/nvme/boot_partition/boot_partition.o 00:03:19.571 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:19.571 CXX test/cpp_headers/dma.o 00:03:19.571 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:19.571 LINK boot_partition 00:03:19.571 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:19.571 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:19.571 LINK conn_ut 00:03:19.829 LINK jsonrpc_server_ut 00:03:19.829 LINK bdev_ut 00:03:19.829 CC test/nvme/compliance/nvme_compliance.o 00:03:19.829 CXX test/cpp_headers/endian.o 00:03:19.829 LINK gpt_ut 00:03:19.829 CC test/unit/lib/log/log.c/log_ut.o 00:03:19.829 LINK init_grp_ut 00:03:19.829 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:19.829 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:19.829 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:19.829 CC test/nvme/fused_ordering/fused_ordering.o 00:03:19.829 CXX test/cpp_headers/env.o 00:03:19.829 LINK log_ut 00:03:19.829 LINK nvme_compliance 00:03:20.086 LINK fused_ordering 00:03:20.086 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:20.086 LINK json_parse_ut 00:03:20.086 LINK param_ut 00:03:20.086 CXX test/cpp_headers/env_dpdk.o 00:03:20.086 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:20.086 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:20.086 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:20.086 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:20.086 CXX test/cpp_headers/event.o 00:03:20.343 LINK blob_ut 00:03:20.343 LINK doorbell_aers 00:03:20.343 LINK portal_grp_ut 00:03:20.343 LINK iscsi_ut 00:03:20.343 LINK vbdev_lvol_ut 00:03:20.343 LINK notify_ut 00:03:20.343 LINK json_util_ut 00:03:20.343 CXX test/cpp_headers/fd.o 00:03:20.343 CC test/nvme/fdp/fdp.o 00:03:20.343 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:20.343 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:20.343 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:20.343 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:20.343 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:20.343 LINK lvol_ut 00:03:20.343 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:20.343 CXX test/cpp_headers/fd_group.o 00:03:20.343 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:20.343 LINK fdp 00:03:20.600 LINK tgt_node_ut 00:03:20.600 CXX test/cpp_headers/file.o 00:03:20.600 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:20.600 LINK json_write_ut 00:03:20.600 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:20.600 CXX test/cpp_headers/ftl.o 00:03:20.600 LINK nvme_ut 00:03:20.858 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:20.858 LINK dev_ut 00:03:20.858 CXX test/cpp_headers/gpt_spec.o 00:03:20.858 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:20.858 LINK 
nvme_ctrlr_ocssd_cmd_ut 00:03:20.858 LINK nvme_ctrlr_cmd_ut 00:03:20.858 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:20.858 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:20.858 CXX test/cpp_headers/hexlify.o 00:03:20.858 LINK lun_ut 00:03:21.115 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:21.115 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:21.115 CXX test/cpp_headers/histogram_data.o 00:03:21.115 LINK bdev_ut 00:03:21.115 LINK scsi_ut 00:03:21.115 LINK nvme_ns_ut 00:03:21.115 LINK tcp_ut 00:03:21.115 LINK nvme_ctrlr_ut 00:03:21.115 CXX test/cpp_headers/idxd.o 00:03:21.115 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:21.115 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:21.115 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:21.115 LINK posix_ut 00:03:21.374 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:21.374 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:21.374 LINK sock_ut 00:03:21.374 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:21.374 CXX test/cpp_headers/idxd_spec.o 00:03:21.374 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:21.374 LINK bdev_raid_ut 00:03:21.374 LINK bdev_zone_ut 00:03:21.374 CXX test/cpp_headers/init.o 00:03:21.374 LINK scsi_bdev_ut 00:03:21.374 LINK ctrlr_ut 00:03:21.636 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:21.636 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:21.636 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:21.636 CXX test/cpp_headers/ioat.o 00:03:21.636 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:21.636 LINK ctrlr_discovery_ut 00:03:21.636 LINK base64_ut 00:03:21.636 LINK bdev_raid_sb_ut 00:03:21.636 CXX test/cpp_headers/ioat_spec.o 00:03:21.636 LINK subsystem_ut 00:03:21.894 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:21.894 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:21.894 LINK scsi_pr_ut 00:03:21.894 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:21.894 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:21.894 LINK thread_ut 00:03:21.894 CXX test/cpp_headers/iscsi_spec.o 00:03:21.894 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:21.894 LINK pci_event_ut 00:03:21.894 LINK nvme_ns_ocssd_cmd_ut 00:03:21.894 LINK bit_array_ut 00:03:21.894 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:21.894 LINK nvme_ns_cmd_ut 00:03:21.894 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:21.894 LINK cpuset_ut 00:03:21.894 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:21.894 CXX test/cpp_headers/json.o 00:03:21.894 LINK concat_ut 00:03:21.894 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:21.894 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:22.152 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:22.152 LINK iobuf_ut 00:03:22.152 CXX test/cpp_headers/jsonrpc.o 00:03:22.152 LINK ctrlr_bdev_ut 00:03:22.152 LINK nvme_pcie_ut 00:03:22.152 LINK raid1_ut 00:03:22.152 CXX test/cpp_headers/likely.o 00:03:22.152 LINK crc16_ut 00:03:22.152 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:22.410 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:22.410 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:22.410 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:22.410 LINK crc32_ieee_ut 00:03:22.410 LINK crc32c_ut 00:03:22.410 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:22.410 LINK nvme_poll_group_ut 00:03:22.410 CXX test/cpp_headers/log.o 00:03:22.410 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:22.410 CC 
test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:22.410 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:22.410 LINK vbdev_zone_block_ut 00:03:22.410 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:22.410 CXX test/cpp_headers/lvol.o 00:03:22.410 LINK nvme_qpair_ut 00:03:22.410 LINK crc64_ut 00:03:22.667 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:22.667 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:22.667 LINK nvmf_ut 00:03:22.667 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:22.667 CXX test/cpp_headers/memory.o 00:03:22.667 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:22.667 CXX test/cpp_headers/mmio.o 00:03:22.667 LINK nvme_quirks_ut 00:03:22.667 LINK subsystem_ut 00:03:22.667 CXX test/cpp_headers/nbd.o 00:03:22.925 LINK rpc_ut 00:03:22.925 LINK dif_ut 00:03:22.925 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:22.925 CXX test/cpp_headers/notify.o 00:03:22.925 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:22.925 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:22.925 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:22.925 LINK transport_ut 00:03:22.925 LINK rdma_ut 00:03:22.925 CXX test/cpp_headers/nvme.o 00:03:22.925 LINK idxd_user_ut 00:03:22.925 LINK nvme_transport_ut 00:03:22.925 LINK rpc_ut 00:03:22.925 LINK iov_ut 00:03:22.925 CXX test/cpp_headers/nvme_intel.o 00:03:22.925 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:23.183 CXX test/cpp_headers/nvme_ocssd.o 00:03:23.183 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:23.183 CC test/unit/lib/util/math.c/math_ut.o 00:03:23.183 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:23.183 LINK idxd_ut 00:03:23.183 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:23.183 LINK nvme_tcp_ut 00:03:23.183 LINK math_ut 00:03:23.183 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:23.183 CXX test/cpp_headers/nvme_spec.o 00:03:23.183 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:23.183 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:23.183 LINK bdev_nvme_ut 00:03:23.183 CC test/unit/lib/util/string.c/string_ut.o 00:03:23.183 LINK common_ut 00:03:23.183 CXX test/cpp_headers/nvme_zns.o 00:03:23.441 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:23.441 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:23.441 CXX test/cpp_headers/nvmf.o 00:03:23.441 LINK pipe_ut 00:03:23.442 LINK string_ut 00:03:23.442 CXX test/cpp_headers/nvmf_cmd.o 00:03:23.442 LINK xor_ut 00:03:23.442 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:23.442 CXX test/cpp_headers/nvmf_spec.o 00:03:23.442 LINK nvme_opal_ut 00:03:23.442 CXX test/cpp_headers/nvmf_transport.o 00:03:23.442 CXX test/cpp_headers/opal.o 00:03:23.442 LINK nvme_io_msg_ut 00:03:23.442 CXX test/cpp_headers/opal_spec.o 00:03:23.442 CXX test/cpp_headers/pci_ids.o 00:03:23.442 CXX test/cpp_headers/pipe.o 00:03:23.699 CXX test/cpp_headers/queue.o 00:03:23.699 CXX test/cpp_headers/reduce.o 00:03:23.699 CXX test/cpp_headers/rpc.o 00:03:23.699 LINK nvme_pcie_common_ut 00:03:23.699 LINK nvme_fabric_ut 00:03:23.699 CXX test/cpp_headers/scheduler.o 00:03:23.699 CXX test/cpp_headers/scsi.o 00:03:23.699 CXX test/cpp_headers/scsi_spec.o 00:03:23.699 CXX test/cpp_headers/sock.o 00:03:23.699 CXX test/cpp_headers/stdinc.o 00:03:23.699 CXX test/cpp_headers/string.o 00:03:23.699 CXX test/cpp_headers/thread.o 00:03:23.699 CXX test/cpp_headers/trace.o 00:03:23.699 CXX test/cpp_headers/trace_parser.o 00:03:23.699 CXX test/cpp_headers/tree.o 00:03:23.699 CXX test/cpp_headers/ublk.o 00:03:23.699 CXX test/cpp_headers/util.o 00:03:23.699 CXX test/cpp_headers/uuid.o 00:03:23.699 CXX 
test/cpp_headers/version.o 00:03:23.699 CXX test/cpp_headers/vfio_user_pci.o 00:03:23.699 CXX test/cpp_headers/vfio_user_spec.o 00:03:23.699 CXX test/cpp_headers/vhost.o 00:03:23.699 CXX test/cpp_headers/vmd.o 00:03:23.957 CXX test/cpp_headers/xor.o 00:03:23.957 CXX test/cpp_headers/zipf.o 00:03:23.957 LINK nvme_rdma_ut 00:03:23.957 00:03:23.957 real 1m3.353s 00:03:23.957 user 4m11.798s 00:03:23.957 sys 0m55.651s 00:03:23.957 ************************************ 00:03:23.957 END TEST unittest_build 00:03:23.957 19:04:01 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:23.957 19:04:01 -- common/autotest_common.sh@10 -- $ set +x 00:03:23.957 ************************************ 00:03:24.215 19:04:01 -- spdk/autotest.sh@25 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:24.215 19:04:01 -- nvmf/common.sh@7 -- # uname -s 00:03:24.215 19:04:01 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:24.215 19:04:01 -- nvmf/common.sh@7 -- # return 0 00:03:24.215 19:04:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:24.215 19:04:01 -- spdk/autotest.sh@32 -- # uname -s 00:03:24.215 19:04:01 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:03:24.215 19:04:01 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:24.215 19:04:01 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:24.215 19:04:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:24.215 19:04:01 -- common/autotest_common.sh@10 -- # set +x 00:03:24.215 19:04:01 -- spdk/autotest.sh@70 -- # create_test_list 00:03:24.215 19:04:01 -- common/autotest_common.sh@734 -- # xtrace_disable 00:03:24.215 19:04:01 -- common/autotest_common.sh@10 -- # set +x 00:03:24.215 19:04:01 -- spdk/autotest.sh@72 -- # dirname /usr/home/vagrant/spdk_repo/spdk/autotest.sh 00:03:24.215 19:04:01 -- spdk/autotest.sh@72 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk 00:03:24.215 19:04:01 -- spdk/autotest.sh@72 -- # src=/usr/home/vagrant/spdk_repo/spdk 00:03:24.215 19:04:01 -- spdk/autotest.sh@73 -- # out=/usr/home/vagrant/spdk_repo/spdk/../output 00:03:24.215 19:04:01 -- spdk/autotest.sh@74 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:03:24.215 19:04:01 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:24.215 19:04:01 -- common/autotest_common.sh@1438 -- # uname 00:03:24.215 19:04:01 -- common/autotest_common.sh@1438 -- # '[' FreeBSD = FreeBSD ']' 00:03:24.215 19:04:01 -- common/autotest_common.sh@1439 -- # kldunload contigmem.ko 00:03:24.215 kldunload: can't find file contigmem.ko 00:03:24.215 19:04:01 -- common/autotest_common.sh@1439 -- # true 00:03:24.215 19:04:01 -- common/autotest_common.sh@1440 -- # '[' -n '' ']' 00:03:24.215 19:04:01 -- common/autotest_common.sh@1446 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:03:24.215 19:04:01 -- common/autotest_common.sh@1447 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:03:24.215 19:04:01 -- common/autotest_common.sh@1448 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:03:24.215 19:04:01 -- common/autotest_common.sh@1449 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:03:24.215 19:04:01 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:24.215 19:04:01 -- common/autotest_common.sh@1458 -- # uname 00:03:24.215 19:04:01 -- common/autotest_common.sh@1458 -- # [[ FreeBSD = FreeBSD ]] 00:03:24.215 19:04:01 -- common/autotest_common.sh@1458 -- # 
sysctl -n kern.ipc.maxsockbuf 00:03:24.215 19:04:01 -- common/autotest_common.sh@1458 -- # (( 2097152 < 4194304 )) 00:03:24.215 19:04:01 -- common/autotest_common.sh@1459 -- # sysctl kern.ipc.maxsockbuf=4194304 00:03:24.215 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:03:24.215 19:04:01 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:24.215 19:04:01 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=clang 00:03:24.215 19:04:01 -- spdk/autotest.sh@83 -- # hash lcov 00:03:24.215 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 83: hash: lcov: not found 00:03:24.215 19:04:01 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:24.215 19:04:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:24.215 19:04:01 -- common/autotest_common.sh@10 -- # set +x 00:03:24.215 19:04:01 -- spdk/autotest.sh@102 -- # rm -f 00:03:24.215 19:04:01 -- spdk/autotest.sh@105 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:24.215 kldunload: can't find file contigmem.ko 00:03:24.215 kldunload: can't find file nic_uio.ko 00:03:24.215 19:04:01 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:24.215 19:04:01 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:03:24.215 19:04:01 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:03:24.215 19:04:01 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:03:24.215 19:04:01 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:24.215 19:04:01 -- spdk/autotest.sh@121 -- # grep -v p 00:03:24.215 19:04:01 -- spdk/autotest.sh@121 -- # ls /dev/nvme0ns1 00:03:24.215 19:04:01 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:24.215 19:04:01 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:24.215 19:04:01 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0ns1 00:03:24.215 19:04:01 -- scripts/common.sh@380 -- # local block=/dev/nvme0ns1 pt 00:03:24.216 19:04:01 -- scripts/common.sh@389 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:03:24.473 nvme0ns1 is not a block device 00:03:24.473 19:04:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:03:24.473 /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh: line 393: blkid: command not found 00:03:24.473 19:04:01 -- scripts/common.sh@393 -- # pt= 00:03:24.473 19:04:01 -- scripts/common.sh@394 -- # return 1 00:03:24.473 19:04:01 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:03:24.473 1+0 records in 00:03:24.473 1+0 records out 00:03:24.473 1048576 bytes transferred in 0.006910 secs (151739992 bytes/sec) 00:03:24.473 19:04:01 -- spdk/autotest.sh@129 -- # sync 00:03:25.039 19:04:02 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:25.039 19:04:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:25.039 19:04:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:25.298 19:04:02 -- spdk/autotest.sh@135 -- # uname -s 00:03:25.298 19:04:02 -- spdk/autotest.sh@135 -- # '[' FreeBSD = Linux ']' 00:03:25.298 19:04:02 -- spdk/autotest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:25.556 Contigmem (not present) 00:03:25.556 Buffer Size: not set 00:03:25.556 Num Buffers: not set 00:03:25.556 00:03:25.556 00:03:25.556 Type BDF Vendor Device Driver 00:03:25.556 NVMe 0:0:6:0 0x1b36 0x0010 nvme0 00:03:25.556 19:04:02 -- spdk/autotest.sh@141 -- # uname -s 00:03:25.556 19:04:02 -- spdk/autotest.sh@141 -- # [[ FreeBSD == Linux ]] 00:03:25.556 19:04:02 -- spdk/autotest.sh@146 -- # timing_exit 
pre_cleanup 00:03:25.556 19:04:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:25.556 19:04:02 -- common/autotest_common.sh@10 -- # set +x 00:03:25.556 19:04:02 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:03:25.556 19:04:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:25.556 19:04:02 -- common/autotest_common.sh@10 -- # set +x 00:03:25.556 19:04:02 -- spdk/autotest.sh@150 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:25.556 kldunload: can't find file nic_uio.ko 00:03:25.556 hw.nic_uio.bdfs="0:6:0" 00:03:25.556 hw.contigmem.num_buffers="8" 00:03:25.556 hw.contigmem.buffer_size="268435456" 00:03:26.492 19:04:03 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:03:26.492 19:04:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:26.492 19:04:03 -- common/autotest_common.sh@10 -- # set +x 00:03:26.752 19:04:03 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:03:26.752 19:04:03 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:03:26.752 19:04:03 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:03:26.752 19:04:03 -- common/autotest_common.sh@1560 -- # bdfs=() 00:03:26.752 19:04:03 -- common/autotest_common.sh@1560 -- # local bdfs 00:03:26.752 19:04:03 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:26.752 19:04:03 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:26.752 19:04:03 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:26.752 19:04:03 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:26.752 19:04:03 -- common/autotest_common.sh@1497 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:26.752 19:04:03 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:26.752 19:04:03 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:26.752 19:04:03 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 00:03:26.752 19:04:03 -- common/autotest_common.sh@1562 -- # for bdf in $(get_nvme_bdfs) 00:03:26.752 19:04:03 -- common/autotest_common.sh@1563 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:03:26.752 cat: /sys/bus/pci/devices/0000:00:06.0/device: No such file or directory 00:03:26.752 19:04:03 -- common/autotest_common.sh@1563 -- # device= 00:03:26.752 19:04:03 -- common/autotest_common.sh@1563 -- # true 00:03:26.752 19:04:03 -- common/autotest_common.sh@1564 -- # [[ '' == \0\x\0\a\5\4 ]] 00:03:26.752 19:04:03 -- common/autotest_common.sh@1569 -- # printf '%s\n' 00:03:26.752 19:04:03 -- common/autotest_common.sh@1575 -- # [[ -z '' ]] 00:03:26.752 19:04:03 -- common/autotest_common.sh@1576 -- # return 0 00:03:26.752 19:04:03 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:03:26.752 19:04:03 -- spdk/autotest.sh@162 -- # run_test unittest /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:26.752 19:04:03 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:26.752 19:04:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:26.752 19:04:03 -- common/autotest_common.sh@10 -- # set +x 00:03:26.752 ************************************ 00:03:26.752 START TEST unittest 00:03:26.752 ************************************ 00:03:26.752 19:04:03 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:26.752 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:26.752 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:26.752 + 
testdir=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:26.752 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:26.752 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:03:26.752 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:03:26.752 + source /usr/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:03:26.752 ++ rpc_py=rpc_cmd 00:03:26.752 ++ set -e 00:03:26.752 ++ shopt -s nullglob 00:03:26.752 ++ shopt -s extglob 00:03:26.752 ++ [[ -e /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:03:26.752 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:03:26.752 +++ CONFIG_WPDK_DIR= 00:03:26.752 +++ CONFIG_ASAN=n 00:03:26.752 +++ CONFIG_VBDEV_COMPRESS=n 00:03:26.752 +++ CONFIG_HAVE_EXECINFO_H=y 00:03:26.752 +++ CONFIG_USDT=n 00:03:26.752 +++ CONFIG_CUSTOMOCF=n 00:03:26.752 +++ CONFIG_PREFIX=/usr/local 00:03:26.752 +++ CONFIG_RBD=n 00:03:26.752 +++ CONFIG_LIBDIR= 00:03:26.752 +++ CONFIG_IDXD=y 00:03:26.752 +++ CONFIG_NVME_CUSE=n 00:03:26.752 +++ CONFIG_SMA=n 00:03:26.752 +++ CONFIG_VTUNE=n 00:03:26.752 +++ CONFIG_TSAN=n 00:03:26.752 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:03:26.752 +++ CONFIG_VFIO_USER_DIR= 00:03:26.752 +++ CONFIG_PGO_CAPTURE=n 00:03:26.752 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:03:26.752 +++ CONFIG_ENV=/usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:26.752 +++ CONFIG_LTO=n 00:03:26.752 +++ CONFIG_ISCSI_INITIATOR=n 00:03:26.752 +++ CONFIG_CET=n 00:03:26.752 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:03:26.752 +++ CONFIG_OCF_PATH= 00:03:26.752 +++ CONFIG_RDMA_SET_TOS=y 00:03:26.752 +++ CONFIG_HAVE_ARC4RANDOM=y 00:03:26.752 +++ CONFIG_HAVE_LIBARCHIVE=n 00:03:26.752 +++ CONFIG_UBLK=n 00:03:26.752 +++ CONFIG_ISAL_CRYPTO=y 00:03:26.752 +++ CONFIG_OPENSSL_PATH= 00:03:26.752 +++ CONFIG_OCF=n 00:03:26.752 +++ CONFIG_FUSE=n 00:03:26.752 +++ CONFIG_VTUNE_DIR= 00:03:26.752 +++ CONFIG_FUZZER_LIB= 00:03:26.752 +++ CONFIG_FUZZER=n 00:03:26.752 +++ CONFIG_DPDK_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:26.752 +++ CONFIG_CRYPTO=n 00:03:26.752 +++ CONFIG_PGO_USE=n 00:03:26.752 +++ CONFIG_VHOST=n 00:03:26.752 +++ CONFIG_DAOS=n 00:03:26.752 +++ CONFIG_DPDK_INC_DIR= 00:03:26.752 +++ CONFIG_DAOS_DIR= 00:03:26.752 +++ CONFIG_UNIT_TESTS=y 00:03:26.752 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:03:26.752 +++ CONFIG_VIRTIO=n 00:03:26.752 +++ CONFIG_COVERAGE=n 00:03:26.752 +++ CONFIG_RDMA=y 00:03:26.752 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:03:26.752 +++ CONFIG_URING_PATH= 00:03:26.752 +++ CONFIG_XNVME=n 00:03:26.752 +++ CONFIG_VFIO_USER=n 00:03:26.752 +++ CONFIG_ARCH=native 00:03:26.752 +++ CONFIG_URING_ZNS=n 00:03:26.752 +++ CONFIG_WERROR=y 00:03:26.752 +++ CONFIG_HAVE_LIBBSD=n 00:03:26.752 +++ CONFIG_UBSAN=n 00:03:26.752 +++ CONFIG_IPSEC_MB_DIR= 00:03:26.752 +++ CONFIG_GOLANG=n 00:03:26.752 +++ CONFIG_ISAL=y 00:03:26.752 +++ CONFIG_IDXD_KERNEL=n 00:03:26.752 +++ CONFIG_DPDK_LIB_DIR= 00:03:26.752 +++ CONFIG_RDMA_PROV=verbs 00:03:26.752 +++ CONFIG_APPS=y 00:03:26.752 +++ CONFIG_SHARED=n 00:03:26.752 +++ CONFIG_FC_PATH= 00:03:26.752 +++ CONFIG_DPDK_PKG_CONFIG=n 00:03:26.752 +++ CONFIG_FC=n 00:03:26.752 +++ CONFIG_AVAHI=n 00:03:26.752 +++ CONFIG_FIO_PLUGIN=y 00:03:26.752 +++ CONFIG_RAID5F=n 00:03:26.752 +++ CONFIG_EXAMPLES=y 00:03:26.752 +++ CONFIG_TESTS=y 00:03:26.752 +++ CONFIG_CRYPTO_MLX5=n 00:03:26.752 +++ CONFIG_MAX_LCORES= 00:03:26.752 +++ CONFIG_IPSEC_MB=n 00:03:26.752 +++ CONFIG_DEBUG=y 00:03:26.752 +++ CONFIG_DPDK_COMPRESSDEV=n 00:03:26.752 +++ CONFIG_CROSS_PREFIX= 00:03:26.752 +++ CONFIG_URING=n 
00:03:26.752 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:26.752 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:26.752 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common 00:03:26.752 +++ _root=/usr/home/vagrant/spdk_repo/spdk/test/common 00:03:26.752 +++ _root=/usr/home/vagrant/spdk_repo/spdk 00:03:26.752 +++ _app_dir=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:03:26.752 +++ _test_app_dir=/usr/home/vagrant/spdk_repo/spdk/test/app 00:03:26.752 +++ _examples_dir=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:03:26.752 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:03:26.752 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:03:26.752 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:03:26.752 +++ VHOST_APP=("$_app_dir/vhost") 00:03:26.752 +++ DD_APP=("$_app_dir/spdk_dd") 00:03:26.752 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:03:26.752 +++ [[ -e /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:03:26.752 +++ [[ #ifndef SPDK_CONFIG_H 00:03:26.752 #define SPDK_CONFIG_H 00:03:26.752 #define SPDK_CONFIG_APPS 1 00:03:26.752 #define SPDK_CONFIG_ARCH native 00:03:26.752 #undef SPDK_CONFIG_ASAN 00:03:26.752 #undef SPDK_CONFIG_AVAHI 00:03:26.752 #undef SPDK_CONFIG_CET 00:03:26.752 #undef SPDK_CONFIG_COVERAGE 00:03:26.752 #define SPDK_CONFIG_CROSS_PREFIX 00:03:26.752 #undef SPDK_CONFIG_CRYPTO 00:03:26.752 #undef SPDK_CONFIG_CRYPTO_MLX5 00:03:26.752 #undef SPDK_CONFIG_CUSTOMOCF 00:03:26.752 #undef SPDK_CONFIG_DAOS 00:03:26.752 #define SPDK_CONFIG_DAOS_DIR 00:03:26.752 #define SPDK_CONFIG_DEBUG 1 00:03:26.752 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:03:26.752 #define SPDK_CONFIG_DPDK_DIR /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:26.752 #define SPDK_CONFIG_DPDK_INC_DIR 00:03:26.752 #define SPDK_CONFIG_DPDK_LIB_DIR 00:03:26.752 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:03:26.752 #define SPDK_CONFIG_ENV /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:26.752 #define SPDK_CONFIG_EXAMPLES 1 00:03:26.752 #undef SPDK_CONFIG_FC 00:03:26.752 #define SPDK_CONFIG_FC_PATH 00:03:26.752 #define SPDK_CONFIG_FIO_PLUGIN 1 00:03:26.752 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:03:26.752 #undef SPDK_CONFIG_FUSE 00:03:26.752 #undef SPDK_CONFIG_FUZZER 00:03:26.752 #define SPDK_CONFIG_FUZZER_LIB 00:03:26.752 #undef SPDK_CONFIG_GOLANG 00:03:26.752 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:03:26.752 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:03:26.752 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:03:26.752 #undef SPDK_CONFIG_HAVE_LIBBSD 00:03:26.752 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:03:26.752 #define SPDK_CONFIG_IDXD 1 00:03:26.752 #undef SPDK_CONFIG_IDXD_KERNEL 00:03:26.752 #undef SPDK_CONFIG_IPSEC_MB 00:03:26.752 #define SPDK_CONFIG_IPSEC_MB_DIR 00:03:26.752 #define SPDK_CONFIG_ISAL 1 00:03:26.752 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:03:26.752 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:03:26.752 #define SPDK_CONFIG_LIBDIR 00:03:26.752 #undef SPDK_CONFIG_LTO 00:03:26.752 #define SPDK_CONFIG_MAX_LCORES 00:03:26.752 #undef SPDK_CONFIG_NVME_CUSE 00:03:26.752 #undef SPDK_CONFIG_OCF 00:03:26.752 #define SPDK_CONFIG_OCF_PATH 00:03:26.752 #define SPDK_CONFIG_OPENSSL_PATH 00:03:26.752 #undef SPDK_CONFIG_PGO_CAPTURE 00:03:26.752 #undef SPDK_CONFIG_PGO_USE 00:03:26.752 #define SPDK_CONFIG_PREFIX /usr/local 00:03:26.752 #undef SPDK_CONFIG_RAID5F 00:03:26.752 #undef SPDK_CONFIG_RBD 00:03:26.752 #define SPDK_CONFIG_RDMA 1 00:03:26.752 #define SPDK_CONFIG_RDMA_PROV verbs 00:03:26.752 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:03:26.752 
#undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:03:26.752 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:03:26.752 #undef SPDK_CONFIG_SHARED 00:03:26.752 #undef SPDK_CONFIG_SMA 00:03:26.752 #define SPDK_CONFIG_TESTS 1 00:03:26.752 #undef SPDK_CONFIG_TSAN 00:03:26.752 #undef SPDK_CONFIG_UBLK 00:03:26.752 #undef SPDK_CONFIG_UBSAN 00:03:26.752 #define SPDK_CONFIG_UNIT_TESTS 1 00:03:26.752 #undef SPDK_CONFIG_URING 00:03:26.752 #define SPDK_CONFIG_URING_PATH 00:03:26.753 #undef SPDK_CONFIG_URING_ZNS 00:03:26.753 #undef SPDK_CONFIG_USDT 00:03:26.753 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:03:26.753 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:03:26.753 #undef SPDK_CONFIG_VFIO_USER 00:03:26.753 #define SPDK_CONFIG_VFIO_USER_DIR 00:03:26.753 #undef SPDK_CONFIG_VHOST 00:03:26.753 #undef SPDK_CONFIG_VIRTIO 00:03:26.753 #undef SPDK_CONFIG_VTUNE 00:03:26.753 #define SPDK_CONFIG_VTUNE_DIR 00:03:26.753 #define SPDK_CONFIG_WERROR 1 00:03:26.753 #define SPDK_CONFIG_WPDK_DIR 00:03:26.753 #undef SPDK_CONFIG_XNVME 00:03:26.753 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:03:26.753 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:03:26.753 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:26.753 +++ [[ -e /bin/wpdk_common.sh ]] 00:03:26.753 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:26.753 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:26.753 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:26.753 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:26.753 ++++ export PATH 00:03:26.753 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:26.753 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:26.753 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:26.753 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:26.753 +++ _pmdir=/usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:26.753 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:03:26.753 +++ _pmrootdir=/usr/home/vagrant/spdk_repo/spdk 00:03:26.753 +++ TEST_TAG=N/A 00:03:26.753 +++ TEST_TAG_FILE=/usr/home/vagrant/spdk_repo/spdk/.run_test_name 00:03:26.753 ++ : 1 00:03:26.753 ++ export RUN_NIGHTLY 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_RUN_VALGRIND 00:03:26.753 ++ : 1 00:03:26.753 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:03:26.753 ++ : 1 00:03:26.753 ++ export SPDK_TEST_UNITTEST 00:03:26.753 ++ : 00:03:26.753 ++ export SPDK_TEST_AUTOBUILD 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_RELEASE_BUILD 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_ISAL 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_ISCSI 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_ISCSI_INITIATOR 00:03:26.753 ++ : 1 00:03:26.753 ++ export SPDK_TEST_NVME 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_NVME_PMR 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_NVME_BP 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_NVME_CLI 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_NVME_CUSE 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_NVME_FDP 00:03:26.753 ++ : 0 
00:03:26.753 ++ export SPDK_TEST_NVMF 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_VFIOUSER 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_VFIOUSER_QEMU 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_FUZZER 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_FUZZER_SHORT 00:03:26.753 ++ : rdma 00:03:26.753 ++ export SPDK_TEST_NVMF_TRANSPORT 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_RBD 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_VHOST 00:03:26.753 ++ : 1 00:03:26.753 ++ export SPDK_TEST_BLOCKDEV 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_IOAT 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_BLOBFS 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_VHOST_INIT 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_LVOL 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_VBDEV_COMPRESS 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_RUN_ASAN 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_RUN_UBSAN 00:03:26.753 ++ : 00:03:26.753 ++ export SPDK_RUN_EXTERNAL_DPDK 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_RUN_NON_ROOT 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_CRYPTO 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_FTL 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_OCF 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_VMD 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_OPAL 00:03:26.753 ++ : 00:03:26.753 ++ export SPDK_TEST_NATIVE_DPDK 00:03:26.753 ++ : true 00:03:26.753 ++ export SPDK_AUTOTEST_X 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_RAID5 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_URING 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_USDT 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_USE_IGB_UIO 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_SCHEDULER 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_SCANBUILD 00:03:26.753 ++ : 00:03:26.753 ++ export SPDK_TEST_NVMF_NICS 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_SMA 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_DAOS 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_XNVME 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_ACCEL_DSA 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_ACCEL_IAA 00:03:26.753 ++ : 00:03:26.753 ++ export SPDK_TEST_FUZZER_TARGET 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_TEST_NVMF_MDNS 00:03:26.753 ++ : 0 00:03:26.753 ++ export SPDK_JSONRPC_GO_CLIENT 00:03:26.753 ++ export SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:03:26.753 ++ SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:03:26.753 ++ export DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:26.753 ++ DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:26.753 ++ export VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:26.753 ++ VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:26.753 ++ export LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:26.753 ++ 
LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:26.753 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:03:26.753 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:03:26.753 ++ export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:03:26.753 ++ PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:03:26.753 ++ export PYTHONDONTWRITEBYTECODE=1 00:03:26.753 ++ PYTHONDONTWRITEBYTECODE=1 00:03:26.753 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:26.753 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:26.753 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:26.753 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:26.753 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:03:26.753 ++ rm -rf /var/tmp/asan_suppression_file 00:03:26.753 ++ cat 00:03:26.753 ++ echo leak:libfuse3.so 00:03:26.753 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:26.753 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:26.753 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:26.753 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:26.753 ++ '[' -z /var/spdk/dependencies ']' 00:03:26.753 ++ export DEPENDENCY_DIR 00:03:26.753 ++ export SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:03:26.753 ++ SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:03:26.753 ++ export SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:03:26.753 ++ SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:03:26.753 ++ export QEMU_BIN= 00:03:26.753 ++ QEMU_BIN= 00:03:26.753 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:26.753 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:26.753 ++ export AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:26.753 ++ AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:26.753 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:26.753 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:26.753 ++ '[' 0 -eq 0 ']' 00:03:26.753 ++ export valgrind= 00:03:26.753 ++ valgrind= 00:03:26.753 +++ uname -s 00:03:26.753 ++ '[' FreeBSD = Linux ']' 00:03:26.753 +++ uname -s 00:03:26.753 ++ '[' FreeBSD = FreeBSD ']' 00:03:26.753 ++ MAKE=gmake 00:03:26.753 +++ sysctl -a 00:03:26.753 +++ grep -E -i hw.ncpu 00:03:26.753 +++ awk '{print $2}' 00:03:26.753 ++ MAKEFLAGS=-j10 00:03:26.753 ++ HUGEMEM=2048 00:03:26.753 ++ export HUGEMEM=2048 00:03:26.753 ++ HUGEMEM=2048 00:03:26.753 ++ '[' -z /usr/home/vagrant/spdk_repo/spdk/../output ']' 00:03:26.753 ++ NO_HUGE=() 00:03:26.753 ++ TEST_MODE= 00:03:26.753 ++ [[ -z '' ]] 00:03:26.753 ++ PYTHONPATH+=:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:26.753 ++ exec 00:03:26.753 ++ 
PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:26.753 ++ /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:03:26.753 ++ set_test_storage 2147483648 00:03:26.753 ++ [[ -v testdir ]] 00:03:26.753 ++ local requested_size=2147483648 00:03:26.753 ++ local mount target_dir 00:03:26.753 ++ local -A mounts fss sizes avails uses 00:03:26.753 ++ local source fs size avail mount use 00:03:26.753 ++ local storage_fallback storage_candidates 00:03:26.754 +++ mktemp -udt spdk.XXXXXX 00:03:26.754 ++ storage_fallback=/tmp/spdk.XXXXXX.KF1pOtX1 00:03:26.754 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:03:26.754 ++ [[ -n '' ]] 00:03:26.754 ++ [[ -n '' ]] 00:03:26.754 ++ mkdir -p /usr/home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.KF1pOtX1/tests/unit /tmp/spdk.XXXXXX.KF1pOtX1 00:03:26.754 ++ requested_size=2214592512 00:03:26.754 ++ read -r source fs size use avail _ mount 00:03:26.754 +++ df -T 00:03:26.754 +++ grep -v Filesystem 00:03:26.754 ++ mounts["$mount"]=/dev/gptid/0dfc2f8e-cb14-11ee-982c-001e673ab46f 00:03:26.754 ++ fss["$mount"]=ufs 00:03:26.754 ++ avails["$mount"]=17353814016 00:03:26.754 ++ sizes["$mount"]=31182712832 00:03:26.754 ++ uses["$mount"]=11334283264 00:03:26.754 ++ read -r source fs size use avail _ mount 00:03:26.754 ++ mounts["$mount"]=devfs 00:03:26.754 ++ fss["$mount"]=devfs 00:03:26.754 ++ avails["$mount"]=0 00:03:26.754 ++ sizes["$mount"]=1024 00:03:26.754 ++ uses["$mount"]=1024 00:03:26.754 ++ read -r source fs size use avail _ mount 00:03:26.754 ++ mounts["$mount"]=tmpfs 00:03:26.754 ++ fss["$mount"]=tmpfs 00:03:26.754 ++ avails["$mount"]=2147463168 00:03:26.754 ++ sizes["$mount"]=2147483648 00:03:26.754 ++ uses["$mount"]=20480 00:03:26.754 ++ read -r source fs size use avail _ mount 00:03:26.754 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt/output 00:03:26.754 ++ fss["$mount"]=fusefs.sshfs 00:03:26.754 ++ avails["$mount"]=96000798720 00:03:26.754 ++ sizes["$mount"]=105088212992 00:03:26.754 ++ uses["$mount"]=3701981184 00:03:26.754 ++ read -r source fs size use avail _ mount 00:03:26.754 ++ printf '* Looking for test storage...\n' 00:03:26.754 * Looking for test storage... 
00:03:26.754 ++ local target_space new_size 00:03:26.754 ++ for target_dir in "${storage_candidates[@]}" 00:03:26.754 +++ df /usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:26.754 +++ awk '$1 !~ /Filesystem/{print $6}' 00:03:26.754 ++ mount=/ 00:03:26.754 ++ target_space=17353814016 00:03:26.754 ++ (( target_space == 0 || target_space < requested_size )) 00:03:26.754 ++ (( target_space >= requested_size )) 00:03:26.754 ++ [[ ufs == tmpfs ]] 00:03:26.754 ++ [[ ufs == ramfs ]] 00:03:26.754 ++ [[ / == / ]] 00:03:26.754 ++ new_size=13548875776 00:03:26.754 ++ (( new_size * 100 / sizes[/] > 95 )) 00:03:26.754 ++ export SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:26.754 ++ SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:26.754 ++ printf '* Found test storage at %s\n' /usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:26.754 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:26.754 ++ return 0 00:03:26.754 ++ set -o errtrace 00:03:26.754 ++ shopt -s extdebug 00:03:26.754 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:03:26.754 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:03:26.754 19:04:04 -- common/autotest_common.sh@1670 -- # true 00:03:26.754 19:04:04 -- common/autotest_common.sh@1672 -- # xtrace_fd 00:03:26.754 19:04:04 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:03:26.754 19:04:04 -- common/autotest_common.sh@29 -- # exec 00:03:26.754 19:04:04 -- common/autotest_common.sh@31 -- # xtrace_restore 00:03:26.754 19:04:04 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:03:26.754 19:04:04 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:03:26.754 19:04:04 -- common/autotest_common.sh@18 -- # set -x 00:03:26.754 19:04:04 -- unit/unittest.sh@17 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:03:26.754 19:04:04 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:03:26.754 19:04:04 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:03:26.754 19:04:04 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:03:26.754 19:04:04 -- unit/unittest.sh@178 -- # grep CC_TYPE /usr/home/vagrant/spdk_repo/spdk/mk/cc.mk 00:03:26.754 19:04:04 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=clang 00:03:26.754 19:04:04 -- unit/unittest.sh@179 -- # hash lcov 00:03:26.754 /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 179: hash: lcov: not found 00:03:26.754 19:04:04 -- unit/unittest.sh@182 -- # cov_avail=no 00:03:26.754 19:04:04 -- unit/unittest.sh@184 -- # '[' no = yes ']' 00:03:26.754 19:04:04 -- unit/unittest.sh@206 -- # uname -m 00:03:27.014 19:04:04 -- unit/unittest.sh@206 -- # '[' amd64 = aarch64 ']' 00:03:27.014 19:04:04 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:27.014 19:04:04 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:27.014 19:04:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:27.014 19:04:04 -- common/autotest_common.sh@10 -- # set +x 00:03:27.014 ************************************ 00:03:27.014 START TEST unittest_pci_event 00:03:27.014 ************************************ 00:03:27.014 19:04:04 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:27.014 00:03:27.014 00:03:27.014 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.014 http://cunit.sourceforge.net/ 00:03:27.014 00:03:27.014 00:03:27.014 Suite: pci_event 00:03:27.014 Test: test_pci_parse_event ...passed 
00:03:27.014 00:03:27.014 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.014 suites 1 1 n/a 0 0 00:03:27.014 tests 1 1 1 0 0 00:03:27.014 asserts 1 1 1 0 n/a 00:03:27.014 00:03:27.014 Elapsed time = 0.000 seconds 00:03:27.014 00:03:27.014 real 0m0.025s 00:03:27.014 user 0m0.001s 00:03:27.014 sys 0m0.009s 00:03:27.014 19:04:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:27.014 19:04:04 -- common/autotest_common.sh@10 -- # set +x 00:03:27.014 ************************************ 00:03:27.014 END TEST unittest_pci_event 00:03:27.014 ************************************ 00:03:27.014 19:04:04 -- unit/unittest.sh@211 -- # run_test unittest_include /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:27.014 19:04:04 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:27.014 19:04:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:27.014 19:04:04 -- common/autotest_common.sh@10 -- # set +x 00:03:27.014 ************************************ 00:03:27.014 START TEST unittest_include 00:03:27.014 ************************************ 00:03:27.014 19:04:04 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:27.014 00:03:27.014 00:03:27.014 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.014 http://cunit.sourceforge.net/ 00:03:27.014 00:03:27.014 00:03:27.014 Suite: histogram 00:03:27.014 Test: histogram_test ...passed 00:03:27.014 Test: histogram_merge ...passed 00:03:27.014 00:03:27.014 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.014 suites 1 1 n/a 0 0 00:03:27.015 tests 2 2 2 0 0 00:03:27.015 asserts 50 50 50 0 n/a 00:03:27.015 00:03:27.015 Elapsed time = 0.000 seconds 00:03:27.015 00:03:27.015 real 0m0.008s 00:03:27.015 user 0m0.001s 00:03:27.015 sys 0m0.007s 00:03:27.015 19:04:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:27.015 19:04:04 -- common/autotest_common.sh@10 -- # set +x 00:03:27.015 ************************************ 00:03:27.015 END TEST unittest_include 00:03:27.015 ************************************ 00:03:27.015 19:04:04 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:03:27.015 19:04:04 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:27.015 19:04:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:27.015 19:04:04 -- common/autotest_common.sh@10 -- # set +x 00:03:27.015 ************************************ 00:03:27.015 START TEST unittest_bdev 00:03:27.015 ************************************ 00:03:27.015 19:04:04 -- common/autotest_common.sh@1102 -- # unittest_bdev 00:03:27.015 19:04:04 -- unit/unittest.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:03:27.015 00:03:27.015 00:03:27.015 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.015 http://cunit.sourceforge.net/ 00:03:27.015 00:03:27.015 00:03:27.015 Suite: bdev 00:03:27.015 Test: bytes_to_blocks_test ...passed 00:03:27.015 Test: num_blocks_test ...passed 00:03:27.015 Test: io_valid_test ...passed 00:03:27.015 Test: open_write_test ...[2024-02-14 19:04:04.304843] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.305115] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.305128] 
/usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:03:27.015 passed 00:03:27.015 Test: claim_test ...passed 00:03:27.015 Test: alias_add_del_test ...[2024-02-14 19:04:04.307224] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:03:27.015 [2024-02-14 19:04:04.307238] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4578:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:03:27.015 [2024-02-14 19:04:04.307247] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:03:27.015 passed 00:03:27.015 Test: get_device_stat_test ...passed 00:03:27.015 Test: bdev_io_types_test ...passed 00:03:27.015 Test: bdev_io_wait_test ...passed 00:03:27.015 Test: bdev_io_spans_split_test ...passed 00:03:27.015 Test: bdev_io_boundary_split_test ...passed 00:03:27.015 Test: bdev_io_max_size_and_segment_split_test ...[2024-02-14 19:04:04.312597] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:03:27.015 passed 00:03:27.015 Test: bdev_io_mix_split_test ...passed 00:03:27.015 Test: bdev_io_split_with_io_wait ...passed 00:03:27.015 Test: bdev_io_write_unit_split_test ...[2024-02-14 19:04:04.315497] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:27.015 [2024-02-14 19:04:04.315530] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:27.015 [2024-02-14 19:04:04.315539] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:03:27.015 [2024-02-14 19:04:04.315549] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:03:27.015 passed 00:03:27.015 Test: bdev_io_alignment_with_boundary ...passed 00:03:27.015 Test: bdev_io_alignment ...passed 00:03:27.015 Test: bdev_histograms ...passed 00:03:27.015 Test: bdev_write_zeroes ...passed 00:03:27.015 Test: bdev_compare_and_write ...passed 00:03:27.015 Test: bdev_compare ...passed 00:03:27.015 Test: bdev_compare_emulated ...passed 00:03:27.015 Test: bdev_zcopy_write ...passed 00:03:27.015 Test: bdev_zcopy_read ...passed 00:03:27.015 Test: bdev_open_while_hotremove ...passed 00:03:27.015 Test: bdev_close_while_hotremove ...passed 00:03:27.015 Test: bdev_open_ext_test ...passed 00:03:27.015 Test: bdev_open_ext_unregister ...passed 00:03:27.015 Test: bdev_set_io_timeout ...[2024-02-14 19:04:04.326936] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:27.015 [2024-02-14 19:04:04.326984] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:27.015 passed 00:03:27.015 Test: bdev_set_qd_sampling ...passed 00:03:27.015 Test: lba_range_overlap ...passed 00:03:27.015 Test: lock_lba_range_check_ranges ...passed 00:03:27.015 Test: lock_lba_range_with_io_outstanding ...passed 00:03:27.015 Test: lock_lba_range_overlapped ...passed 00:03:27.015 Test: bdev_quiesce ...[2024-02-14 19:04:04.332477] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9964:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:03:27.015 passed 00:03:27.015 Test: bdev_io_abort ...passed 00:03:27.015 Test: bdev_unmap ...passed 00:03:27.015 Test: bdev_write_zeroes_split_test ...passed 00:03:27.015 Test: bdev_set_options_test ...passed 00:03:27.015 Test: bdev_get_memory_domains ...passed 00:03:27.015 Test: bdev_io_ext ...[2024-02-14 19:04:04.335649] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:03:27.015 passed 00:03:27.015 Test: bdev_io_ext_no_opts ...passed 00:03:27.015 Test: bdev_io_ext_invalid_opts ...passed 00:03:27.015 Test: bdev_io_ext_split ...passed 00:03:27.015 Test: bdev_io_ext_bounce_buffer ...passed 00:03:27.015 Test: bdev_register_uuid_alias ...[2024-02-14 19:04:04.341114] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name d230b68a-cb6b-11ee-af6b-4feeebbbadda already exists 00:03:27.015 [2024-02-14 19:04:04.341134] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:d230b68a-cb6b-11ee-af6b-4feeebbbadda alias for bdev bdev0 00:03:27.015 passed 00:03:27.015 Test: bdev_unregister_by_name ...passed 00:03:27.015 Test: for_each_bdev_test ...[2024-02-14 19:04:04.341372] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7831:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:03:27.015 [2024-02-14 19:04:04.341381] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7840:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:03:27.015 passed 00:03:27.015 Test: bdev_seek_test ...passed 00:03:27.015 Test: bdev_copy ...passed 00:03:27.015 Test: bdev_copy_split_test ...passed 00:03:27.015 Test: examine_locks ...passed 00:03:27.015 Test: claim_v2_rwo ...passed 00:03:27.015 Test: claim_v2_rom ...[2024-02-14 19:04:04.344345] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344359] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344366] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344374] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344381] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344390] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8561:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:03:27.015 passed 00:03:27.015 Test: claim_v2_rwm ...passed 00:03:27.015 Test: claim_v2_existing_writer ...passed 00:03:27.015 Test: claim_v2_existing_v1 ...[2024-02-14 19:04:04.344409] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344417] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 
00:03:27.015 [2024-02-14 19:04:04.344424] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344431] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344440] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:03:27.015 [2024-02-14 19:04:04.344447] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8599:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:27.015 [2024-02-14 19:04:04.344462] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8634:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:27.015 [2024-02-14 19:04:04.344471] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344478] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344484] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344490] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344497] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8653:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344511] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8634:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:27.015 [2024-02-14 19:04:04.344528] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8599:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:27.015 [2024-02-14 19:04:04.344535] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8599:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:27.015 [2024-02-14 19:04:04.344550] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:27.015 [2024-02-14 19:04:04.344557] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:27.015 passed 00:03:27.015 Test: claim_v1_existing_v2 ...passed 00:03:27.015 Test: examine_claimed ...passed 00:03:27.015 00:03:27.015 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.015 suites 1 1 n/a 0 0 00:03:27.016 tests 59 59 59 0 0 00:03:27.016 asserts 4599 4599 4599 0 n/a 00:03:27.016 00:03:27.016 Elapsed time = 0.039 seconds 00:03:27.016 [2024-02-14 19:04:04.344564] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 
00:03:27.016 [2024-02-14 19:04:04.344597] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:27.016 [2024-02-14 19:04:04.344605] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:27.016 [2024-02-14 19:04:04.344613] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:27.016 [2024-02-14 19:04:04.344641] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:03:27.016 19:04:04 -- unit/unittest.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:03:27.016 00:03:27.016 00:03:27.016 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.016 http://cunit.sourceforge.net/ 00:03:27.016 00:03:27.016 00:03:27.016 Suite: nvme 00:03:27.016 Test: test_create_ctrlr ...passed 00:03:27.016 Test: test_reset_ctrlr ...[2024-02-14 19:04:04.354644] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 passed 00:03:27.016 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:03:27.016 Test: test_failover_ctrlr ...passed 00:03:27.016 Test: test_race_between_failover_and_add_secondary_trid ...passed 00:03:27.016 Test: test_pending_reset ...[2024-02-14 19:04:04.355060] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.355089] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.355110] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 passed 00:03:27.016 Test: test_attach_ctrlr ...[2024-02-14 19:04:04.355246] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.355277] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 passed 00:03:27.016 Test: test_aer_cb ...passed 00:03:27.016 Test: test_submit_nvme_cmd ...[2024-02-14 19:04:04.355368] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4183:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:03:27.016 passed 00:03:27.016 Test: test_add_remove_trid ...passed 00:03:27.016 Test: test_abort ...[2024-02-14 19:04:04.355707] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7172:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 
00:03:27.016 passed 00:03:27.016 Test: test_get_io_qpair ...passed 00:03:27.016 Test: test_bdev_unregister ...passed 00:03:27.016 Test: test_compare_ns ...passed 00:03:27.016 Test: test_init_ana_log_page ...passed 00:03:27.016 Test: test_get_memory_domains ...passed 00:03:27.016 Test: test_reconnect_qpair ...passed 00:03:27.016 Test: test_create_bdev_ctrlr ...[2024-02-14 19:04:04.355979] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.356043] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5220:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:03:27.016 passed 00:03:27.016 Test: test_add_multi_ns_to_bdev ...passed 00:03:27.016 Test: test_add_multi_io_paths_to_nbdev_ch ...[2024-02-14 19:04:04.356169] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4439:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:03:27.016 passed 00:03:27.016 Test: test_admin_path ...passed 00:03:27.016 Test: test_reset_bdev_ctrlr ...passed 00:03:27.016 Test: test_find_io_path ...passed 00:03:27.016 Test: test_retry_io_if_ana_state_is_updating ...passed 00:03:27.016 Test: test_retry_io_for_io_path_error ...passed 00:03:27.016 Test: test_retry_io_count ...passed 00:03:27.016 Test: test_concurrent_read_ana_log_page ...passed 00:03:27.016 Test: test_retry_io_for_ana_error ...passed 00:03:27.016 Test: test_check_io_error_resiliency_params ...passed 00:03:27.016 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-02-14 19:04:04.356722] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5877:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:03:27.016 [2024-02-14 19:04:04.356739] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5881:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:27.016 [2024-02-14 19:04:04.356751] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5890:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:27.016 [2024-02-14 19:04:04.356762] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5893:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:03:27.016 [2024-02-14 19:04:04.356773] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5905:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:27.016 [2024-02-14 19:04:04.356784] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5905:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:27.016 [2024-02-14 19:04:04.356795] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5885:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:03:27.016 [2024-02-14 19:04:04.356806] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5900:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 
00:03:27.016 [2024-02-14 19:04:04.356816] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5897:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:03:27.016 passed 00:03:27.016 Test: test_reconnect_ctrlr ...[2024-02-14 19:04:04.356915] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.356940] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.356979] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.356997] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.357014] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 passed 00:03:27.016 Test: test_retry_failover_ctrlr ...passed 00:03:27.016 Test: test_fail_path ...passed 00:03:27.016 Test: test_nvme_ns_cmp ...passed 00:03:27.016 Test: test_ana_transition ...passed 00:03:27.016 Test: test_set_preferred_path ...passed 00:03:27.016 Test: test_find_next_io_path ...passed 00:03:27.016 Test: test_find_io_path_min_qd ...passed 00:03:27.016 Test: test_disable_auto_failback ...[2024-02-14 19:04:04.357064] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.357122] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.357143] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.357160] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.357177] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.357193] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 [2024-02-14 19:04:04.357362] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 passed 00:03:27.016 Test: test_set_multipath_policy ...passed 00:03:27.016 Test: test_uuid_generation ...passed 00:03:27.016 Test: test_retry_io_to_same_path ...passed 00:03:27.016 Test: test_race_between_reset_and_disconnected ...passed 00:03:27.016 Test: test_ctrlr_op_rpc ...passed 00:03:27.016 Test: test_bdev_ctrlr_op_rpc ...passed 00:03:27.016 Test: test_disable_enable_ctrlr ...passed 00:03:27.016 Test: test_delete_ctrlr_done ...[2024-02-14 19:04:04.393336] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:27.016 [2024-02-14 19:04:04.393394] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:27.016 passed 00:03:27.016 00:03:27.016 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.016 suites 1 1 n/a 0 0 00:03:27.016 tests 47 47 47 0 0 00:03:27.016 asserts 3527 3527 3527 0 n/a 00:03:27.016 00:03:27.016 Elapsed time = 0.016 seconds 00:03:27.016 19:04:04 -- unit/unittest.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:03:27.016 Test Options 00:03:27.016 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:03:27.016 00:03:27.016 00:03:27.016 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.016 http://cunit.sourceforge.net/ 00:03:27.016 00:03:27.016 00:03:27.016 Suite: raid 00:03:27.016 Test: test_create_raid ...passed 00:03:27.016 Test: test_create_raid_superblock ...passed 00:03:27.016 Test: test_delete_raid ...passed 00:03:27.016 Test: test_create_raid_invalid_args ...[2024-02-14 19:04:04.406703] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:03:27.016 [2024-02-14 19:04:04.407049] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:03:27.016 [2024-02-14 19:04:04.407140] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:03:27.016 [2024-02-14 19:04:04.407181] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:27.016 [2024-02-14 19:04:04.407334] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:27.016 passed 00:03:27.017 Test: test_delete_raid_invalid_args ...passed 00:03:27.017 Test: test_io_channel ...passed 00:03:27.017 Test: test_reset_io ...passed 00:03:27.017 Test: test_write_io ...passed 00:03:27.017 Test: test_read_io ...passed 00:03:28.391 Test: test_unmap_io ...passed 00:03:28.391 Test: test_io_failure ...passed 00:03:28.391 Test: test_multi_raid_no_io ...[2024-02-14 19:04:05.751306] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:03:28.391 passed 00:03:28.391 Test: test_multi_raid_with_io ...passed 00:03:28.391 Test: test_io_type_supported ...passed 00:03:28.391 Test: test_raid_json_dump_info ...passed 00:03:28.391 Test: test_context_size ...passed 00:03:28.391 Test: test_raid_level_conversions ...passed 00:03:28.391 Test: test_raid_process ...passed 00:03:28.391 Test: test_raid_io_split ...passed 00:03:28.391 00:03:28.391 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.391 suites 1 1 n/a 0 0 00:03:28.391 tests 19 19 19 0 0 00:03:28.391 asserts 177879 177879 177879 0 n/a 00:03:28.391 00:03:28.391 Elapsed time = 1.344 seconds 00:03:28.391 19:04:05 -- unit/unittest.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:03:28.391 00:03:28.391 00:03:28.391 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.391 http://cunit.sourceforge.net/ 00:03:28.391 00:03:28.391 00:03:28.391 Suite: raid_sb 00:03:28.391 Test: test_raid_bdev_write_superblock ...passed 00:03:28.391 
Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:28.391 Test: test_raid_bdev_parse_superblock ...[2024-02-14 19:04:05.766640] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 121:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:28.391 passed 00:03:28.391 00:03:28.391 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.391 suites 1 1 n/a 0 0 00:03:28.391 tests 3 3 3 0 0 00:03:28.391 asserts 32 32 32 0 n/a 00:03:28.391 00:03:28.391 Elapsed time = 0.000 seconds 00:03:28.391 19:04:05 -- unit/unittest.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:03:28.391 00:03:28.391 00:03:28.391 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.391 http://cunit.sourceforge.net/ 00:03:28.391 00:03:28.391 00:03:28.391 Suite: concat 00:03:28.391 Test: test_concat_start ...passed 00:03:28.391 Test: test_concat_rw ...passed 00:03:28.391 Test: test_concat_null_payload ...passed 00:03:28.391 00:03:28.391 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.391 suites 1 1 n/a 0 0 00:03:28.391 tests 3 3 3 0 0 00:03:28.391 asserts 8097 8097 8097 0 n/a 00:03:28.391 00:03:28.391 Elapsed time = 0.000 seconds 00:03:28.391 19:04:05 -- unit/unittest.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:03:28.391 00:03:28.391 00:03:28.391 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.391 http://cunit.sourceforge.net/ 00:03:28.391 00:03:28.391 00:03:28.391 Suite: raid1 00:03:28.391 Test: test_raid1_start ...passed 00:03:28.391 Test: test_raid1_read_balancing ...passed 00:03:28.391 00:03:28.392 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.392 suites 1 1 n/a 0 0 00:03:28.392 tests 2 2 2 0 0 00:03:28.392 asserts 2856 2856 2856 0 n/a 00:03:28.392 00:03:28.392 Elapsed time = 0.000 seconds 00:03:28.392 19:04:05 -- unit/unittest.sh@26 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:03:28.392 00:03:28.392 00:03:28.392 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.392 http://cunit.sourceforge.net/ 00:03:28.392 00:03:28.392 00:03:28.392 Suite: zone 00:03:28.392 Test: test_zone_get_operation ...passed 00:03:28.392 Test: test_bdev_zone_get_info ...passed 00:03:28.392 Test: test_bdev_zone_management ...passed 00:03:28.392 Test: test_bdev_zone_append ...passed 00:03:28.392 Test: test_bdev_zone_append_with_md ...passed 00:03:28.392 Test: test_bdev_zone_appendv ...passed 00:03:28.392 Test: test_bdev_zone_appendv_with_md ...passed 00:03:28.392 Test: test_bdev_io_get_append_location ...passed 00:03:28.392 00:03:28.392 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.392 suites 1 1 n/a 0 0 00:03:28.392 tests 8 8 8 0 0 00:03:28.392 asserts 94 94 94 0 n/a 00:03:28.392 00:03:28.392 Elapsed time = 0.000 seconds 00:03:28.392 19:04:05 -- unit/unittest.sh@27 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:03:28.392 00:03:28.392 00:03:28.392 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.392 http://cunit.sourceforge.net/ 00:03:28.392 00:03:28.392 00:03:28.392 Suite: gpt_parse 00:03:28.392 Test: test_parse_mbr_and_primary ...[2024-02-14 19:04:05.796041] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:28.392 [2024-02-14 19:04:05.796399] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the 
related buffer should not be NULL 00:03:28.392 [2024-02-14 19:04:05.796450] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:28.392 [2024-02-14 19:04:05.796469] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:28.392 [2024-02-14 19:04:05.796489] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:28.392 [2024-02-14 19:04:05.796506] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:28.392 passed 00:03:28.392 Test: test_parse_secondary ...passed 00:03:28.392 Test: test_check_mbr ...passed 00:03:28.392 Test: test_read_header ...[2024-02-14 19:04:05.796732] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:28.392 [2024-02-14 19:04:05.796749] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:28.392 [2024-02-14 19:04:05.796767] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:28.392 [2024-02-14 19:04:05.796782] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:28.392 [2024-02-14 19:04:05.797003] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:28.392 [2024-02-14 19:04:05.797019] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:28.392 [2024-02-14 19:04:05.797043] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:03:28.392 passed 00:03:28.392 Test: test_read_partitions ...[2024-02-14 19:04:05.797061] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:03:28.392 [2024-02-14 19:04:05.797078] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:03:28.392 [2024-02-14 19:04:05.797096] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:03:28.392 [2024-02-14 19:04:05.797114] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:03:28.392 [2024-02-14 19:04:05.797129] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:03:28.392 [2024-02-14 19:04:05.797166] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:03:28.392 [2024-02-14 19:04:05.797195] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:03:28.392 [2024-02-14 19:04:05.797218] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:03:28.392 [2024-02-14 19:04:05.797240] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: 
Failed to get gpt partitions buf 00:03:28.392 [2024-02-14 19:04:05.797373] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:03:28.392 passed 00:03:28.392 00:03:28.392 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.392 suites 1 1 n/a 0 0 00:03:28.392 tests 5 5 5 0 0 00:03:28.392 asserts 33 33 33 0 n/a 00:03:28.392 00:03:28.392 Elapsed time = 0.000 seconds 00:03:28.392 19:04:05 -- unit/unittest.sh@28 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:03:28.392 00:03:28.392 00:03:28.392 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.392 http://cunit.sourceforge.net/ 00:03:28.392 00:03:28.392 00:03:28.392 Suite: bdev_part 00:03:28.651 Test: part_test ...[2024-02-14 19:04:05.807651] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:03:28.651 passed 00:03:28.651 Test: part_free_test ...passed 00:03:28.651 Test: part_get_io_channel_test ...passed 00:03:28.651 Test: part_construct_ext ...passed 00:03:28.651 00:03:28.651 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.651 suites 1 1 n/a 0 0 00:03:28.651 tests 4 4 4 0 0 00:03:28.651 asserts 48 48 48 0 n/a 00:03:28.651 00:03:28.651 Elapsed time = 0.000 seconds 00:03:28.651 19:04:05 -- unit/unittest.sh@29 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:03:28.651 00:03:28.651 00:03:28.651 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.651 http://cunit.sourceforge.net/ 00:03:28.651 00:03:28.651 00:03:28.651 Suite: scsi_nvme_suite 00:03:28.651 Test: scsi_nvme_translate_test ...passed 00:03:28.651 00:03:28.651 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.651 suites 1 1 n/a 0 0 00:03:28.651 tests 1 1 1 0 0 00:03:28.651 asserts 104 104 104 0 n/a 00:03:28.651 00:03:28.651 Elapsed time = 0.000 seconds 00:03:28.651 19:04:05 -- unit/unittest.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:03:28.651 00:03:28.651 00:03:28.651 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.651 http://cunit.sourceforge.net/ 00:03:28.651 00:03:28.651 00:03:28.651 Suite: lvol 00:03:28.651 Test: ut_lvs_init ...passed 00:03:28.651 Test: ut_lvol_init ...passed 00:03:28.651 Test: ut_lvol_snapshot ...passed 00:03:28.651 Test: ut_lvol_clone ...passed 00:03:28.651 Test: ut_lvs_destroy ...passed 00:03:28.651 Test: ut_lvs_unload ...passed 00:03:28.651 Test: ut_lvol_resize ...passed 00:03:28.651 Test: ut_lvol_set_read_only ...passed 00:03:28.651 Test: ut_lvol_hotremove ...passed 00:03:28.651 Test: ut_vbdev_lvol_get_io_channel ...passed 00:03:28.651 Test: ut_vbdev_lvol_io_type_supported ...passed 00:03:28.651 Test: ut_lvol_read_write ...passed 00:03:28.651 Test: ut_vbdev_lvol_submit_request ...passed 00:03:28.651 Test: ut_lvol_examine_config ...passed 00:03:28.651 Test: ut_lvol_examine_disk ...passed 00:03:28.651 Test: ut_lvol_rename ...passed 00:03:28.651 Test: ut_bdev_finish ...passed 00:03:28.651 Test: ut_lvs_rename ...passed 00:03:28.651 Test: ut_lvol_seek ...passed 00:03:28.651 Test: ut_esnap_dev_create ...[2024-02-14 19:04:05.826319] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:03:28.651 [2024-02-14 19:04:05.826521] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:03:28.652 
[2024-02-14 19:04:05.826596] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:03:28.652 [2024-02-14 19:04:05.826670] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:03:28.652 [2024-02-14 19:04:05.826704] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:03:28.652 [2024-02-14 19:04:05.826714] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:03:28.652 [2024-02-14 19:04:05.826752] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:03:28.652 [2024-02-14 19:04:05.826779] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:03:28.652 [2024-02-14 19:04:05.826788] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:03:28.652 passed 00:03:28.652 Test: ut_lvol_esnap_clone_bad_args ...passed 00:03:28.652 00:03:28.652 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.652 suites 1 1 n/a 0 0 00:03:28.652 tests 21 21 21 0 0 00:03:28.652 asserts 712 712 712 0 n/a 00:03:28.652 00:03:28.652 Elapsed time = 0.000 seconds 00:03:28.652 [2024-02-14 19:04:05.826805] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1901:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:03:28.652 [2024-02-14 19:04:05.826824] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:03:28.652 [2024-02-14 19:04:05.826834] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:03:28.652 19:04:05 -- unit/unittest.sh@31 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:03:28.652 00:03:28.652 00:03:28.652 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.652 http://cunit.sourceforge.net/ 00:03:28.652 00:03:28.652 00:03:28.652 Suite: zone_block 00:03:28.652 Test: test_zone_block_create ...passed 00:03:28.652 Test: test_zone_block_create_invalid ...passed 00:03:28.652 Test: test_get_zone_info ...passed 00:03:28.652 Test: test_supported_io_types ...passed 00:03:28.652 Test: test_reset_zone ...[2024-02-14 19:04:05.838774] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:03:28.652 [2024-02-14 19:04:05.839005] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-02-14 19:04:05.839024] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:03:28.652 [2024-02-14 19:04:05.839037] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-02-14 19:04:05.839049] 
/usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:03:28.652 [2024-02-14 19:04:05.839060] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-02-14 19:04:05.839070] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:03:28.652 [2024-02-14 19:04:05.839080] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-02-14 19:04:05.839141] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.839159] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.839171] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 passed 00:03:28.652 Test: test_open_zone ...[2024-02-14 19:04:05.839224] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.839237] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.839269] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 passed 00:03:28.652 Test: test_zone_write ...[2024-02-14 19:04:05.839508] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.839521] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.839558] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:28.652 [2024-02-14 19:04:05.839569] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.839582] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:28.652 [2024-02-14 19:04:05.839591] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.840105] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:03:28.652 [2024-02-14 19:04:05.840126] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:28.652 [2024-02-14 19:04:05.840139] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:03:28.652 [2024-02-14 19:04:05.840149] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 passed 00:03:28.652 Test: test_zone_read ...[2024-02-14 19:04:05.840724] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:28.652 [2024-02-14 19:04:05.840740] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.840774] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:03:28.652 [2024-02-14 19:04:05.840784] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.840798] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:03:28.652 [2024-02-14 19:04:05.840807] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 passed 00:03:28.652 Test: test_close_zone ...passed 00:03:28.652 Test: test_finish_zone ...[2024-02-14 19:04:05.840857] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:03:28.652 [2024-02-14 19:04:05.840867] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.840894] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.840909] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.840947] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.840960] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.841015] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.841028] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:28.652 passed 00:03:28.652 Test: test_append_zone ...[2024-02-14 19:04:05.841057] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:28.652 [2024-02-14 19:04:05.841067] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 [2024-02-14 19:04:05.841079] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:28.652 [2024-02-14 19:04:05.841088] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 passed 00:03:28.652 00:03:28.652 [2024-02-14 19:04:05.842195] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:28.652 [2024-02-14 19:04:05.842216] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:28.652 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.652 suites 1 1 n/a 0 0 00:03:28.652 tests 11 11 11 0 0 00:03:28.652 asserts 3437 3437 3437 0 n/a 00:03:28.652 00:03:28.652 Elapsed time = 0.008 seconds 00:03:28.652 19:04:05 -- unit/unittest.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:03:28.652 00:03:28.652 00:03:28.652 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.652 http://cunit.sourceforge.net/ 00:03:28.652 00:03:28.652 00:03:28.652 Suite: bdev 00:03:28.652 Test: basic ...[2024-02-14 19:04:05.850306] thread.c:2360:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x2487d9): Operation not permitted (rc=-1) 00:03:28.652 [2024-02-14 19:04:05.850499] thread.c:2360:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x82bda5480 (0x2487d0): Operation not permitted (rc=-1) 00:03:28.652 [2024-02-14 19:04:05.850511] thread.c:2360:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x2487d9): Operation not permitted (rc=-1) 00:03:28.652 passed 00:03:28.652 Test: unregister_and_close ...passed 00:03:28.652 Test: unregister_and_close_different_threads ...passed 00:03:28.652 Test: basic_qos ...passed 00:03:28.653 Test: put_channel_during_reset ...passed 00:03:28.653 Test: aborted_reset ...passed 00:03:28.653 Test: aborted_reset_no_outstanding_io ...passed 00:03:28.653 Test: io_during_reset ...passed 00:03:28.653 Test: reset_completions ...passed 00:03:28.653 Test: io_during_qos_queue ...passed 00:03:28.653 Test: io_during_qos_reset ...passed 00:03:28.653 Test: enomem ...passed 00:03:28.653 Test: enomem_multi_bdev ...passed 00:03:28.653 Test: enomem_multi_bdev_unregister ...passed 00:03:28.653 Test: enomem_multi_io_target ...passed 00:03:28.653 Test: qos_dynamic_enable ...passed 00:03:28.653 Test: bdev_histograms_mt ...passed 00:03:28.653 Test: bdev_set_io_timeout_mt ...[2024-02-14 19:04:05.877860] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x82bda5600 not unregistered 00:03:28.653 passed 00:03:28.653 Test: lock_lba_range_then_submit_io ...[2024-02-14 19:04:05.878687] thread.c:2164:spdk_io_device_register: *ERROR*: io_device 0x2487b8 already registered (old:0x82bda5600 new:0x82bda5780) 00:03:28.653 passed 
00:03:28.653 Test: unregister_during_reset ...passed 00:03:28.653 Test: event_notify_and_close ...passed 00:03:28.653 Suite: bdev_wrong_thread 00:03:28.653 Test: spdk_bdev_register_wt ...passed 00:03:28.653 Test: spdk_bdev_examine_wt ...passed[2024-02-14 19:04:05.881960] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8360:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x82bd6e700 (0x82bd6e700) 00:03:28.653 [2024-02-14 19:04:05.881999] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 794:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x82bd6e700 (0x82bd6e700) 00:03:28.653 00:03:28.653 00:03:28.653 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.653 suites 2 2 n/a 0 0 00:03:28.653 tests 23 23 23 0 0 00:03:28.653 asserts 601 601 601 0 n/a 00:03:28.653 00:03:28.653 Elapsed time = 0.031 seconds 00:03:28.653 00:03:28.653 real 0m1.593s 00:03:28.653 user 0m1.326s 00:03:28.653 sys 0m0.240s 00:03:28.653 19:04:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:28.653 19:04:05 -- common/autotest_common.sh@10 -- # set +x 00:03:28.653 ************************************ 00:03:28.653 END TEST unittest_bdev 00:03:28.653 ************************************ 00:03:28.653 19:04:05 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:28.653 19:04:05 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:28.653 19:04:05 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:28.653 19:04:05 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:28.653 19:04:05 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:03:28.653 19:04:05 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:28.653 19:04:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:28.653 19:04:05 -- common/autotest_common.sh@10 -- # set +x 00:03:28.653 ************************************ 00:03:28.653 START TEST unittest_blob_blobfs 00:03:28.653 ************************************ 00:03:28.653 19:04:05 -- common/autotest_common.sh@1102 -- # unittest_blob 00:03:28.653 19:04:05 -- unit/unittest.sh@38 -- # [[ -e /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:03:28.653 19:04:05 -- unit/unittest.sh@39 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:03:28.653 00:03:28.653 00:03:28.653 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.653 http://cunit.sourceforge.net/ 00:03:28.653 00:03:28.653 00:03:28.653 Suite: blob_nocopy_noextent 00:03:28.653 Test: blob_init ...[2024-02-14 19:04:05.940922] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:28.653 passed 00:03:28.653 Test: blob_thin_provision ...passed 00:03:28.653 Test: blob_read_only ...passed 00:03:28.653 Test: bs_load ...[2024-02-14 19:04:06.060426] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:28.653 passed 00:03:28.912 Test: bs_load_custom_cluster_size ...passed 00:03:28.912 Test: bs_load_after_failed_grow ...passed 00:03:28.912 Test: bs_cluster_sz ...[2024-02-14 19:04:06.097720] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:28.912 [2024-02-14 19:04:06.097810] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:28.912 [2024-02-14 19:04:06.097826] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:28.912 passed 00:03:28.912 Test: bs_resize_md ...passed 00:03:28.912 Test: bs_destroy ...passed 00:03:28.912 Test: bs_type ...passed 00:03:28.912 Test: bs_super_block ...passed 00:03:28.912 Test: bs_test_recover_cluster_count ...passed 00:03:28.912 Test: bs_grow_live ...passed 00:03:28.912 Test: bs_grow_live_no_space ...passed 00:03:28.912 Test: bs_test_grow ...passed 00:03:28.912 Test: blob_serialize_test ...passed 00:03:28.912 Test: super_block_crc ...passed 00:03:28.912 Test: blob_thin_prov_write_count_io ...passed 00:03:28.912 Test: bs_load_iter_test ...passed 00:03:28.912 Test: blob_relations ...[2024-02-14 19:04:06.316454] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:28.912 [2024-02-14 19:04:06.316544] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:28.912 [2024-02-14 19:04:06.316620] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:28.912 [2024-02-14 19:04:06.316628] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:28.912 passed 00:03:29.171 Test: blob_relations2 ...[2024-02-14 19:04:06.336686] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:29.171 [2024-02-14 19:04:06.336735] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:29.171 [2024-02-14 19:04:06.336744] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:29.171 [2024-02-14 19:04:06.336751] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:29.171 [2024-02-14 19:04:06.336897] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:29.171 [2024-02-14 19:04:06.336906] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:29.171 [2024-02-14 19:04:06.336944] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:29.171 [2024-02-14 19:04:06.336951] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:29.171 passed 00:03:29.171 Test: blob_relations3 ...passed 00:03:29.171 Test: blobstore_clean_power_failure ...passed 00:03:29.171 Test: blob_delete_snapshot_power_failure ...[2024-02-14 19:04:06.531249] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 
0x100000001: -5 00:03:29.171 [2024-02-14 19:04:06.541197] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:29.171 [2024-02-14 19:04:06.541249] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:29.171 [2024-02-14 19:04:06.541262] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:29.171 [2024-02-14 19:04:06.551155] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:29.171 [2024-02-14 19:04:06.551198] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:29.171 [2024-02-14 19:04:06.551209] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:29.171 [2024-02-14 19:04:06.551219] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:29.171 [2024-02-14 19:04:06.561194] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:29.171 [2024-02-14 19:04:06.561251] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:29.171 [2024-02-14 19:04:06.571149] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:29.171 [2024-02-14 19:04:06.571196] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:29.171 [2024-02-14 19:04:06.581135] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:29.171 [2024-02-14 19:04:06.581185] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:29.429 passed 00:03:29.429 Test: blob_create_snapshot_power_failure ...[2024-02-14 19:04:06.610647] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:29.429 [2024-02-14 19:04:06.630230] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:29.429 [2024-02-14 19:04:06.640102] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:29.429 passed 00:03:29.429 Test: blob_io_unit ...passed 00:03:29.429 Test: blob_io_unit_compatibility ...passed 00:03:29.429 Test: blob_ext_md_pages ...passed 00:03:29.429 Test: blob_esnap_io_4096_4096 ...passed 00:03:29.429 Test: blob_esnap_io_512_512 ...passed 00:03:29.429 Test: blob_esnap_io_4096_512 ...passed 00:03:29.429 Test: blob_esnap_io_512_4096 ...passed 00:03:29.429 Suite: blob_bs_nocopy_noextent 00:03:29.429 Test: blob_open ...passed 00:03:29.429 Test: blob_create ...[2024-02-14 19:04:06.825628] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:29.429 passed 00:03:29.687 Test: blob_create_loop ...passed 00:03:29.687 Test: blob_create_fail ...[2024-02-14 19:04:06.892376] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:29.687 passed 00:03:29.687 Test: blob_create_internal ...passed 00:03:29.687 Test: blob_create_zero_extent ...passed 00:03:29.687 Test: blob_snapshot ...passed 00:03:29.687 Test: blob_clone ...passed 00:03:29.687 Test: blob_inflate ...[2024-02-14 19:04:07.038203] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:29.687 passed 00:03:29.687 Test: blob_delete ...passed 00:03:29.687 Test: blob_resize_test ...[2024-02-14 19:04:07.093829] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:29.687 passed 00:03:29.958 Test: channel_ops ...passed 00:03:29.958 Test: blob_super ...passed 00:03:29.958 Test: blob_rw_verify_iov ...passed 00:03:29.958 Test: blob_unmap ...passed 00:03:29.958 Test: blob_iter ...passed 00:03:29.958 Test: blob_parse_md ...passed 00:03:29.958 Test: bs_load_pending_removal ...passed 00:03:29.958 Test: bs_unload ...[2024-02-14 19:04:07.318015] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:29.958 passed 00:03:29.958 Test: bs_usable_clusters ...passed 00:03:30.251 Test: blob_crc ...[2024-02-14 19:04:07.374130] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:30.251 [2024-02-14 19:04:07.374207] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:30.251 passed 00:03:30.251 Test: blob_flags ...passed 00:03:30.251 Test: bs_version ...passed 00:03:30.251 Test: blob_set_xattrs_test ...[2024-02-14 19:04:07.458537] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:30.251 [2024-02-14 19:04:07.458597] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:30.251 passed 00:03:30.251 Test: blob_thin_prov_alloc ...passed 00:03:30.251 Test: blob_insert_cluster_msg_test ...passed 00:03:30.251 Test: blob_thin_prov_rw ...passed 00:03:30.251 Test: blob_thin_prov_rle ...passed 00:03:30.251 Test: blob_thin_prov_rw_iov ...passed 00:03:30.251 Test: blob_snapshot_rw ...passed 00:03:30.510 Test: blob_snapshot_rw_iov ...passed 00:03:30.510 Test: blob_inflate_rw ...passed 00:03:30.510 Test: blob_snapshot_freeze_io ...passed 00:03:30.510 Test: blob_operation_split_rw ...passed 00:03:30.510 Test: blob_operation_split_rw_iov ...passed 00:03:30.510 Test: blob_simultaneous_operations ...[2024-02-14 19:04:07.901213] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:30.510 [2024-02-14 19:04:07.901274] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.510 [2024-02-14 19:04:07.901521] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:30.510 [2024-02-14 19:04:07.901539] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: 
Failed to remove blob 00:03:30.510 [2024-02-14 19:04:07.904592] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:30.510 [2024-02-14 19:04:07.904633] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.510 [2024-02-14 19:04:07.904650] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:30.510 [2024-02-14 19:04:07.904657] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:30.510 passed 00:03:30.769 Test: blob_persist_test ...passed 00:03:30.769 Test: blob_decouple_snapshot ...passed 00:03:30.769 Test: blob_seek_io_unit ...passed 00:03:30.769 Test: blob_nested_freezes ...passed 00:03:30.769 Suite: blob_blob_nocopy_noextent 00:03:30.770 Test: blob_write ...passed 00:03:30.770 Test: blob_read ...passed 00:03:30.770 Test: blob_rw_verify ...passed 00:03:30.770 Test: blob_rw_verify_iov_nomem ...passed 00:03:31.028 Test: blob_rw_iov_read_only ...passed 00:03:31.028 Test: blob_xattr ...passed 00:03:31.028 Test: blob_dirty_shutdown ...passed 00:03:31.028 Test: blob_is_degraded ...passed 00:03:31.028 Suite: blob_esnap_bs_nocopy_noextent 00:03:31.028 Test: blob_esnap_create ...passed 00:03:31.028 Test: blob_esnap_thread_add_remove ...passed 00:03:31.028 Test: blob_esnap_clone_snapshot ...passed 00:03:31.028 Test: blob_esnap_clone_inflate ...passed 00:03:31.028 Test: blob_esnap_clone_decouple ...passed 00:03:31.028 Test: blob_esnap_clone_reload ...passed 00:03:31.287 Test: blob_esnap_hotplug ...passed 00:03:31.287 Suite: blob_nocopy_extent 00:03:31.287 Test: blob_init ...[2024-02-14 19:04:08.469084] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:31.287 passed 00:03:31.287 Test: blob_thin_provision ...passed 00:03:31.287 Test: blob_read_only ...passed 00:03:31.287 Test: bs_load ...[2024-02-14 19:04:08.506694] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:31.287 passed 00:03:31.287 Test: bs_load_custom_cluster_size ...passed 00:03:31.287 Test: bs_load_after_failed_grow ...passed 00:03:31.287 Test: bs_cluster_sz ...[2024-02-14 19:04:08.525561] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:31.287 [2024-02-14 19:04:08.525618] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:03:31.287 [2024-02-14 19:04:08.525628] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:31.287 passed 00:03:31.287 Test: bs_resize_md ...passed 00:03:31.287 Test: bs_destroy ...passed 00:03:31.287 Test: bs_type ...passed 00:03:31.287 Test: bs_super_block ...passed 00:03:31.287 Test: bs_test_recover_cluster_count ...passed 00:03:31.287 Test: bs_grow_live ...passed 00:03:31.287 Test: bs_grow_live_no_space ...passed 00:03:31.287 Test: bs_test_grow ...passed 00:03:31.287 Test: blob_serialize_test ...passed 00:03:31.287 Test: super_block_crc ...passed 00:03:31.287 Test: blob_thin_prov_write_count_io ...passed 00:03:31.287 Test: bs_load_iter_test ...passed 00:03:31.287 Test: blob_relations ...[2024-02-14 19:04:08.638845] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:31.287 [2024-02-14 19:04:08.638923] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.287 [2024-02-14 19:04:08.638993] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:31.287 [2024-02-14 19:04:08.639002] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.287 passed 00:03:31.287 Test: blob_relations2 ...[2024-02-14 19:04:08.649074] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:31.287 [2024-02-14 19:04:08.649104] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.287 [2024-02-14 19:04:08.649112] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:31.287 [2024-02-14 19:04:08.649118] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.287 [2024-02-14 19:04:08.649199] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:31.287 [2024-02-14 19:04:08.649208] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.287 [2024-02-14 19:04:08.649237] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:31.287 [2024-02-14 19:04:08.649245] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.287 passed 00:03:31.287 Test: blob_relations3 ...passed 00:03:31.547 Test: blobstore_clean_power_failure ...passed 00:03:31.547 Test: blob_delete_snapshot_power_failure ...[2024-02-14 19:04:08.781170] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:31.547 [2024-02-14 19:04:08.790683] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:31.547 [2024-02-14 19:04:08.800108] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:31.547 [2024-02-14 
19:04:08.800143] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:31.547 [2024-02-14 19:04:08.800151] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.547 [2024-02-14 19:04:08.809441] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:31.547 [2024-02-14 19:04:08.809492] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:31.547 [2024-02-14 19:04:08.809500] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:31.547 [2024-02-14 19:04:08.809507] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.547 [2024-02-14 19:04:08.818895] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:31.547 [2024-02-14 19:04:08.818924] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:31.547 [2024-02-14 19:04:08.818931] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:31.547 [2024-02-14 19:04:08.818938] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.547 [2024-02-14 19:04:08.828287] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:31.547 [2024-02-14 19:04:08.828310] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.547 [2024-02-14 19:04:08.837555] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:31.547 [2024-02-14 19:04:08.837579] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.547 [2024-02-14 19:04:08.846865] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:31.547 [2024-02-14 19:04:08.846894] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:31.547 passed 00:03:31.547 Test: blob_create_snapshot_power_failure ...[2024-02-14 19:04:08.874821] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:31.547 [2024-02-14 19:04:08.884265] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:31.547 [2024-02-14 19:04:08.902830] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:31.547 [2024-02-14 19:04:08.912470] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:31.547 passed 00:03:31.547 Test: blob_io_unit ...passed 00:03:31.547 Test: blob_io_unit_compatibility ...passed 00:03:31.807 Test: blob_ext_md_pages ...passed 00:03:31.807 Test: blob_esnap_io_4096_4096 ...passed 00:03:31.807 Test: 
blob_esnap_io_512_512 ...passed 00:03:31.807 Test: blob_esnap_io_4096_512 ...passed 00:03:31.807 Test: blob_esnap_io_512_4096 ...passed 00:03:31.807 Suite: blob_bs_nocopy_extent 00:03:31.807 Test: blob_open ...passed 00:03:31.807 Test: blob_create ...[2024-02-14 19:04:09.092992] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:31.807 passed 00:03:31.807 Test: blob_create_loop ...passed 00:03:31.807 Test: blob_create_fail ...[2024-02-14 19:04:09.161415] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:31.807 passed 00:03:31.807 Test: blob_create_internal ...passed 00:03:32.067 Test: blob_create_zero_extent ...passed 00:03:32.067 Test: blob_snapshot ...passed 00:03:32.067 Test: blob_clone ...passed 00:03:32.067 Test: blob_inflate ...[2024-02-14 19:04:09.307711] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:32.067 passed 00:03:32.067 Test: blob_delete ...passed 00:03:32.067 Test: blob_resize_test ...[2024-02-14 19:04:09.364154] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:32.067 passed 00:03:32.067 Test: channel_ops ...passed 00:03:32.067 Test: blob_super ...passed 00:03:32.067 Test: blob_rw_verify_iov ...passed 00:03:32.326 Test: blob_unmap ...passed 00:03:32.326 Test: blob_iter ...passed 00:03:32.326 Test: blob_parse_md ...passed 00:03:32.326 Test: bs_load_pending_removal ...passed 00:03:32.326 Test: bs_unload ...[2024-02-14 19:04:09.588309] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:32.326 passed 00:03:32.326 Test: bs_usable_clusters ...passed 00:03:32.326 Test: blob_crc ...[2024-02-14 19:04:09.643866] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:32.326 [2024-02-14 19:04:09.643923] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:32.326 passed 00:03:32.326 Test: blob_flags ...passed 00:03:32.326 Test: bs_version ...passed 00:03:32.326 Test: blob_set_xattrs_test ...[2024-02-14 19:04:09.727924] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:32.326 [2024-02-14 19:04:09.727982] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:32.326 passed 00:03:32.585 Test: blob_thin_prov_alloc ...passed 00:03:32.585 Test: blob_insert_cluster_msg_test ...passed 00:03:32.585 Test: blob_thin_prov_rw ...passed 00:03:32.585 Test: blob_thin_prov_rle ...passed 00:03:32.585 Test: blob_thin_prov_rw_iov ...passed 00:03:32.585 Test: blob_snapshot_rw ...passed 00:03:32.585 Test: blob_snapshot_rw_iov ...passed 00:03:32.585 Test: blob_inflate_rw ...passed 00:03:32.845 Test: blob_snapshot_freeze_io ...passed 00:03:32.845 Test: blob_operation_split_rw ...passed 00:03:32.845 Test: blob_operation_split_rw_iov ...passed 00:03:32.845 Test: blob_simultaneous_operations ...[2024-02-14 19:04:10.141033] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:32.845 [2024-02-14 19:04:10.141098] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.845 [2024-02-14 19:04:10.141333] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:32.845 [2024-02-14 19:04:10.141352] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.845 [2024-02-14 19:04:10.144349] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:32.845 [2024-02-14 19:04:10.144379] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.845 [2024-02-14 19:04:10.144394] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:32.845 [2024-02-14 19:04:10.144401] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.845 passed 00:03:32.845 Test: blob_persist_test ...passed 00:03:32.845 Test: blob_decouple_snapshot ...passed 00:03:32.845 Test: blob_seek_io_unit ...passed 00:03:33.104 Test: blob_nested_freezes ...passed 00:03:33.104 Suite: blob_blob_nocopy_extent 00:03:33.104 Test: blob_write ...passed 00:03:33.104 Test: blob_read ...passed 00:03:33.104 Test: blob_rw_verify ...passed 00:03:33.105 Test: blob_rw_verify_iov_nomem ...passed 00:03:33.105 Test: blob_rw_iov_read_only ...passed 00:03:33.105 Test: blob_xattr ...passed 00:03:33.105 Test: blob_dirty_shutdown ...passed 00:03:33.105 Test: blob_is_degraded ...passed 00:03:33.105 Suite: blob_esnap_bs_nocopy_extent 00:03:33.364 Test: blob_esnap_create ...passed 00:03:33.364 Test: blob_esnap_thread_add_remove ...passed 00:03:33.364 Test: blob_esnap_clone_snapshot ...passed 00:03:33.364 Test: blob_esnap_clone_inflate ...passed 00:03:33.364 Test: blob_esnap_clone_decouple ...passed 00:03:33.364 Test: blob_esnap_clone_reload ...passed 00:03:33.364 Test: blob_esnap_hotplug ...passed 00:03:33.364 Suite: blob_copy_noextent 00:03:33.364 Test: blob_init ...[2024-02-14 19:04:10.697974] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:33.364 passed 00:03:33.364 Test: blob_thin_provision ...passed 00:03:33.364 Test: blob_read_only ...passed 00:03:33.364 Test: bs_load ...[2024-02-14 19:04:10.735076] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:33.364 passed 00:03:33.364 Test: bs_load_custom_cluster_size ...passed 00:03:33.364 Test: bs_load_after_failed_grow ...passed 00:03:33.364 Test: bs_cluster_sz ...[2024-02-14 19:04:10.753886] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:33.364 [2024-02-14 19:04:10.753934] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:03:33.364 [2024-02-14 19:04:10.753946] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:33.364 passed 00:03:33.364 Test: bs_resize_md ...passed 00:03:33.624 Test: bs_destroy ...passed 00:03:33.624 Test: bs_type ...passed 00:03:33.624 Test: bs_super_block ...passed 00:03:33.624 Test: bs_test_recover_cluster_count ...passed 00:03:33.624 Test: bs_grow_live ...passed 00:03:33.624 Test: bs_grow_live_no_space ...passed 00:03:33.624 Test: bs_test_grow ...passed 00:03:33.624 Test: blob_serialize_test ...passed 00:03:33.624 Test: super_block_crc ...passed 00:03:33.624 Test: blob_thin_prov_write_count_io ...passed 00:03:33.624 Test: bs_load_iter_test ...passed 00:03:33.624 Test: blob_relations ...[2024-02-14 19:04:10.866636] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.624 [2024-02-14 19:04:10.866697] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.624 [2024-02-14 19:04:10.866755] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.624 [2024-02-14 19:04:10.866764] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.624 passed 00:03:33.624 Test: blob_relations2 ...[2024-02-14 19:04:10.876692] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.624 [2024-02-14 19:04:10.876726] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.624 [2024-02-14 19:04:10.876734] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.624 [2024-02-14 19:04:10.876741] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.624 [2024-02-14 19:04:10.876820] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.624 [2024-02-14 19:04:10.876830] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.624 [2024-02-14 19:04:10.876861] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:33.624 [2024-02-14 19:04:10.876868] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.624 passed 00:03:33.624 Test: blob_relations3 ...passed 00:03:33.624 Test: blobstore_clean_power_failure ...passed 00:03:33.624 Test: blob_delete_snapshot_power_failure ...[2024-02-14 19:04:11.008082] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:33.624 [2024-02-14 19:04:11.017523] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:33.624 [2024-02-14 19:04:11.017573] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:33.624 [2024-02-14 
19:04:11.017582] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.624 [2024-02-14 19:04:11.027070] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:33.624 [2024-02-14 19:04:11.027117] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:33.624 [2024-02-14 19:04:11.027125] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:33.624 [2024-02-14 19:04:11.027133] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.624 [2024-02-14 19:04:11.036639] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:33.624 [2024-02-14 19:04:11.036665] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.883 [2024-02-14 19:04:11.046344] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:33.883 [2024-02-14 19:04:11.046389] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.883 [2024-02-14 19:04:11.055969] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:33.883 [2024-02-14 19:04:11.056026] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.883 passed 00:03:33.883 Test: blob_create_snapshot_power_failure ...[2024-02-14 19:04:11.084429] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:33.883 [2024-02-14 19:04:11.103153] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:33.883 [2024-02-14 19:04:11.112458] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:33.883 passed 00:03:33.883 Test: blob_io_unit ...passed 00:03:33.883 Test: blob_io_unit_compatibility ...passed 00:03:33.883 Test: blob_ext_md_pages ...passed 00:03:33.883 Test: blob_esnap_io_4096_4096 ...passed 00:03:33.883 Test: blob_esnap_io_512_512 ...passed 00:03:33.883 Test: blob_esnap_io_4096_512 ...passed 00:03:33.883 Test: blob_esnap_io_512_4096 ...passed 00:03:33.883 Suite: blob_bs_copy_noextent 00:03:33.883 Test: blob_open ...passed 00:03:33.883 Test: blob_create ...[2024-02-14 19:04:11.291200] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:34.143 passed 00:03:34.143 Test: blob_create_loop ...passed 00:03:34.143 Test: blob_create_fail ...[2024-02-14 19:04:11.357671] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:34.143 passed 00:03:34.143 Test: blob_create_internal ...passed 00:03:34.143 Test: blob_create_zero_extent ...passed 00:03:34.143 Test: blob_snapshot ...passed 00:03:34.143 Test: blob_clone ...passed 00:03:34.143 
Test: blob_inflate ...[2024-02-14 19:04:11.500500] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:34.143 passed 00:03:34.143 Test: blob_delete ...passed 00:03:34.143 Test: blob_resize_test ...[2024-02-14 19:04:11.554831] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:34.402 passed 00:03:34.402 Test: channel_ops ...passed 00:03:34.402 Test: blob_super ...passed 00:03:34.402 Test: blob_rw_verify_iov ...passed 00:03:34.402 Test: blob_unmap ...passed 00:03:34.402 Test: blob_iter ...passed 00:03:34.402 Test: blob_parse_md ...passed 00:03:34.402 Test: bs_load_pending_removal ...passed 00:03:34.402 Test: bs_unload ...[2024-02-14 19:04:11.777450] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:34.402 passed 00:03:34.402 Test: bs_usable_clusters ...passed 00:03:34.661 Test: blob_crc ...[2024-02-14 19:04:11.833029] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:34.661 [2024-02-14 19:04:11.833080] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:34.661 passed 00:03:34.661 Test: blob_flags ...passed 00:03:34.661 Test: bs_version ...passed 00:03:34.661 Test: blob_set_xattrs_test ...[2024-02-14 19:04:11.916804] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:34.661 [2024-02-14 19:04:11.916860] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:34.661 passed 00:03:34.661 Test: blob_thin_prov_alloc ...passed 00:03:34.661 Test: blob_insert_cluster_msg_test ...passed 00:03:34.661 Test: blob_thin_prov_rw ...passed 00:03:34.661 Test: blob_thin_prov_rle ...passed 00:03:34.661 Test: blob_thin_prov_rw_iov ...passed 00:03:34.920 Test: blob_snapshot_rw ...passed 00:03:34.920 Test: blob_snapshot_rw_iov ...passed 00:03:34.920 Test: blob_inflate_rw ...passed 00:03:34.920 Test: blob_snapshot_freeze_io ...passed 00:03:34.920 Test: blob_operation_split_rw ...passed 00:03:34.920 Test: blob_operation_split_rw_iov ...passed 00:03:34.920 Test: blob_simultaneous_operations ...[2024-02-14 19:04:12.334437] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:34.920 [2024-02-14 19:04:12.334497] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.920 [2024-02-14 19:04:12.334738] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:34.920 [2024-02-14 19:04:12.334757] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.920 [2024-02-14 19:04:12.336689] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:34.920 [2024-02-14 19:04:12.336720] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.920 [2024-02-14 
19:04:12.336736] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:34.920 [2024-02-14 19:04:12.336744] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.179 passed 00:03:35.179 Test: blob_persist_test ...passed 00:03:35.179 Test: blob_decouple_snapshot ...passed 00:03:35.179 Test: blob_seek_io_unit ...passed 00:03:35.179 Test: blob_nested_freezes ...passed 00:03:35.179 Suite: blob_blob_copy_noextent 00:03:35.179 Test: blob_write ...passed 00:03:35.179 Test: blob_read ...passed 00:03:35.179 Test: blob_rw_verify ...passed 00:03:35.179 Test: blob_rw_verify_iov_nomem ...passed 00:03:35.439 Test: blob_rw_iov_read_only ...passed 00:03:35.439 Test: blob_xattr ...passed 00:03:35.439 Test: blob_dirty_shutdown ...passed 00:03:35.439 Test: blob_is_degraded ...passed 00:03:35.439 Suite: blob_esnap_bs_copy_noextent 00:03:35.439 Test: blob_esnap_create ...passed 00:03:35.439 Test: blob_esnap_thread_add_remove ...passed 00:03:35.439 Test: blob_esnap_clone_snapshot ...passed 00:03:35.439 Test: blob_esnap_clone_inflate ...passed 00:03:35.439 Test: blob_esnap_clone_decouple ...passed 00:03:35.698 Test: blob_esnap_clone_reload ...passed 00:03:35.698 Test: blob_esnap_hotplug ...passed 00:03:35.698 Suite: blob_copy_extent 00:03:35.698 Test: blob_init ...[2024-02-14 19:04:12.894300] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:35.698 passed 00:03:35.698 Test: blob_thin_provision ...passed 00:03:35.698 Test: blob_read_only ...passed 00:03:35.698 Test: bs_load ...[2024-02-14 19:04:12.931009] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:35.698 passed 00:03:35.698 Test: bs_load_custom_cluster_size ...passed 00:03:35.698 Test: bs_load_after_failed_grow ...passed 00:03:35.698 Test: bs_cluster_sz ...[2024-02-14 19:04:12.949480] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:35.698 [2024-02-14 19:04:12.949522] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:03:35.698 [2024-02-14 19:04:12.949533] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:35.698 passed 00:03:35.698 Test: bs_resize_md ...passed 00:03:35.698 Test: bs_destroy ...passed 00:03:35.698 Test: bs_type ...passed 00:03:35.698 Test: bs_super_block ...passed 00:03:35.698 Test: bs_test_recover_cluster_count ...passed 00:03:35.698 Test: bs_grow_live ...passed 00:03:35.698 Test: bs_grow_live_no_space ...passed 00:03:35.698 Test: bs_test_grow ...passed 00:03:35.698 Test: blob_serialize_test ...passed 00:03:35.698 Test: super_block_crc ...passed 00:03:35.698 Test: blob_thin_prov_write_count_io ...passed 00:03:35.698 Test: bs_load_iter_test ...passed 00:03:35.698 Test: blob_relations ...[2024-02-14 19:04:13.060175] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:35.698 [2024-02-14 19:04:13.060230] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.698 [2024-02-14 19:04:13.060290] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:35.698 [2024-02-14 19:04:13.060298] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.698 passed 00:03:35.698 Test: blob_relations2 ...[2024-02-14 19:04:13.070006] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:35.698 [2024-02-14 19:04:13.070047] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.698 [2024-02-14 19:04:13.070054] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:35.698 [2024-02-14 19:04:13.070060] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.698 [2024-02-14 19:04:13.070143] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:35.698 [2024-02-14 19:04:13.070152] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.698 [2024-02-14 19:04:13.070180] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:35.698 [2024-02-14 19:04:13.070187] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.698 passed 00:03:35.698 Test: blob_relations3 ...passed 00:03:35.957 Test: blobstore_clean_power_failure ...passed 00:03:35.957 Test: blob_delete_snapshot_power_failure ...[2024-02-14 19:04:13.200625] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:35.957 [2024-02-14 19:04:13.210039] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:35.957 [2024-02-14 19:04:13.219334] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:35.957 [2024-02-14 
19:04:13.219377] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:35.957 [2024-02-14 19:04:13.219385] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.957 [2024-02-14 19:04:13.228647] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:35.957 [2024-02-14 19:04:13.228679] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:35.957 [2024-02-14 19:04:13.228686] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:35.957 [2024-02-14 19:04:13.228693] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.957 [2024-02-14 19:04:13.237894] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:35.957 [2024-02-14 19:04:13.237913] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:35.957 [2024-02-14 19:04:13.237920] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:35.957 [2024-02-14 19:04:13.237926] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.957 [2024-02-14 19:04:13.247238] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:35.957 [2024-02-14 19:04:13.247264] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.957 [2024-02-14 19:04:13.256708] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:35.957 [2024-02-14 19:04:13.256732] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.957 [2024-02-14 19:04:13.266068] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:35.957 [2024-02-14 19:04:13.266094] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.957 passed 00:03:35.957 Test: blob_create_snapshot_power_failure ...[2024-02-14 19:04:13.293710] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:35.957 [2024-02-14 19:04:13.302956] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:35.957 [2024-02-14 19:04:13.321530] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:35.957 [2024-02-14 19:04:13.330814] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:35.957 passed 00:03:35.957 Test: blob_io_unit ...passed 00:03:36.216 Test: blob_io_unit_compatibility ...passed 00:03:36.216 Test: blob_ext_md_pages ...passed 00:03:36.216 Test: blob_esnap_io_4096_4096 ...passed 00:03:36.216 Test: 
blob_esnap_io_512_512 ...passed 00:03:36.216 Test: blob_esnap_io_4096_512 ...passed 00:03:36.216 Test: blob_esnap_io_512_4096 ...passed 00:03:36.216 Suite: blob_bs_copy_extent 00:03:36.216 Test: blob_open ...passed 00:03:36.216 Test: blob_create ...[2024-02-14 19:04:13.508264] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:36.216 passed 00:03:36.216 Test: blob_create_loop ...passed 00:03:36.216 Test: blob_create_fail ...[2024-02-14 19:04:13.576488] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:36.216 passed 00:03:36.216 Test: blob_create_internal ...passed 00:03:36.475 Test: blob_create_zero_extent ...passed 00:03:36.475 Test: blob_snapshot ...passed 00:03:36.475 Test: blob_clone ...passed 00:03:36.475 Test: blob_inflate ...[2024-02-14 19:04:13.756056] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:36.475 passed 00:03:36.475 Test: blob_delete ...passed 00:03:36.475 Test: blob_resize_test ...[2024-02-14 19:04:13.865485] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:36.475 passed 00:03:36.734 Test: channel_ops ...passed 00:03:36.734 Test: blob_super ...passed 00:03:36.734 Test: blob_rw_verify_iov ...passed 00:03:36.734 Test: blob_unmap ...passed 00:03:36.994 Test: blob_iter ...passed 00:03:36.994 Test: blob_parse_md ...passed 00:03:36.994 Test: bs_load_pending_removal ...passed 00:03:36.994 Test: bs_unload ...[2024-02-14 19:04:14.304229] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:36.994 passed 00:03:36.994 Test: bs_usable_clusters ...passed 00:03:37.252 Test: blob_crc ...[2024-02-14 19:04:14.413139] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:37.252 [2024-02-14 19:04:14.413218] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:37.252 passed 00:03:37.252 Test: blob_flags ...passed 00:03:37.253 Test: bs_version ...passed 00:03:37.253 Test: blob_set_xattrs_test ...[2024-02-14 19:04:14.577834] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:37.253 [2024-02-14 19:04:14.577924] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:37.253 passed 00:03:37.253 Test: blob_thin_prov_alloc ...passed 00:03:37.511 Test: blob_insert_cluster_msg_test ...passed 00:03:37.511 Test: blob_thin_prov_rw ...passed 00:03:37.511 Test: blob_thin_prov_rle ...passed 00:03:37.511 Test: blob_thin_prov_rw_iov ...passed 00:03:37.770 Test: blob_snapshot_rw ...passed 00:03:37.770 Test: blob_snapshot_rw_iov ...passed 00:03:37.770 Test: blob_inflate_rw ...passed 00:03:37.770 Test: blob_snapshot_freeze_io ...passed 00:03:38.029 Test: blob_operation_split_rw ...passed 00:03:38.029 Test: blob_operation_split_rw_iov ...passed 00:03:38.030 Test: blob_simultaneous_operations ...[2024-02-14 19:04:15.345463] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:38.030 [2024-02-14 19:04:15.345547] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.030 [2024-02-14 19:04:15.346010] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:38.030 [2024-02-14 19:04:15.346037] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.030 [2024-02-14 19:04:15.350085] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:38.030 [2024-02-14 19:04:15.350129] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.030 [2024-02-14 19:04:15.350159] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:38.030 [2024-02-14 19:04:15.350172] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.030 passed 00:03:38.030 Test: blob_persist_test ...passed 00:03:38.332 Test: blob_decouple_snapshot ...passed 00:03:38.332 Test: blob_seek_io_unit ...passed 00:03:38.332 Test: blob_nested_freezes ...passed 00:03:38.332 Suite: blob_blob_copy_extent 00:03:38.332 Test: blob_write ...passed 00:03:38.332 Test: blob_read ...passed 00:03:38.591 Test: blob_rw_verify ...passed 00:03:38.591 Test: blob_rw_verify_iov_nomem ...passed 00:03:38.591 Test: blob_rw_iov_read_only ...passed 00:03:38.591 Test: blob_xattr ...passed 00:03:38.850 Test: blob_dirty_shutdown ...passed 00:03:38.850 Test: blob_is_degraded ...passed 00:03:38.850 Suite: blob_esnap_bs_copy_extent 00:03:38.850 Test: blob_esnap_create ...passed 00:03:38.850 Test: blob_esnap_thread_add_remove ...passed 00:03:38.850 Test: blob_esnap_clone_snapshot ...passed 00:03:39.109 Test: blob_esnap_clone_inflate ...passed 00:03:39.109 Test: blob_esnap_clone_decouple ...passed 00:03:39.109 Test: blob_esnap_clone_reload ...passed 00:03:39.109 Test: blob_esnap_hotplug ...passed 00:03:39.109 00:03:39.109 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.109 suites 16 16 n/a 0 0 00:03:39.109 tests 348 348 348 0 0 00:03:39.109 asserts 92605 92605 92605 0 n/a 00:03:39.109 00:03:39.109 Elapsed time = 10.516 seconds 00:03:39.109 19:04:16 -- unit/unittest.sh@41 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:03:39.109 00:03:39.109 00:03:39.109 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.109 http://cunit.sourceforge.net/ 00:03:39.109 00:03:39.109 00:03:39.109 Suite: blob_bdev 00:03:39.109 Test: create_bs_dev ...passed 00:03:39.109 Test: create_bs_dev_ro ...passed 00:03:39.109 Test: create_bs_dev_rw ...passed 00:03:39.109 Test: claim_bs_dev ...passed 00:03:39.109 Test: claim_bs_dev_ro ...passed 00:03:39.109 Test: deferred_destroy_refs ...passed 00:03:39.109 Test: deferred_destroy_channels ...passed 00:03:39.109 Test: deferred_destroy_threads ...passed 00:03:39.109 00:03:39.109 [2024-02-14 19:04:16.465180] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:03:39.109 [2024-02-14 19:04:16.465582] 
/usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:03:39.109 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.109 suites 1 1 n/a 0 0 00:03:39.109 tests 8 8 8 0 0 00:03:39.109 asserts 119 119 119 0 n/a 00:03:39.109 00:03:39.109 Elapsed time = 0.000 seconds 00:03:39.109 19:04:16 -- unit/unittest.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:03:39.109 00:03:39.109 00:03:39.109 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.109 http://cunit.sourceforge.net/ 00:03:39.109 00:03:39.109 00:03:39.109 Suite: tree 00:03:39.109 Test: blobfs_tree_op_test ...passed 00:03:39.109 00:03:39.109 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.109 suites 1 1 n/a 0 0 00:03:39.109 tests 1 1 1 0 0 00:03:39.109 asserts 27 27 27 0 n/a 00:03:39.109 00:03:39.109 Elapsed time = 0.000 seconds 00:03:39.109 19:04:16 -- unit/unittest.sh@43 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:03:39.109 00:03:39.109 00:03:39.109 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.109 http://cunit.sourceforge.net/ 00:03:39.109 00:03:39.109 00:03:39.109 Suite: blobfs_async_ut 00:03:39.368 Test: fs_init ...passed 00:03:39.368 Test: fs_open ...passed 00:03:39.368 Test: fs_create ...passed 00:03:39.368 Test: fs_truncate ...passed 00:03:39.368 Test: fs_rename ...[2024-02-14 19:04:16.621574] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:03:39.368 passed 00:03:39.368 Test: fs_rw_async ...passed 00:03:39.368 Test: fs_writev_readv_async ...passed 00:03:39.368 Test: tree_find_buffer_ut ...passed 00:03:39.368 Test: channel_ops ...passed 00:03:39.368 Test: channel_ops_sync ...passed 00:03:39.368 00:03:39.368 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.368 suites 1 1 n/a 0 0 00:03:39.368 tests 10 10 10 0 0 00:03:39.368 asserts 292 292 292 0 n/a 00:03:39.368 00:03:39.368 Elapsed time = 0.219 seconds 00:03:39.368 19:04:16 -- unit/unittest.sh@45 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:03:39.368 00:03:39.368 00:03:39.368 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.368 http://cunit.sourceforge.net/ 00:03:39.368 00:03:39.368 00:03:39.368 Suite: blobfs_sync_ut 00:03:39.369 Test: cache_read_after_write ...[2024-02-14 19:04:16.775857] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:03:39.369 passed 00:03:39.628 Test: file_length ...passed 00:03:39.628 Test: append_write_to_extend_blob ...passed 00:03:39.628 Test: partial_buffer ...passed 00:03:39.628 Test: cache_write_null_buffer ...passed 00:03:39.628 Test: fs_create_sync ...passed 00:03:39.628 Test: fs_rename_sync ...passed 00:03:39.628 Test: cache_append_no_cache ...passed 00:03:39.628 Test: fs_delete_file_without_close ...passed 00:03:39.628 00:03:39.628 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.628 suites 1 1 n/a 0 0 00:03:39.628 tests 9 9 9 0 0 00:03:39.628 asserts 345 345 345 0 n/a 00:03:39.628 00:03:39.628 Elapsed time = 0.438 seconds 00:03:39.628 19:04:16 -- unit/unittest.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:03:39.628 00:03:39.628 00:03:39.628 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.628 http://cunit.sourceforge.net/ 
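[editor's note — not part of the captured output] Each *_ut binary invoked by unittest.sh above is a standalone CUnit program, which is why every one prints the same CUnit banner and "Run Summary" table, and why the *ERROR* lines in between are expected output from negative-path assertions rather than test failures. A minimal sketch of that harness, assuming plain CUnit (the suite and test names here are placeholders, not taken from this log):

    #include <CUnit/Basic.h>

    static void
    test_example(void)
    {
            CU_ASSERT(1 + 1 == 2);
    }

    int
    main(void)
    {
            CU_pSuite suite;
            unsigned int num_failures;

            if (CU_initialize_registry() != CUE_SUCCESS) {
                    return CU_get_error();
            }
            suite = CU_add_suite("example_suite", NULL, NULL);
            CU_add_test(suite, "test_example", test_example);

            CU_basic_set_mode(CU_BRM_VERBOSE);   /* emits the per-test lines and the Run Summary table */
            CU_basic_run_tests();
            num_failures = CU_get_number_of_failures();
            CU_cleanup_registry();
            return num_failures;
    }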
00:03:39.628 00:03:39.628 00:03:39.628 Suite: blobfs_bdev_ut 00:03:39.628 Test: spdk_blobfs_bdev_detect_test ...passed 00:03:39.628 Test: spdk_blobfs_bdev_create_test ...passed 00:03:39.628 Test: spdk_blobfs_bdev_mount_test ...passed 00:03:39.628 00:03:39.628 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.628 suites 1 1 n/a 0 0 00:03:39.628 tests 3 3 3 0 0 00:03:39.628 asserts 9 9 9 0 n/a 00:03:39.628 00:03:39.629 Elapsed time = 0.000 seconds 00:03:39.629 [2024-02-14 19:04:16.942543] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:39.629 [2024-02-14 19:04:16.942905] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:39.629 00:03:39.629 real 0m11.012s 00:03:39.629 user 0m10.999s 00:03:39.629 sys 0m0.230s 00:03:39.629 19:04:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.629 ************************************ 00:03:39.629 END TEST unittest_blob_blobfs 00:03:39.629 ************************************ 00:03:39.629 19:04:16 -- common/autotest_common.sh@10 -- # set +x 00:03:39.629 19:04:16 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:03:39.629 19:04:16 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:39.629 19:04:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:39.629 19:04:16 -- common/autotest_common.sh@10 -- # set +x 00:03:39.629 ************************************ 00:03:39.629 START TEST unittest_event 00:03:39.629 ************************************ 00:03:39.629 19:04:16 -- common/autotest_common.sh@1102 -- # unittest_event 00:03:39.629 19:04:16 -- unit/unittest.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:03:39.629 00:03:39.629 00:03:39.629 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.629 http://cunit.sourceforge.net/ 00:03:39.629 00:03:39.629 00:03:39.629 Suite: app_suite 00:03:39.629 Test: test_spdk_app_parse_args ...app_ut [options] 00:03:39.629 options: 00:03:39.629 -c, --config JSON config file (default none) 00:03:39.629 --json JSON config file (default none) 00:03:39.629 --json-ignore-init-errors 00:03:39.629 don't exit on invalid config entry 00:03:39.629 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:39.629 -g, --single-file-segments 00:03:39.629 force creating just one hugetlbfs file 00:03:39.629 -h, --help show this usage 00:03:39.629 -i, --shm-id shared memory ID (optional) 00:03:39.629 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:03:39.629 app_ut: invalid option -- z 00:03:39.629 --lcores lcore to CPU mapping list. The list is in the format: 00:03:39.629 [<,lcores[@CPUs]>...] 00:03:39.629 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:39.629 Within the group, '-' is used for range separator, 00:03:39.629 ',' is used for single number separator. 
00:03:39.629 '( )' can be omitted for single element group, 00:03:39.629 '@' can be omitted if cpus and lcores have the same value 00:03:39.629 -n, --mem-channels channel number of memory channels used for DPDK 00:03:39.629 -p, --main-core main (primary) core for DPDK 00:03:39.629 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:39.629 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:39.629 --disable-cpumask-locks Disable CPU core lock files. 00:03:39.629 --silence-noticelog disable notice level logging to stderr 00:03:39.629 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:39.629 -u, --no-pci disable PCI access 00:03:39.629 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:39.629 --max-delay maximum reactor delay (in microseconds) 00:03:39.629 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:39.629 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:39.629 -R, --huge-unlink unlink huge files after initialization 00:03:39.629 -v, --version print SPDK version 00:03:39.629 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:39.629 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:39.629 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:39.629 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:03:39.629 Tracepoints vary in size and can use more than one trace entry. 00:03:39.629 --rpcs-allowed comma-separated list of permitted RPCS 00:03:39.629 --env-context Opaque context for use of the env implementation 00:03:39.629 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:39.629 --no-huge run without using hugepages 00:03:39.629 -L, --logflag enable log flag (all, json_util, rpc, thread, trace) 00:03:39.629 -e, --tpoint-group [:] 00:03:39.629 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:03:39.629 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:03:39.629 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:03:39.629 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:03:39.629 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:03:39.629 app_ut [options] 00:03:39.629 options: 00:03:39.629 -c, --config JSON config file (default none) 00:03:39.629 --json JSON config file (default none) 00:03:39.629 --json-ignore-init-errors 00:03:39.629 don't exit on invalid config entry 00:03:39.629 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:39.629 -g, --single-file-segments 00:03:39.629 force creating just one hugetlbfs file 00:03:39.629 -h, --help show this usage 00:03:39.629 -i, --shm-id shared memory ID (optional) 00:03:39.629 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:03:39.629 --lcores lcore to CPU mapping list. The list is in the format: 00:03:39.629 [<,lcores[@CPUs]>...] 00:03:39.629 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:39.629 Within the group, '-' is used for range separator, 00:03:39.629 ',' is used for single number separator. 
00:03:39.629 '( )' can be omitted for single element group, 00:03:39.629 '@' can be omitted if cpus and lcores have the same value 00:03:39.629 -n, --mem-channels channel number of memory channels used for DPDK 00:03:39.629 -p, --main-core main (primary) core for DPDK 00:03:39.629 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:39.629 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:39.629 --disable-cpumask-locks Disable CPU core lock files. 00:03:39.629 --silence-noticelog disable notice level logging to stderr 00:03:39.629 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:39.629 -u, --no-pci disable PCI access 00:03:39.629 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:39.629 --max-delay maximum reactor delay (in microseconds) 00:03:39.629 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:39.629 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:39.629 -R, --huge-unlink unlink huge files after initialization 00:03:39.629 -v, --version print SPDK version 00:03:39.629 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:39.629 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:39.629 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:39.629 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:03:39.629 Tracepoints vary in size and can use more than one trace entry. 00:03:39.629 --rpcs-allowed comma-separated list of permitted RPCS 00:03:39.629 --env-context Opaque context for use of the env implementation 00:03:39.629 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:39.629 --no-huge run without using hugepages 00:03:39.629 -L, --logflag enable log flag (all, json_util, rpc, thread, trace) 00:03:39.629 -e, --tpoint-group [:] 00:03:39.629 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:03:39.629 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:03:39.629 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:03:39.629 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:03:39.629 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:03:39.629 app_ut: unrecognized option `--test-long-opt' 00:03:39.629 [2024-02-14 19:04:16.992365] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1029:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:03:39.629 app_ut [options] 00:03:39.629 options: 00:03:39.629 -c, --config JSON config file (default none) 00:03:39.629 --json JSON config file (default none) 00:03:39.629 --json-ignore-init-errors 00:03:39.629 don't exit on invalid config entry 00:03:39.629 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:39.629 -g, --single-file-segments 00:03:39.629 force creating just one hugetlbfs file 00:03:39.629 -h, --help show this usage 00:03:39.629 -i, --shm-id shared memory ID (optional) 00:03:39.629 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:03:39.629 --lcores lcore to CPU mapping list. 
The list is in the format: 00:03:39.629 [<,lcores[@CPUs]>...] 00:03:39.629 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:39.629 Within the group, '-' is used for range separator, 00:03:39.629 ',' is used for single number separator. 00:03:39.629 '( )' can be omitted for single element group, 00:03:39.630 '@' can be omitted if cpus and lcores have the same value 00:03:39.630 -n, --mem-channels channel number of memory channels used for DPDK 00:03:39.630 -p, --main-core main (primary) core for DPDK 00:03:39.630 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:39.630 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:39.630 --disable-cpumask-locks Disable CPU core lock files. 00:03:39.630 --silence-noticelog disable notice level logging to stderr 00:03:39.630 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:39.630 -u, --no-pci disable PCI access 00:03:39.630 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:39.630 --max-delay maximum reactor delay (in microseconds) 00:03:39.630 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:39.630 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:39.630 -R, --huge-unlink unlink huge files after initialization 00:03:39.630 -v, --version print SPDK version 00:03:39.630 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:39.630 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:39.630 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:39.630 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:03:39.630 Tracepoints vary in size and can use more than one trace entry. 00:03:39.630 --rpcs-allowed comma-separated list of permitted RPCS 00:03:39.630 --env-context Opaque context for use of the env implementation 00:03:39.630 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:39.630 --no-huge run without using hugepages 00:03:39.630 -L, --logflag enable log flag (all, json_util, rpc, thread, trace) 00:03:39.630 -e, --tpoint-group [:] 00:03:39.630 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:03:39.630 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:03:39.630 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:03:39.630 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:03:39.630 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:03:39.630 passed 00:03:39.630 00:03:39.630 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.630 suites 1 1 n/a 0 0 00:03:39.630 tests 1 1 1 0 0 00:03:39.630 asserts 8 8 8 0 n/a 00:03:39.630 00:03:39.630 Elapsed time = 0.000 seconds 00:03:39.630 [2024-02-14 19:04:16.992686] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1209:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:03:39.630 [2024-02-14 19:04:16.992795] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1114:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:03:39.630 19:04:16 -- unit/unittest.sh@51 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:03:39.630 00:03:39.630 00:03:39.630 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.630 http://cunit.sourceforge.net/ 00:03:39.630 00:03:39.630 00:03:39.630 Suite: app_suite 00:03:39.630 Test: test_create_reactor ...passed 00:03:39.630 Test: test_init_reactors ...passed 00:03:39.630 Test: test_event_call ...passed 00:03:39.630 Test: test_schedule_thread ...passed 00:03:39.630 Test: test_reschedule_thread ...passed 00:03:39.630 Test: test_bind_thread ...passed 00:03:39.630 Test: test_for_each_reactor ...passed 00:03:39.630 Test: test_reactor_stats ...passed 00:03:39.630 Test: test_scheduler ...passed 00:03:39.630 Test: test_governor ...passed 00:03:39.630 00:03:39.630 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.630 suites 1 1 n/a 0 0 00:03:39.630 tests 10 10 10 0 0 00:03:39.630 asserts 336 336 336 0 n/a 00:03:39.630 00:03:39.630 Elapsed time = 0.008 seconds 00:03:39.630 00:03:39.630 real 0m0.020s 00:03:39.630 user 0m0.007s 00:03:39.630 sys 0m0.015s 00:03:39.630 19:04:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.630 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:39.630 ************************************ 00:03:39.630 END TEST unittest_event 00:03:39.630 ************************************ 00:03:39.630 19:04:17 -- unit/unittest.sh@233 -- # uname -s 00:03:39.630 19:04:17 -- unit/unittest.sh@233 -- # '[' FreeBSD = Linux ']' 00:03:39.630 19:04:17 -- unit/unittest.sh@237 -- # run_test unittest_accel /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:39.630 19:04:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:39.630 19:04:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:39.630 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:39.890 ************************************ 00:03:39.890 START TEST unittest_accel 00:03:39.890 ************************************ 00:03:39.890 19:04:17 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:39.890 00:03:39.890 00:03:39.890 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.890 http://cunit.sourceforge.net/ 00:03:39.890 00:03:39.890 00:03:39.890 Suite: accel_sequence 00:03:39.890 Test: test_sequence_fill_copy ...passed 00:03:39.890 Test: test_sequence_abort ...passed 00:03:39.890 Test: test_sequence_append_error ...passed 00:03:39.890 Test: test_sequence_completion_error ...[2024-02-14 19:04:17.054876] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1927:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 
0x82d2a0180 00:03:39.890 passed 00:03:39.890 Test: test_sequence_decompress ...[2024-02-14 19:04:17.055240] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1927:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x82d2a0180 00:03:39.890 [2024-02-14 19:04:17.055260] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1837:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x82d2a0180 00:03:39.890 [2024-02-14 19:04:17.055274] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1837:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x82d2a0180 00:03:39.890 passed 00:03:39.890 Test: test_sequence_reverse ...passed 00:03:39.890 Test: test_sequence_copy_elision ...passed 00:03:39.890 Test: test_sequence_accel_buffers ...passed 00:03:39.890 Test: test_sequence_memory_domain ...[2024-02-14 19:04:17.056564] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1729:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:03:39.890 [2024-02-14 19:04:17.056609] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1768:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:03:39.890 passed 00:03:39.890 Test: test_sequence_module_memory_domain ...passed 00:03:39.890 Test: test_sequence_crypto ...passed 00:03:39.890 Test: test_sequence_driver ...passed 00:03:39.890 Test: test_sequence_same_iovs ...[2024-02-14 19:04:17.057206] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1876:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x82d2a0140 using driver: ut 00:03:39.890 [2024-02-14 19:04:17.057235] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1941:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82d2a0140 through driver: ut 00:03:39.890 passed 00:03:39.890 Test: test_sequence_crc32 ...passed 00:03:39.890 Suite: accel 00:03:39.890 Test: test_spdk_accel_task_complete ...passed 00:03:39.890 Test: test_get_task ...passed 00:03:39.890 Test: test_spdk_accel_submit_copy ...passed 00:03:39.890 Test: test_spdk_accel_submit_dualcast ...passed 00:03:39.890 Test: test_spdk_accel_submit_compare ...passed 00:03:39.890 Test: test_spdk_accel_submit_fill ...passed 00:03:39.890 Test: test_spdk_accel_submit_crc32c ...passed 00:03:39.890 Test: test_spdk_accel_submit_crc32cv ...passed 00:03:39.890 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:03:39.890 Test: test_spdk_accel_submit_xor ...passed 00:03:39.890 Test: test_spdk_accel_module_find_by_name ...passed 00:03:39.890 Test: test_spdk_accel_module_register ...[2024-02-14 19:04:17.057789] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:39.890 [2024-02-14 19:04:17.057823] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:39.890 passed 00:03:39.890 00:03:39.890 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.890 suites 2 2 n/a 0 0 00:03:39.890 tests 26 26 26 0 0 00:03:39.890 asserts 831 831 831 0 n/a 00:03:39.890 00:03:39.890 Elapsed time = 0.008 seconds 00:03:39.890 00:03:39.890 real 0m0.014s 00:03:39.890 user 0m0.014s 00:03:39.890 sys 0m0.000s 00:03:39.890 19:04:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.890 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:39.890 ************************************ 00:03:39.890 END TEST 
unittest_accel 00:03:39.890 ************************************ 00:03:39.890 19:04:17 -- unit/unittest.sh@238 -- # run_test unittest_ioat /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:39.890 19:04:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:39.890 19:04:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:39.890 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:39.890 ************************************ 00:03:39.890 START TEST unittest_ioat 00:03:39.890 ************************************ 00:03:39.890 19:04:17 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:39.890 00:03:39.890 00:03:39.891 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.891 http://cunit.sourceforge.net/ 00:03:39.891 00:03:39.891 00:03:39.891 Suite: ioat 00:03:39.891 Test: ioat_state_check ...passed 00:03:39.891 00:03:39.891 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.891 suites 1 1 n/a 0 0 00:03:39.891 tests 1 1 1 0 0 00:03:39.891 asserts 32 32 32 0 n/a 00:03:39.891 00:03:39.891 Elapsed time = 0.000 seconds 00:03:39.891 00:03:39.891 real 0m0.006s 00:03:39.891 user 0m0.005s 00:03:39.891 sys 0m0.004s 00:03:39.891 19:04:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.891 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:39.891 ************************************ 00:03:39.891 END TEST unittest_ioat 00:03:39.891 ************************************ 00:03:39.891 19:04:17 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:39.891 19:04:17 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:39.891 19:04:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:39.891 19:04:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:39.891 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:39.891 ************************************ 00:03:39.891 START TEST unittest_idxd_user 00:03:39.891 ************************************ 00:03:39.891 19:04:17 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:39.891 00:03:39.891 00:03:39.891 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.891 http://cunit.sourceforge.net/ 00:03:39.891 00:03:39.891 00:03:39.891 Suite: idxd_user 00:03:39.891 Test: test_idxd_wait_cmd ...passed 00:03:39.891 Test: test_idxd_reset_dev ...passed 00:03:39.891 Test: test_idxd_group_config ...passed 00:03:39.891 Test: test_idxd_wq_config ...passed 00:03:39.891 00:03:39.891 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.891 suites 1 1 n/a 0 0 00:03:39.891 tests 4 4 4 0 0 00:03:39.891 asserts 20 20 20 0 n/a 00:03:39.891 00:03:39.891 Elapsed time = 0.000 seconds 00:03:39.891 [2024-02-14 19:04:17.152512] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:39.891 [2024-02-14 19:04:17.152724] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:03:39.891 [2024-02-14 19:04:17.152745] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:39.891 [2024-02-14 19:04:17.152753] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error 
resetting device 4294967274 00:03:39.891 00:03:39.891 real 0m0.006s 00:03:39.891 user 0m0.006s 00:03:39.891 sys 0m0.000s 00:03:39.891 19:04:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.891 ************************************ 00:03:39.891 END TEST unittest_idxd_user 00:03:39.891 ************************************ 00:03:39.891 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:39.891 19:04:17 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:03:39.891 19:04:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:39.891 19:04:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:39.891 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:39.891 ************************************ 00:03:39.891 START TEST unittest_iscsi 00:03:39.891 ************************************ 00:03:39.891 19:04:17 -- common/autotest_common.sh@1102 -- # unittest_iscsi 00:03:39.891 19:04:17 -- unit/unittest.sh@66 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:03:39.891 00:03:39.891 00:03:39.891 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.891 http://cunit.sourceforge.net/ 00:03:39.891 00:03:39.891 00:03:39.891 Suite: conn_suite 00:03:39.891 Test: read_task_split_in_order_case ...passed 00:03:39.891 Test: read_task_split_reverse_order_case ...passed 00:03:39.891 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:03:39.891 Test: process_non_read_task_completion_test ...passed 00:03:39.891 Test: free_tasks_on_connection ...passed 00:03:39.891 Test: free_tasks_with_queued_datain ...passed 00:03:39.891 Test: abort_queued_datain_task_test ...passed 00:03:39.891 Test: abort_queued_datain_tasks_test ...passed 00:03:39.891 00:03:39.891 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.891 suites 1 1 n/a 0 0 00:03:39.891 tests 8 8 8 0 0 00:03:39.891 asserts 230 230 230 0 n/a 00:03:39.891 00:03:39.891 Elapsed time = 0.000 seconds 00:03:39.891 19:04:17 -- unit/unittest.sh@67 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:03:39.891 00:03:39.891 00:03:39.891 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.891 http://cunit.sourceforge.net/ 00:03:39.891 00:03:39.891 00:03:39.891 Suite: iscsi_suite 00:03:39.891 Test: param_negotiation_test ...passed 00:03:39.891 Test: list_negotiation_test ...passed 00:03:39.891 Test: parse_valid_test ...passed 00:03:39.891 Test: parse_invalid_test ...[2024-02-14 19:04:17.196776] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:03:39.891 passed 00:03:39.891 00:03:39.891 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.891 suites 1 1 n/a 0 0 00:03:39.891 tests 4 4 4 0 0 00:03:39.891 asserts 161 161 161 0 n/a 00:03:39.891 00:03:39.891 Elapsed time = 0.000 seconds 00:03:39.891 [2024-02-14 19:04:17.197000] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:03:39.891 [2024-02-14 19:04:17.197016] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:03:39.891 [2024-02-14 19:04:17.197040] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:03:39.891 [2024-02-14 19:04:17.197054] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:03:39.891 [2024-02-14 19:04:17.197066] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name 
length is bigger than 63 00:03:39.891 [2024-02-14 19:04:17.197077] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:03:39.891 19:04:17 -- unit/unittest.sh@68 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:03:39.891 00:03:39.891 00:03:39.891 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.891 http://cunit.sourceforge.net/ 00:03:39.891 00:03:39.891 00:03:39.891 Suite: iscsi_target_node_suite 00:03:39.891 Test: add_lun_test_cases ...[2024-02-14 19:04:17.203754] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1249:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:03:39.891 [2024-02-14 19:04:17.204607] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:03:39.891 [2024-02-14 19:04:17.204644] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:39.891 [2024-02-14 19:04:17.204661] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:39.891 [2024-02-14 19:04:17.204676] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:03:39.891 passed 00:03:39.891 Test: allow_any_allowed ...passed 00:03:39.891 Test: allow_ipv6_allowed ...passed 00:03:39.891 Test: allow_ipv6_denied ...passed 00:03:39.891 Test: allow_ipv6_invalid ...passed 00:03:39.891 Test: allow_ipv4_allowed ...passed 00:03:39.891 Test: allow_ipv4_denied ...passed 00:03:39.891 Test: allow_ipv4_invalid ...passed 00:03:39.891 Test: node_access_allowed ...passed 00:03:39.891 Test: node_access_denied_by_empty_netmask ...passed 00:03:39.891 Test: node_access_multi_initiator_groups_cases ...passed 00:03:39.891 Test: allow_iscsi_name_multi_maps_case ...passed 00:03:39.891 Test: chap_param_test_cases ...[2024-02-14 19:04:17.205525] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:03:39.891 [2024-02-14 19:04:17.205603] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:03:39.891 [2024-02-14 19:04:17.205649] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:03:39.891 [2024-02-14 19:04:17.205684] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:03:39.891 passed 00:03:39.891 00:03:39.891 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.891 suites 1 1 n/a 0 0 00:03:39.891 tests 13 13 13 0 0 00:03:39.891 asserts 50 50 50 0 n/a 00:03:39.891 00:03:39.891 Elapsed time = 0.000 seconds 00:03:39.891 [2024-02-14 19:04:17.205750] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:03:39.891 19:04:17 -- unit/unittest.sh@69 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:03:39.891 00:03:39.891 00:03:39.891 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.891 http://cunit.sourceforge.net/ 00:03:39.891 00:03:39.891 00:03:39.891 Suite: iscsi_suite 00:03:39.891 Test: op_login_check_target_test ...[2024-02-14 19:04:17.215447] 
/usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:03:39.891 passed 00:03:39.891 Test: op_login_session_normal_test ...[2024-02-14 19:04:17.215833] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:39.891 [2024-02-14 19:04:17.215860] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:39.891 [2024-02-14 19:04:17.215881] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:39.891 [2024-02-14 19:04:17.215935] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:03:39.892 [2024-02-14 19:04:17.215956] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:39.892 [2024-02-14 19:04:17.215997] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:03:39.892 [2024-02-14 19:04:17.216035] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:39.892 passed 00:03:39.892 Test: maxburstlength_test ...passed 00:03:39.892 Test: underflow_for_read_transfer_test ...passed 00:03:39.892 Test: underflow_for_zero_read_transfer_test ...passed 00:03:39.892 Test: underflow_for_request_sense_test ...passed 00:03:39.892 Test: underflow_for_check_condition_test ...passed 00:03:39.892 Test: add_transfer_task_test ...passed 00:03:39.892 Test: get_transfer_task_test ...passed 00:03:39.892 Test: del_transfer_task_test ...passed 00:03:39.892 Test: clear_all_transfer_tasks_test ...[2024-02-14 19:04:17.216123] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:39.892 [2024-02-14 19:04:17.216151] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4551:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:03:39.892 passed 00:03:39.892 Test: build_iovs_test ...passed 00:03:39.892 Test: build_iovs_with_md_test ...passed 00:03:39.892 Test: pdu_hdr_op_login_test ...[2024-02-14 19:04:17.216414] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:03:39.892 [2024-02-14 19:04:17.216438] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1259:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:03:39.892 [2024-02-14 19:04:17.216458] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:03:39.892 passed 00:03:39.892 Test: pdu_hdr_op_text_test ...passed 00:03:39.892 Test: pdu_hdr_op_logout_test ...passed 00:03:39.892 Test: pdu_hdr_op_scsi_test ...[2024-02-14 19:04:17.216483] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2241:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:39.892 [2024-02-14 19:04:17.216501] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:03:39.892 [2024-02-14 19:04:17.216520] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2286:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 
5679, and the current itt is 5678... 00:03:39.892 [2024-02-14 19:04:17.216543] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2517:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:03:39.892 [2024-02-14 19:04:17.216569] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:39.892 [2024-02-14 19:04:17.216586] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:39.892 [2024-02-14 19:04:17.216603] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:03:39.892 [2024-02-14 19:04:17.216630] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3398:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:39.892 [2024-02-14 19:04:17.216650] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3405:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:03:39.892 passed 00:03:39.892 Test: pdu_hdr_op_task_mgmt_test ...passed 00:03:39.892 Test: pdu_hdr_op_nopout_test ...[2024-02-14 19:04:17.216670] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:03:39.892 [2024-02-14 19:04:17.216693] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:03:39.892 [2024-02-14 19:04:17.216720] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:03:39.892 [2024-02-14 19:04:17.216746] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:03:39.892 [2024-02-14 19:04:17.216764] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:39.892 [2024-02-14 19:04:17.216781] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:39.892 [2024-02-14 19:04:17.216797] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:03:39.892 passed 00:03:39.892 Test: pdu_hdr_op_data_test ...passed 00:03:39.892 Test: empty_text_with_cbit_test ...passed 00:03:39.892 Test: pdu_payload_read_test ...[2024-02-14 19:04:17.216819] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:03:39.892 [2024-02-14 19:04:17.216837] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:03:39.892 [2024-02-14 19:04:17.216855] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:39.892 [2024-02-14 19:04:17.216873] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:03:39.892 [2024-02-14 19:04:17.216892] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:03:39.892 [2024-02-14 19:04:17.216909] 
/usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:03:39.892 [2024-02-14 19:04:17.216927] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4245:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:03:39.892 [2024-02-14 19:04:17.217543] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4632:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:03:39.892 passed 00:03:39.892 Test: data_out_pdu_sequence_test ...passed 00:03:39.892 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:03:39.892 00:03:39.892 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.892 suites 1 1 n/a 0 0 00:03:39.892 tests 24 24 24 0 0 00:03:39.892 asserts 150253 150253 150253 0 n/a 00:03:39.892 00:03:39.892 Elapsed time = 0.008 seconds 00:03:39.892 19:04:17 -- unit/unittest.sh@70 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:03:39.892 00:03:39.892 00:03:39.892 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.892 http://cunit.sourceforge.net/ 00:03:39.892 00:03:39.892 00:03:39.892 Suite: init_grp_suite 00:03:39.892 Test: create_initiator_group_success_case ...passed 00:03:39.892 Test: find_initiator_group_success_case ...passed 00:03:39.892 Test: register_initiator_group_twice_case ...passed 00:03:39.892 Test: add_initiator_name_success_case ...passed 00:03:39.892 Test: add_initiator_name_fail_case ...[2024-02-14 19:04:17.227366] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:03:39.892 passed 00:03:39.892 Test: delete_all_initiator_names_success_case ...passed 00:03:39.892 Test: add_netmask_success_case ...passed 00:03:39.892 Test: add_netmask_fail_case ...passed 00:03:39.892 Test: delete_all_netmasks_success_case ...passed 00:03:39.892 Test: initiator_name_overwrite_all_to_any_case ...passed 00:03:39.892 Test: netmask_overwrite_all_to_any_case ...passed 00:03:39.892 Test: add_delete_initiator_names_case ...passed 00:03:39.892 Test: add_duplicated_initiator_names_case ...passed 00:03:39.892 Test: delete_nonexisting_initiator_names_case ...passed 00:03:39.892 Test: add_delete_netmasks_case ...passed 00:03:39.892 Test: add_duplicated_netmasks_case ...passed 00:03:39.892 Test: delete_nonexisting_netmasks_case ...passed 00:03:39.892 00:03:39.892 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.892 suites 1 1 n/a 0 0 00:03:39.892 tests 17 17 17 0 0 00:03:39.892 asserts 108 108 108 0 n/a 00:03:39.892 00:03:39.892 Elapsed time = 0.000 seconds 00:03:39.892 [2024-02-14 19:04:17.227739] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:03:39.892 19:04:17 -- unit/unittest.sh@71 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:03:39.892 00:03:39.892 00:03:39.892 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.892 http://cunit.sourceforge.net/ 00:03:39.892 00:03:39.892 00:03:39.892 Suite: portal_grp_suite 00:03:39.892 Test: portal_create_ipv4_normal_case ...passed 00:03:39.892 Test: portal_create_ipv6_normal_case ...passed 00:03:39.892 Test: portal_create_ipv4_wildcard_case ...passed 00:03:39.892 Test: portal_create_ipv6_wildcard_case ...passed 00:03:39.892 Test: portal_create_twice_case ...[2024-02-14 19:04:17.235316] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal 
(192.168.2.0, 3260) already exists 00:03:39.892 passed 00:03:39.892 Test: portal_grp_register_unregister_case ...passed 00:03:39.892 Test: portal_grp_register_twice_case ...passed 00:03:39.892 Test: portal_grp_add_delete_case ...passed 00:03:39.892 Test: portal_grp_add_delete_twice_case ...passed 00:03:39.892 00:03:39.892 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.892 suites 1 1 n/a 0 0 00:03:39.892 tests 9 9 9 0 0 00:03:39.892 asserts 44 44 44 0 n/a 00:03:39.892 00:03:39.892 Elapsed time = 0.000 seconds 00:03:39.892 00:03:39.892 real 0m0.049s 00:03:39.892 user 0m0.036s 00:03:39.892 sys 0m0.025s 00:03:39.892 19:04:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.892 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:39.892 ************************************ 00:03:39.892 END TEST unittest_iscsi 00:03:39.892 ************************************ 00:03:39.892 19:04:17 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:03:39.892 19:04:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:39.892 19:04:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:39.892 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:39.892 ************************************ 00:03:39.892 START TEST unittest_json 00:03:39.892 ************************************ 00:03:39.892 19:04:17 -- common/autotest_common.sh@1102 -- # unittest_json 00:03:39.892 19:04:17 -- unit/unittest.sh@75 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:03:39.892 00:03:39.892 00:03:39.892 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.892 http://cunit.sourceforge.net/ 00:03:39.892 00:03:39.892 00:03:39.892 Suite: json 00:03:39.892 Test: test_parse_literal ...passed 00:03:39.893 Test: test_parse_string_simple ...passed 00:03:39.893 Test: test_parse_string_control_chars ...passed 00:03:39.893 Test: test_parse_string_utf8 ...passed 00:03:39.893 Test: test_parse_string_escapes_twochar ...passed 00:03:39.893 Test: test_parse_string_escapes_unicode ...passed 00:03:39.893 Test: test_parse_number ...passed 00:03:39.893 Test: test_parse_array ...passed 00:03:39.893 Test: test_parse_object ...passed 00:03:39.893 Test: test_parse_nesting ...passed 00:03:39.893 Test: test_parse_comment ...passed 00:03:39.893 00:03:39.893 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.893 suites 1 1 n/a 0 0 00:03:39.893 tests 11 11 11 0 0 00:03:39.893 asserts 1516 1516 1516 0 n/a 00:03:39.893 00:03:39.893 Elapsed time = 0.000 seconds 00:03:39.893 19:04:17 -- unit/unittest.sh@76 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:03:39.893 00:03:39.893 00:03:39.893 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.893 http://cunit.sourceforge.net/ 00:03:39.893 00:03:39.893 00:03:39.893 Suite: json 00:03:39.893 Test: test_strequal ...passed 00:03:39.893 Test: test_num_to_uint16 ...passed 00:03:39.893 Test: test_num_to_int32 ...passed 00:03:39.893 Test: test_num_to_uint64 ...passed 00:03:39.893 Test: test_decode_object ...passed 00:03:39.893 Test: test_decode_array ...passed 00:03:39.893 Test: test_decode_bool ...passed 00:03:39.893 Test: test_decode_uint16 ...passed 00:03:39.893 Test: test_decode_int32 ...passed 00:03:39.893 Test: test_decode_uint32 ...passed 00:03:39.893 Test: test_decode_uint64 ...passed 00:03:39.893 Test: test_decode_string ...passed 00:03:39.893 Test: test_decode_uuid ...passed 00:03:39.893 Test: test_find ...passed 00:03:39.893 Test: test_find_array 
...passed 00:03:39.893 Test: test_iterating ...passed 00:03:39.893 Test: test_free_object ...passed 00:03:39.893 00:03:39.893 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.893 suites 1 1 n/a 0 0 00:03:39.893 tests 17 17 17 0 0 00:03:39.893 asserts 236 236 236 0 n/a 00:03:39.893 00:03:39.893 Elapsed time = 0.000 seconds 00:03:39.893 19:04:17 -- unit/unittest.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:03:39.893 00:03:39.893 00:03:39.893 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.893 http://cunit.sourceforge.net/ 00:03:39.893 00:03:39.893 00:03:39.893 Suite: json 00:03:39.893 Test: test_write_literal ...passed 00:03:39.893 Test: test_write_string_simple ...passed 00:03:39.893 Test: test_write_string_escapes ...passed 00:03:39.893 Test: test_write_string_utf16le ...passed 00:03:39.893 Test: test_write_number_int32 ...passed 00:03:39.893 Test: test_write_number_uint32 ...passed 00:03:39.893 Test: test_write_number_uint128 ...passed 00:03:39.893 Test: test_write_string_number_uint128 ...passed 00:03:39.893 Test: test_write_number_int64 ...passed 00:03:39.893 Test: test_write_number_uint64 ...passed 00:03:39.893 Test: test_write_number_double ...passed 00:03:39.893 Test: test_write_uuid ...passed 00:03:39.893 Test: test_write_array ...passed 00:03:39.893 Test: test_write_object ...passed 00:03:39.893 Test: test_write_nesting ...passed 00:03:39.893 Test: test_write_val ...passed 00:03:39.893 00:03:39.893 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.893 suites 1 1 n/a 0 0 00:03:39.893 tests 16 16 16 0 0 00:03:39.893 asserts 918 918 918 0 n/a 00:03:39.893 00:03:39.893 Elapsed time = 0.000 seconds 00:03:39.893 19:04:17 -- unit/unittest.sh@78 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:03:39.893 00:03:39.893 00:03:39.893 CUnit - A unit testing framework for C - Version 2.1-3 00:03:39.893 http://cunit.sourceforge.net/ 00:03:39.893 00:03:39.893 00:03:39.893 Suite: jsonrpc 00:03:39.893 Test: test_parse_request ...passed 00:03:39.893 Test: test_parse_request_streaming ...passed 00:03:39.893 00:03:39.893 Run Summary: Type Total Ran Passed Failed Inactive 00:03:39.893 suites 1 1 n/a 0 0 00:03:39.893 tests 2 2 2 0 0 00:03:39.893 asserts 289 289 289 0 n/a 00:03:39.893 00:03:39.893 Elapsed time = 0.000 seconds 00:03:39.893 00:03:39.893 real 0m0.027s 00:03:39.893 user 0m0.008s 00:03:39.893 sys 0m0.020s 00:03:39.893 19:04:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.893 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:39.893 ************************************ 00:03:39.893 END TEST unittest_json 00:03:39.893 ************************************ 00:03:40.152 19:04:17 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:03:40.152 19:04:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:40.152 19:04:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:40.152 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:40.152 ************************************ 00:03:40.152 START TEST unittest_rpc 00:03:40.152 ************************************ 00:03:40.152 19:04:17 -- common/autotest_common.sh@1102 -- # unittest_rpc 00:03:40.152 19:04:17 -- unit/unittest.sh@82 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:03:40.152 00:03:40.152 00:03:40.152 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.152 http://cunit.sourceforge.net/ 00:03:40.152 00:03:40.152 
00:03:40.152 Suite: rpc 00:03:40.152 Test: test_jsonrpc_handler ...passed 00:03:40.152 Test: test_spdk_rpc_is_method_allowed ...passed 00:03:40.152 Test: test_rpc_get_methods ...[2024-02-14 19:04:17.335786] /usr/home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:03:40.152 passed 00:03:40.152 Test: test_rpc_spdk_get_version ...passed 00:03:40.152 Test: test_spdk_rpc_listen_close ...passed 00:03:40.152 Test: test_rpc_run_multiple_servers ...passed 00:03:40.152 00:03:40.152 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.152 suites 1 1 n/a 0 0 00:03:40.152 tests 6 6 6 0 0 00:03:40.152 asserts 23 23 23 0 n/a 00:03:40.152 00:03:40.152 Elapsed time = 0.000 seconds 00:03:40.152 00:03:40.152 real 0m0.006s 00:03:40.152 user 0m0.000s 00:03:40.152 sys 0m0.010s 00:03:40.152 19:04:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:40.152 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:40.152 ************************************ 00:03:40.152 END TEST unittest_rpc 00:03:40.152 ************************************ 00:03:40.152 19:04:17 -- unit/unittest.sh@245 -- # run_test unittest_notify /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:40.152 19:04:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:40.152 19:04:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:40.152 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:40.152 ************************************ 00:03:40.152 START TEST unittest_notify 00:03:40.152 ************************************ 00:03:40.152 19:04:17 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:40.152 00:03:40.152 00:03:40.152 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.152 http://cunit.sourceforge.net/ 00:03:40.152 00:03:40.152 00:03:40.152 Suite: app_suite 00:03:40.152 Test: notify ...passed 00:03:40.152 00:03:40.152 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.152 suites 1 1 n/a 0 0 00:03:40.152 tests 1 1 1 0 0 00:03:40.152 asserts 13 13 13 0 n/a 00:03:40.152 00:03:40.152 Elapsed time = 0.000 seconds 00:03:40.152 00:03:40.152 real 0m0.005s 00:03:40.152 user 0m0.004s 00:03:40.153 sys 0m0.004s 00:03:40.153 19:04:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:40.153 ************************************ 00:03:40.153 END TEST unittest_notify 00:03:40.153 ************************************ 00:03:40.153 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:40.153 19:04:17 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:03:40.153 19:04:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:40.153 19:04:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:40.153 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:40.153 ************************************ 00:03:40.153 START TEST unittest_nvme 00:03:40.153 ************************************ 00:03:40.153 19:04:17 -- common/autotest_common.sh@1102 -- # unittest_nvme 00:03:40.153 19:04:17 -- unit/unittest.sh@86 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:03:40.153 00:03:40.153 00:03:40.153 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.153 http://cunit.sourceforge.net/ 00:03:40.153 00:03:40.153 00:03:40.153 Suite: nvme 00:03:40.153 Test: test_opc_data_transfer ...passed 00:03:40.153 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:03:40.153 Test: 
test_spdk_nvme_transport_id_parse_adrfam ...passed 00:03:40.153 Test: test_trid_parse_and_compare ...[2024-02-14 19:04:17.420482] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:03:40.153 [2024-02-14 19:04:17.420761] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:40.153 [2024-02-14 19:04:17.420781] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1180:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:03:40.153 [2024-02-14 19:04:17.420795] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:40.153 [2024-02-14 19:04:17.420807] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:03:40.153 [2024-02-14 19:04:17.420819] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:40.153 passed 00:03:40.153 Test: test_trid_trtype_str ...passed 00:03:40.153 Test: test_trid_adrfam_str ...passed 00:03:40.153 Test: test_nvme_ctrlr_probe ...[2024-02-14 19:04:17.420932] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:40.153 passed 00:03:40.153 Test: test_spdk_nvme_probe ...passed 00:03:40.153 Test: test_spdk_nvme_connect ...[2024-02-14 19:04:17.420960] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:40.153 [2024-02-14 19:04:17.420973] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:40.153 [2024-02-14 19:04:17.420986] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 813:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:03:40.153 [2024-02-14 19:04:17.420998] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:40.153 [2024-02-14 19:04:17.421020] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:03:40.153 passed 00:03:40.153 Test: test_nvme_ctrlr_probe_internal ...passed 00:03:40.153 Test: test_nvme_init_controllers ...[2024-02-14 19:04:17.421076] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:40.153 [2024-02-14 19:04:17.421089] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:03:40.153 [2024-02-14 19:04:17.421113] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:40.153 [2024-02-14 19:04:17.421124] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:03:40.153 [2024-02-14 19:04:17.421140] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:03:40.153 passed 00:03:40.153 Test: test_nvme_driver_init ...[2024-02-14 19:04:17.421159] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:03:40.153 [2024-02-14 19:04:17.421177] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:40.153 passed 00:03:40.153 Test: 
test_spdk_nvme_detach ...passed 00:03:40.153 Test: test_nvme_completion_poll_cb ...passed 00:03:40.153 Test: test_nvme_user_copy_cmd_complete ...passed 00:03:40.153 Test: test_nvme_allocate_request_null ...[2024-02-14 19:04:17.530563] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:03:40.153 passed 00:03:40.153 Test: test_nvme_allocate_request ...passed 00:03:40.153 Test: test_nvme_free_request ...passed 00:03:40.153 Test: test_nvme_allocate_request_user_copy ...passed 00:03:40.153 Test: test_nvme_robust_mutex_init_shared ...passed 00:03:40.153 Test: test_nvme_request_check_timeout ...passed 00:03:40.153 Test: test_nvme_wait_for_completion ...passed 00:03:40.153 Test: test_spdk_nvme_parse_func ...passed 00:03:40.153 Test: test_spdk_nvme_detach_async ...passed 00:03:40.153 Test: test_nvme_parse_addr ...passed 00:03:40.153 00:03:40.153 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.153 suites 1 1 n/a 0 0 00:03:40.153 tests 25 25 25 0 0 00:03:40.153 asserts 326 326 326 0 n/a 00:03:40.153 00:03:40.153 Elapsed time = 0.000 seconds 00:03:40.153 [2024-02-14 19:04:17.530852] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:03:40.153 19:04:17 -- unit/unittest.sh@87 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:03:40.153 00:03:40.153 00:03:40.153 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.153 http://cunit.sourceforge.net/ 00:03:40.153 00:03:40.153 00:03:40.153 Suite: nvme_ctrlr 00:03:40.153 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-02-14 19:04:17.540361] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.153 passed 00:03:40.153 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-02-14 19:04:17.542006] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.153 passed 00:03:40.153 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-02-14 19:04:17.543210] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.153 passed 00:03:40.153 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-02-14 19:04:17.544401] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.153 passed 00:03:40.153 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-02-14 19:04:17.545601] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.153 [2024-02-14 19:04:17.546733] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-02-14 19:04:17.547838] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-02-14 19:04:17.548945] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:40.153 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-02-14 
19:04:17.551155] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.153 [2024-02-14 19:04:17.553336] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-02-14 19:04:17.554437] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:40.153 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-02-14 19:04:17.556671] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.153 [2024-02-14 19:04:17.557790] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-02-14 19:04:17.560007] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:40.153 Test: test_nvme_ctrlr_init_delay ...[2024-02-14 19:04:17.562268] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.153 passed 00:03:40.153 Test: test_alloc_io_qpair_rr_1 ...[2024-02-14 19:04:17.563426] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.153 [2024-02-14 19:04:17.563470] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5306:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:40.153 [2024-02-14 19:04:17.563487] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:40.153 [2024-02-14 19:04:17.563500] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:40.153 [2024-02-14 19:04:17.563511] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:40.153 passed 00:03:40.153 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:03:40.153 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:03:40.153 Test: test_alloc_io_qpair_wrr_1 ...passed 00:03:40.153 Test: test_alloc_io_qpair_wrr_2 ...passed 00:03:40.153 Test: test_spdk_nvme_ctrlr_update_firmware ...passed 00:03:40.153 Test: test_nvme_ctrlr_fail ...passed 00:03:40.153 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:03:40.153 Test: test_nvme_ctrlr_set_supported_features ...passed 00:03:40.153 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:03:40.153 Test: test_nvme_ctrlr_test_active_ns ...[2024-02-14 19:04:17.563581] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.153 [2024-02-14 19:04:17.563608] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.153 [2024-02-14 19:04:17.563625] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5306:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:40.153 [2024-02-14 19:04:17.563662] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4834:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:03:40.153 [2024-02-14 19:04:17.563676] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4871:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:40.154 [2024-02-14 19:04:17.563688] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4911:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:03:40.154 [2024-02-14 19:04:17.563701] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4871:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:40.154 [2024-02-14 19:04:17.563715] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:03:40.154 [2024-02-14 19:04:17.563771] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.413 passed 00:03:40.413 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:03:40.413 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:03:40.413 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:03:40.413 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-02-14 19:04:17.603905] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.413 passed 00:03:40.413 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-02-14 19:04:17.610473] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.413 passed 00:03:40.413 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-02-14 19:04:17.611601] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.413 [2024-02-14 19:04:17.611622] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:03:40.413 passed 00:03:40.413 Test: test_alloc_io_qpair_fail ...[2024-02-14 19:04:17.612745] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.414 [2024-02-14 19:04:17.612768] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:03:40.414 passed 00:03:40.414 Test: test_nvme_ctrlr_add_remove_process ...passed 00:03:40.414 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:03:40.414 Test: test_nvme_ctrlr_set_state ...passed 00:03:40.414 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-02-14 19:04:17.612806] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:03:40.414 [2024-02-14 19:04:17.612824] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.414 passed 00:03:40.414 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-02-14 19:04:17.617859] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.414 passed 00:03:40.414 Test: test_nvme_ctrlr_ns_mgmt ...[2024-02-14 19:04:17.627789] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.414 passed 00:03:40.414 Test: test_nvme_ctrlr_reset ...[2024-02-14 19:04:17.629034] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.414 passed 00:03:40.414 Test: test_nvme_ctrlr_aer_callback ...[2024-02-14 19:04:17.629246] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.414 passed 00:03:40.414 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-02-14 19:04:17.630512] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.414 passed 00:03:40.414 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:03:40.414 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:03:40.414 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-02-14 19:04:17.632024] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.414 passed 00:03:40.414 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:03:40.414 Test: test_nvme_ctrlr_ana_resize ...[2024-02-14 19:04:17.633293] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.414 passed 00:03:40.414 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:03:40.414 Test: test_nvme_transport_ctrlr_ready ...passed 00:03:40.414 Test: test_nvme_ctrlr_disable ...[2024-02-14 19:04:17.634586] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4015:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:03:40.414 [2024-02-14 19:04:17.634650] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:03:40.414 [2024-02-14 19:04:17.634699] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:40.414 passed 00:03:40.414 00:03:40.414 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.414 suites 1 1 n/a 0 0 00:03:40.414 tests 43 43 43 0 0 00:03:40.414 asserts 10418 10418 10418 0 n/a 00:03:40.414 00:03:40.414 Elapsed time = 0.047 seconds 00:03:40.414 19:04:17 -- unit/unittest.sh@88 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:03:40.414 00:03:40.414 00:03:40.414 CUnit - A 
unit testing framework for C - Version 2.1-3 00:03:40.414 http://cunit.sourceforge.net/ 00:03:40.414 00:03:40.414 00:03:40.414 Suite: nvme_ctrlr_cmd 00:03:40.414 Test: test_get_log_pages ...passed 00:03:40.414 Test: test_set_feature_cmd ...passed 00:03:40.414 Test: test_set_feature_ns_cmd ...passed 00:03:40.414 Test: test_get_feature_cmd ...passed 00:03:40.414 Test: test_get_feature_ns_cmd ...passed 00:03:40.414 Test: test_abort_cmd ...passed 00:03:40.414 Test: test_set_host_id_cmds ...[2024-02-14 19:04:17.648748] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 502:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:03:40.414 passed 00:03:40.414 Test: test_io_cmd_raw_no_payload_build ...passed 00:03:40.414 Test: test_io_raw_cmd ...passed 00:03:40.414 Test: test_io_raw_cmd_with_md ...passed 00:03:40.414 Test: test_namespace_attach ...passed 00:03:40.414 Test: test_namespace_detach ...passed 00:03:40.414 Test: test_namespace_create ...passed 00:03:40.414 Test: test_namespace_delete ...passed 00:03:40.414 Test: test_doorbell_buffer_config ...passed 00:03:40.414 Test: test_format_nvme ...passed 00:03:40.414 Test: test_fw_commit ...passed 00:03:40.414 Test: test_fw_image_download ...passed 00:03:40.414 Test: test_sanitize ...passed 00:03:40.414 Test: test_directive ...passed 00:03:40.414 Test: test_nvme_request_add_abort ...passed 00:03:40.414 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:03:40.414 Test: test_nvme_ctrlr_cmd_identify ...passed 00:03:40.414 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:03:40.414 00:03:40.414 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.414 suites 1 1 n/a 0 0 00:03:40.414 tests 24 24 24 0 0 00:03:40.414 asserts 198 198 198 0 n/a 00:03:40.414 00:03:40.414 Elapsed time = 0.000 seconds 00:03:40.414 19:04:17 -- unit/unittest.sh@89 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:03:40.414 00:03:40.414 00:03:40.414 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.414 http://cunit.sourceforge.net/ 00:03:40.414 00:03:40.414 00:03:40.414 Suite: nvme_ctrlr_cmd 00:03:40.414 Test: test_geometry_cmd ...passed 00:03:40.414 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:03:40.414 00:03:40.414 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.414 suites 1 1 n/a 0 0 00:03:40.414 tests 2 2 2 0 0 00:03:40.414 asserts 7 7 7 0 n/a 00:03:40.414 00:03:40.414 Elapsed time = 0.000 seconds 00:03:40.414 19:04:17 -- unit/unittest.sh@90 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:03:40.414 00:03:40.414 00:03:40.414 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.414 http://cunit.sourceforge.net/ 00:03:40.414 00:03:40.414 00:03:40.414 Suite: nvme 00:03:40.414 Test: test_nvme_ns_construct ...passed 00:03:40.414 Test: test_nvme_ns_uuid ...passed 00:03:40.414 Test: test_nvme_ns_csi ...passed 00:03:40.414 Test: test_nvme_ns_data ...passed 00:03:40.414 Test: test_nvme_ns_set_identify_data ...passed 00:03:40.414 Test: test_spdk_nvme_ns_get_values ...passed 00:03:40.414 Test: test_spdk_nvme_ns_is_active ...passed 00:03:40.414 Test: spdk_nvme_ns_supports ...passed 00:03:40.414 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:03:40.414 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:03:40.414 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:03:40.414 Test: test_nvme_ns_find_id_desc ...passed 00:03:40.414 00:03:40.414 Run Summary: Type Total Ran Passed Failed 
Inactive 00:03:40.414 suites 1 1 n/a 0 0 00:03:40.414 tests 12 12 12 0 0 00:03:40.414 asserts 83 83 83 0 n/a 00:03:40.414 00:03:40.414 Elapsed time = 0.000 seconds 00:03:40.414 19:04:17 -- unit/unittest.sh@91 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:03:40.414 00:03:40.414 00:03:40.414 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.414 http://cunit.sourceforge.net/ 00:03:40.414 00:03:40.414 00:03:40.414 Suite: nvme_ns_cmd 00:03:40.414 Test: split_test ...passed 00:03:40.414 Test: split_test2 ...passed 00:03:40.414 Test: split_test3 ...passed 00:03:40.414 Test: split_test4 ...passed 00:03:40.414 Test: test_nvme_ns_cmd_flush ...passed 00:03:40.414 Test: test_nvme_ns_cmd_dataset_management ...passed 00:03:40.414 Test: test_nvme_ns_cmd_copy ...passed 00:03:40.414 Test: test_io_flags ...[2024-02-14 19:04:17.667509] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:03:40.414 passed 00:03:40.414 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:03:40.414 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:03:40.414 Test: test_nvme_ns_cmd_reservation_register ...passed 00:03:40.414 Test: test_nvme_ns_cmd_reservation_release ...passed 00:03:40.414 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:03:40.414 Test: test_nvme_ns_cmd_reservation_report ...passed 00:03:40.414 Test: test_cmd_child_request ...passed 00:03:40.414 Test: test_nvme_ns_cmd_readv ...passed 00:03:40.414 Test: test_nvme_ns_cmd_read_with_md ...passed 00:03:40.414 Test: test_nvme_ns_cmd_writev ...passed 00:03:40.414 Test: test_nvme_ns_cmd_write_with_md ...passed 00:03:40.414 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:03:40.414 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:03:40.414 Test: test_nvme_ns_cmd_comparev ...passed 00:03:40.414 Test: test_nvme_ns_cmd_compare_and_write ...[2024-02-14 19:04:17.667913] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 288:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:03:40.414 passed 00:03:40.414 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:03:40.414 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:03:40.414 Test: test_nvme_ns_cmd_setup_request ...passed 00:03:40.414 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:03:40.414 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:03:40.414 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:03:40.414 Test: test_nvme_ns_cmd_verify ...passed 00:03:40.414 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:03:40.414 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed[2024-02-14 19:04:17.668045] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:40.414 [2024-02-14 19:04:17.668073] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:40.414 00:03:40.414 00:03:40.415 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.415 suites 1 1 n/a 0 0 00:03:40.415 tests 32 32 32 0 0 00:03:40.415 asserts 550 550 550 0 n/a 00:03:40.415 00:03:40.415 Elapsed time = 0.000 seconds 00:03:40.415 19:04:17 -- unit/unittest.sh@92 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:03:40.415 00:03:40.415 00:03:40.415 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.415 http://cunit.sourceforge.net/ 00:03:40.415 00:03:40.415 00:03:40.415 
Suite: nvme_ns_cmd 00:03:40.415 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:03:40.415 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:03:40.415 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:03:40.415 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:03:40.415 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:03:40.415 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:03:40.415 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:03:40.415 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:03:40.415 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:03:40.415 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:03:40.415 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:03:40.415 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:03:40.415 00:03:40.415 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.415 suites 1 1 n/a 0 0 00:03:40.415 tests 12 12 12 0 0 00:03:40.415 asserts 123 123 123 0 n/a 00:03:40.415 00:03:40.415 Elapsed time = 0.000 seconds 00:03:40.415 19:04:17 -- unit/unittest.sh@93 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:03:40.415 00:03:40.415 00:03:40.415 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.415 http://cunit.sourceforge.net/ 00:03:40.415 00:03:40.415 00:03:40.415 Suite: nvme_qpair 00:03:40.415 Test: test3 ...passed 00:03:40.415 Test: test_ctrlr_failed ...passed 00:03:40.415 Test: struct_packing ...passed 00:03:40.415 Test: test_nvme_qpair_process_completions ...[2024-02-14 19:04:17.682315] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:40.415 passed 00:03:40.415 Test: test_nvme_completion_is_retry ...passed 00:03:40.415 Test: test_get_status_string ...passed 00:03:40.415 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:03:40.415 Test: test_nvme_qpair_submit_request ...passed 00:03:40.415 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:03:40.415 Test: test_nvme_qpair_manual_complete_request ...passed 00:03:40.415 Test: test_nvme_qpair_init_deinit ...[2024-02-14 19:04:17.682582] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:40.415 [2024-02-14 19:04:17.682651] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:03:40.415 [2024-02-14 19:04:17.682667] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:03:40.415 passed 00:03:40.415 Test: test_nvme_get_sgl_print_info ...passed 00:03:40.415 00:03:40.415 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.415 suites 1 1 n/a 0 0 00:03:40.415 tests 12 12 12 0 0 00:03:40.415 asserts 154 154 154 0 n/a 00:03:40.415 00:03:40.415 Elapsed time = 0.000 seconds[2024-02-14 19:04:17.682754] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:40.415 00:03:40.415 19:04:17 -- unit/unittest.sh@94 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:03:40.415 00:03:40.415 00:03:40.415 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.415 
http://cunit.sourceforge.net/ 00:03:40.415 00:03:40.415 00:03:40.415 Suite: nvme_pcie 00:03:40.415 Test: test_prp_list_append ...[2024-02-14 19:04:17.688855] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:40.415 passed 00:03:40.415 Test: test_nvme_pcie_hotplug_monitor ...passed 00:03:40.415 Test: test_shadow_doorbell_update ...passed 00:03:40.415 Test: test_build_contig_hw_sgl_request ...passed 00:03:40.415 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:03:40.415 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:03:40.415 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:03:40.415 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:03:40.415 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:03:40.415 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:03:40.415 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:03:40.415 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-02-14 19:04:17.689112] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:03:40.415 [2024-02-14 19:04:17.689141] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:03:40.415 [2024-02-14 19:04:17.689201] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:40.415 [2024-02-14 19:04:17.689233] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:40.415 [2024-02-14 19:04:17.689326] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:40.415 [2024-02-14 19:04:17.689359] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:03:40.415 [2024-02-14 19:04:17.689387] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:03:40.415 passed 00:03:40.415 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:03:40.415 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:03:40.415 00:03:40.415 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.415 suites 1 1 n/a 0 0 00:03:40.415 tests 14 14 14 0 0 00:03:40.415 asserts 235 235 235 0 n/a 00:03:40.415 00:03:40.415 Elapsed time = 0.000 seconds 00:03:40.415 [2024-02-14 19:04:17.689411] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:03:40.415 [2024-02-14 19:04:17.689433] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:03:40.415 19:04:17 -- unit/unittest.sh@95 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:03:40.415 00:03:40.415 00:03:40.415 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.415 http://cunit.sourceforge.net/ 00:03:40.415 00:03:40.415 00:03:40.415 Suite: nvme_ns_cmd 00:03:40.415 Test: nvme_poll_group_create_test ...passed 00:03:40.415 Test: nvme_poll_group_add_remove_test ...passed 00:03:40.415 Test: nvme_poll_group_process_completions ...passed 00:03:40.415 Test: nvme_poll_group_destroy_test ...passed 00:03:40.415 Test: nvme_poll_group_get_free_stats ...passed 00:03:40.415 00:03:40.415 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.415 suites 1 1 n/a 0 0 00:03:40.415 tests 5 5 5 0 0 00:03:40.415 asserts 75 75 75 0 n/a 00:03:40.415 00:03:40.415 Elapsed time = 0.000 seconds 00:03:40.415 19:04:17 -- unit/unittest.sh@96 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:03:40.415 00:03:40.415 00:03:40.415 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.415 http://cunit.sourceforge.net/ 00:03:40.415 00:03:40.415 00:03:40.415 Suite: nvme_quirks 00:03:40.415 Test: test_nvme_quirks_striping ...passed 00:03:40.415 00:03:40.415 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.415 suites 1 1 n/a 0 0 00:03:40.415 tests 1 1 1 0 0 00:03:40.415 asserts 5 5 5 0 n/a 00:03:40.415 00:03:40.415 Elapsed time = 0.000 seconds 00:03:40.415 19:04:17 -- unit/unittest.sh@97 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:03:40.415 00:03:40.415 00:03:40.415 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.415 http://cunit.sourceforge.net/ 00:03:40.415 00:03:40.415 00:03:40.415 Suite: nvme_tcp 00:03:40.415 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:03:40.415 Test: test_nvme_tcp_build_iovs ...passed 00:03:40.415 Test: test_nvme_tcp_build_sgl_request ...[2024-02-14 19:04:17.706936] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 782:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x820b79c50, and the iovcnt=16, remaining_size=28672 00:03:40.415 passed 00:03:40.415 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:03:40.415 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:03:40.415 Test: test_nvme_tcp_req_complete_safe ...passed 00:03:40.415 Test: test_nvme_tcp_req_get ...passed 00:03:40.415 Test: test_nvme_tcp_req_init ...passed 00:03:40.415 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:03:40.415 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:03:40.415 Test: test_nvme_tcp_qpair_set_recv_state ...passed 
00:03:40.415 Test: test_nvme_tcp_alloc_reqs ...[2024-02-14 19:04:17.707791] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b7b7c0 is same with the state(6) to be set 00:03:40.415 passed 00:03:40.415 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:03:40.415 Test: test_nvme_tcp_pdu_ch_handle ...[2024-02-14 19:04:17.707836] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b7ab10 is same with the state(5) to be set 00:03:40.415 [2024-02-14 19:04:17.707853] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1106:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x820b7b0b8 00:03:40.415 [2024-02-14 19:04:17.707866] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1166:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:03:40.416 [2024-02-14 19:04:17.707877] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b7af48 is same with the state(5) to be set 00:03:40.416 [2024-02-14 19:04:17.707889] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1116:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:03:40.416 [2024-02-14 19:04:17.707899] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b7af48 is same with the state(5) to be set 00:03:40.416 [2024-02-14 19:04:17.708034] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1157:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:03:40.416 [2024-02-14 19:04:17.708043] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b7af48 is same with the state(5) to be set 00:03:40.416 [2024-02-14 19:04:17.708051] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b7af48 is same with the state(5) to be set 00:03:40.416 [2024-02-14 19:04:17.708325] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b7af48 is same with the state(5) to be set 00:03:40.416 [2024-02-14 19:04:17.708343] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b7af48 is same with the state(5) to be set 00:03:40.416 [2024-02-14 19:04:17.708352] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b7af48 is same with the state(5) to be set 00:03:40.416 passed 00:03:40.416 Test: test_nvme_tcp_qpair_connect_sock ...[2024-02-14 19:04:17.708360] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b7af48 is same with the state(5) to be set 00:03:40.416 [2024-02-14 19:04:17.708391] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2237:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:03:40.416 [2024-02-14 19:04:17.708401] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2249:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:03:40.416 passed 00:03:40.416 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:03:40.416 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:03:40.416 Test: 
test_nvme_tcp_icresp_handle ...[2024-02-14 19:04:17.789812] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2249:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:03:40.416 [2024-02-14 19:04:17.789920] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1281:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820b7b4f0): PDU Sequence Error 00:03:40.416 [2024-02-14 19:04:17.789943] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1506:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:03:40.416 [2024-02-14 19:04:17.789959] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1514:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:03:40.416 [2024-02-14 19:04:17.789973] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b7ab10 is same with the state(5) to be set 00:03:40.416 [2024-02-14 19:04:17.789986] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1522:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:03:40.416 [2024-02-14 19:04:17.789998] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b7ab10 is same with the state(5) to be set 00:03:40.416 passed 00:03:40.416 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:03:40.416 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:03:40.416 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:03:40.416 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-02-14 19:04:17.790011] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b7ab10 is same with the state(0) to be set 00:03:40.416 [2024-02-14 19:04:17.790043] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1281:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820b7b4f0): PDU Sequence Error 00:03:40.416 [2024-02-14 19:04:17.790070] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x820b79db0 00:03:40.416 [2024-02-14 19:04:17.790108] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 351:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x820b79538, errno=0, rc=0 00:03:40.416 [2024-02-14 19:04:17.790130] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b79538 is same with the state(5) to be set 00:03:40.416 [2024-02-14 19:04:17.790142] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 321:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820b79538 is same with the state(5) to be set 00:03:40.416 passed 00:03:40.416 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-02-14 19:04:17.790223] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2097:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820b79538 (0): No error: 0 00:03:40.416 [2024-02-14 19:04:17.790237] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2097:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820b79538 (0): No error: 0 00:03:40.675 [2024-02-14 19:04:17.875375] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2421:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:03:40.675 [2024-02-14 19:04:17.875473] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2421:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:40.675 passed 00:03:40.675 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:03:40.675 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:03:40.675 Test: test_nvme_tcp_ctrlr_construct ...[2024-02-14 19:04:17.875555] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2847:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:40.675 [2024-02-14 19:04:17.875568] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2847:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:40.675 passed 00:03:40.675 Test: test_nvme_tcp_qpair_submit_request ...passed 00:03:40.675 00:03:40.675 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.675 suites 1 1 n/a 0 0 00:03:40.675 tests 27 27 27 0 0 00:03:40.675 asserts 624 624 624 0 n/a 00:03:40.675 00:03:40.675 Elapsed time = 0.086 seconds 00:03:40.675 [2024-02-14 19:04:17.875625] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2421:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:40.675 [2024-02-14 19:04:17.875635] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:40.675 [2024-02-14 19:04:17.875652] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2237:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:03:40.675 [2024-02-14 19:04:17.875662] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:40.676 [2024-02-14 19:04:17.875678] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x82d585180 with addr=192.168.1.78, port=23 00:03:40.676 [2024-02-14 19:04:17.875687] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:40.676 [2024-02-14 19:04:17.875708] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 782:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x82d585300, and the iovcnt=1, remaining_size=1024 00:03:40.676 [2024-02-14 19:04:17.875718] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 959:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:03:40.676 19:04:17 -- unit/unittest.sh@98 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:03:40.676 00:03:40.676 00:03:40.676 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.676 http://cunit.sourceforge.net/ 00:03:40.676 00:03:40.676 00:03:40.676 Suite: nvme_transport 00:03:40.676 Test: test_nvme_get_transport ...passed 00:03:40.676 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:03:40.676 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:03:40.676 Test: test_nvme_transport_poll_group_add_remove ...passed 00:03:40.676 Test: test_ctrlr_get_memory_domains ...passed 00:03:40.676 00:03:40.676 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.676 suites 1 1 n/a 0 0 00:03:40.676 tests 5 5 5 0 0 00:03:40.676 asserts 28 28 28 0 n/a 00:03:40.676 00:03:40.676 Elapsed time = 0.000 seconds 00:03:40.676 19:04:17 -- unit/unittest.sh@99 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:03:40.676 00:03:40.676 
00:03:40.676 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.676 http://cunit.sourceforge.net/ 00:03:40.676 00:03:40.676 00:03:40.676 Suite: nvme_io_msg 00:03:40.676 Test: test_nvme_io_msg_send ...passed 00:03:40.676 Test: test_nvme_io_msg_process ...passed 00:03:40.676 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:03:40.676 00:03:40.676 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.676 suites 1 1 n/a 0 0 00:03:40.676 tests 3 3 3 0 0 00:03:40.676 asserts 56 56 56 0 n/a 00:03:40.676 00:03:40.676 Elapsed time = 0.000 seconds 00:03:40.676 19:04:17 -- unit/unittest.sh@100 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:03:40.676 00:03:40.676 00:03:40.676 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.676 http://cunit.sourceforge.net/ 00:03:40.676 00:03:40.676 00:03:40.676 Suite: nvme_pcie_common 00:03:40.676 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:03:40.676 Test: test_nvme_pcie_qpair_construct_destroy ...[2024-02-14 19:04:17.900544] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:03:40.676 passed 00:03:40.676 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:03:40.676 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-02-14 19:04:17.901017] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:03:40.676 [2024-02-14 19:04:17.901345] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:03:40.676 passed 00:03:40.676 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-02-14 19:04:17.901359] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:03:40.676 passed 00:03:40.676 Test: test_nvme_pcie_poll_group_get_stats ...[2024-02-14 19:04:17.901713] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:40.676 [2024-02-14 19:04:17.901727] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:40.676 passed 00:03:40.676 00:03:40.676 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.676 suites 1 1 n/a 0 0 00:03:40.676 tests 6 6 6 0 0 00:03:40.676 asserts 148 148 148 0 n/a 00:03:40.676 00:03:40.676 Elapsed time = 0.000 seconds 00:03:40.676 19:04:17 -- unit/unittest.sh@101 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:03:40.676 00:03:40.676 00:03:40.676 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.676 http://cunit.sourceforge.net/ 00:03:40.676 00:03:40.676 00:03:40.676 Suite: nvme_fabric 00:03:40.676 Test: test_nvme_fabric_prop_set_cmd ...passed 00:03:40.676 Test: test_nvme_fabric_prop_get_cmd ...passed 00:03:40.676 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:03:40.676 Test: test_nvme_fabric_discover_probe ...passed 00:03:40.676 Test: test_nvme_fabric_qpair_connect ...passed 00:03:40.676 00:03:40.676 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.676 suites 1 1 n/a 0 0 00:03:40.676 tests 5 5 5 0 0 00:03:40.676 asserts 60 60 60 0 n/a 00:03:40.676 00:03:40.676 Elapsed time = 0.000 seconds 00:03:40.676 [2024-02-14 19:04:17.907358] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 605:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:03:40.676 19:04:17 -- unit/unittest.sh@102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:03:40.676 00:03:40.676 00:03:40.676 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.676 http://cunit.sourceforge.net/ 00:03:40.676 00:03:40.676 00:03:40.676 Suite: nvme_opal 00:03:40.676 Test: test_opal_nvme_security_recv_send_done ...passed 00:03:40.676 Test: test_opal_add_short_atom_header ...[2024-02-14 19:04:17.914622] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:03:40.676 passed 00:03:40.676 00:03:40.676 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.676 suites 1 1 n/a 0 0 00:03:40.676 tests 2 2 2 0 0 00:03:40.676 asserts 22 22 22 0 n/a 00:03:40.676 00:03:40.676 Elapsed time = 0.000 seconds 00:03:40.676 00:03:40.676 real 0m0.501s 00:03:40.676 user 0m0.096s 00:03:40.676 sys 0m0.177s 00:03:40.676 19:04:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:40.676 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:40.676 ************************************ 00:03:40.676 END TEST unittest_nvme 00:03:40.676 ************************************ 00:03:40.676 19:04:17 -- unit/unittest.sh@247 -- # run_test unittest_log /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:40.676 19:04:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:40.676 19:04:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:40.676 19:04:17 -- common/autotest_common.sh@10 -- # set +x 00:03:40.676 ************************************ 00:03:40.676 START TEST unittest_log 00:03:40.676 ************************************ 00:03:40.676 19:04:17 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:40.676 00:03:40.676 00:03:40.676 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.676 http://cunit.sourceforge.net/ 00:03:40.676 00:03:40.676 00:03:40.676 Suite: log 00:03:40.676 Test: log_test ...[2024-02-14 19:04:17.957589] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:03:40.676 [2024-02-14 19:04:17.957920] log_ut.c: 57:log_test: *DEBUG*: log test 00:03:40.676 log dump test: 00:03:40.676 passed 00:03:40.676 Test: deprecation ...00000000 6c 6f 67 20 64 75 6d 70 log dump 00:03:40.676 spdk dump test: 00:03:40.676 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:03:40.676 spdk dump test: 00:03:40.676 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:03:40.676 00000010 65 20 63 68 61 72 73 e chars 00:03:41.614 passed 00:03:41.614 00:03:41.614 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.614 suites 1 1 n/a 0 0 00:03:41.614 tests 2 2 2 0 0 00:03:41.614 asserts 73 73 73 0 n/a 00:03:41.614 00:03:41.614 Elapsed time = 0.000 seconds 00:03:41.614 00:03:41.614 real 0m1.049s 00:03:41.614 user 0m0.000s 00:03:41.614 sys 0m0.008s 00:03:41.614 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.614 ************************************ 00:03:41.614 END TEST unittest_log 00:03:41.614 ************************************ 00:03:41.614 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:41.875 19:04:19 -- unit/unittest.sh@248 -- # run_test unittest_lvol 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:03:41.875 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:41.875 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:41.875 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:41.875 ************************************ 00:03:41.875 START TEST unittest_lvol 00:03:41.875 ************************************ 00:03:41.875 19:04:19 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:03:41.875 00:03:41.875 00:03:41.875 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.875 http://cunit.sourceforge.net/ 00:03:41.875 00:03:41.875 00:03:41.875 Suite: lvol 00:03:41.875 Test: lvs_init_unload_success ...[2024-02-14 19:04:19.049891] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:03:41.875 passed 00:03:41.875 Test: lvs_init_destroy_success ...passed 00:03:41.875 Test: lvs_init_opts_success ...passed 00:03:41.875 Test: lvs_unload_lvs_is_null_fail ...passed 00:03:41.875 Test: lvs_names ...passed 00:03:41.875 Test: lvol_create_destroy_success ...passed 00:03:41.875 Test: lvol_create_fail ...passed 00:03:41.875 Test: lvol_destroy_fail ...passed 00:03:41.875 Test: lvol_close ...[2024-02-14 19:04:19.050313] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:03:41.875 [2024-02-14 19:04:19.050360] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:03:41.875 [2024-02-14 19:04:19.050386] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:03:41.875 [2024-02-14 19:04:19.050406] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:03:41.875 [2024-02-14 19:04:19.050439] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:03:41.875 [2024-02-14 19:04:19.050524] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:03:41.875 [2024-02-14 19:04:19.050547] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:03:41.875 [2024-02-14 19:04:19.050594] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:03:41.875 [2024-02-14 19:04:19.050630] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:03:41.875 [2024-02-14 19:04:19.050649] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:03:41.875 passed 00:03:41.875 Test: lvol_resize ...passed 00:03:41.875 Test: lvol_set_read_only ...passed 00:03:41.875 Test: test_lvs_load ...passed 00:03:41.875 Test: lvols_load ...passed 00:03:41.875 Test: lvol_open ...passed 00:03:41.875 Test: lvol_snapshot ...passed 00:03:41.875 Test: lvol_snapshot_fail ...passed 00:03:41.875 Test: lvol_clone ...passed 00:03:41.875 Test: lvol_clone_fail ...[2024-02-14 19:04:19.050737] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:03:41.875 [2024-02-14 19:04:19.050755] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:03:41.875 [2024-02-14 19:04:19.050793] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:41.875 [2024-02-14 19:04:19.050838] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:41.875 [2024-02-14 19:04:19.050969] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:03:41.875 passed 00:03:41.875 Test: lvol_iter_clones ...passed 00:03:41.875 Test: lvol_refcnt ...passed 00:03:41.875 Test: lvol_names ...passed 00:03:41.875 Test: lvol_create_thin_provisioned ...passed 00:03:41.875 Test: lvol_rename ...passed 00:03:41.875 Test: lvs_rename ...passed 00:03:41.875 Test: lvol_inflate ...passed 00:03:41.875 Test: lvol_decouple_parent ...passed 00:03:41.875 Test: lvol_get_xattr ...passed 00:03:41.875 Test: lvol_esnap_reload ...passed 00:03:41.875 Test: lvol_esnap_create_bad_args ...[2024-02-14 19:04:19.051046] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:03:41.875 [2024-02-14 19:04:19.051112] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol daf547e1-cb6b-11ee-af6b-4feeebbbadda because it is still open 00:03:41.875 [2024-02-14 19:04:19.051154] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
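The lvol_ut failures captured above mostly exercise name validation on the lvol store: an empty name ("No name specified."), a name field without a NUL terminator, and duplicate names. A minimal, illustrative C sketch of the first two checks follows; validate_lvs_name() and LVS_NAME_MAX are assumptions made for the sketch, not SPDK's real helpers or limits.

    #include <errno.h>
    #include <string.h>

    #define LVS_NAME_MAX 64          /* assumed size of the fixed name field */

    /* Return 0 if the name is usable, a negative errno otherwise. */
    static int validate_lvs_name(const char *name)
    {
        if (name == NULL || name[0] == '\0') {
            return -EINVAL;          /* matches "No name specified." */
        }
        if (strnlen(name, LVS_NAME_MAX) == LVS_NAME_MAX) {
            return -EINVAL;          /* matches "Name has no null terminator." */
        }
        return 0;
    }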
00:03:41.875 [2024-02-14 19:04:19.051178] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:41.875 [2024-02-14 19:04:19.051212] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:03:41.875 [2024-02-14 19:04:19.051279] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:41.876 [2024-02-14 19:04:19.051306] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:03:41.876 [2024-02-14 19:04:19.051528] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:03:41.876 [2024-02-14 19:04:19.051594] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:41.876 [2024-02-14 19:04:19.051640] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:41.876 [2024-02-14 19:04:19.051728] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:03:41.876 [2024-02-14 19:04:19.051753] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:03:41.876 [2024-02-14 19:04:19.051779] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:03:41.876 [2024-02-14 19:04:19.051812] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:41.876 passed 00:03:41.876 Test: lvol_esnap_create_delete ...passed 00:03:41.876 Test: lvol_esnap_load_esnaps ...passed 00:03:41.876 Test: lvol_esnap_missing ...passed 00:03:41.876 Test: lvol_esnap_hotplug ... 
00:03:41.876 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:03:41.876 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:03:41.876 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:03:41.876 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:03:41.876 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:03:41.876 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:03:41.876 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:03:41.876 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:03:41.876 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:03:41.876 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:03:41.876 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:03:41.876 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:03:41.876 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:03:41.876 passed 00:03:41.876 Test: lvol_get_by ...passed 00:03:41.876 00:03:41.876 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.876 suites 1 1 n/a 0 0 00:03:41.876 tests 34 34 34 0 0 00:03:41.876 asserts 1439 1439 1439 0 n/a 00:03:41.876 00:03:41.876 Elapsed time = 0.008 seconds 00:03:41.876 [2024-02-14 19:04:19.051861] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:03:41.876 [2024-02-14 19:04:19.051917] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:03:41.876 [2024-02-14 19:04:19.051944] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:41.876 [2024-02-14 19:04:19.051956] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:41.876 [2024-02-14 19:04:19.052024] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol daf56b86-cb6b-11ee-af6b-4feeebbbadda: failed to create esnap bs_dev: error -12 00:03:41.876 [2024-02-14 19:04:19.052075] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol daf56d5f-cb6b-11ee-af6b-4feeebbbadda: failed to create esnap bs_dev: error -12 00:03:41.876 [2024-02-14 19:04:19.052102] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol daf56e96-cb6b-11ee-af6b-4feeebbbadda: failed to create esnap bs_dev: error -12 00:03:41.876 00:03:41.876 real 0m0.012s 00:03:41.876 user 0m0.012s 00:03:41.876 sys 0m0.000s 00:03:41.876 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.876 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:41.876 ************************************ 00:03:41.876 END TEST unittest_lvol 00:03:41.876 ************************************ 00:03:41.876 19:04:19 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:41.876 19:04:19 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:41.876 19:04:19 -- 
common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:41.876 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:41.876 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:41.876 ************************************ 00:03:41.876 START TEST unittest_nvme_rdma 00:03:41.876 ************************************ 00:03:41.876 19:04:19 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:41.876 00:03:41.876 00:03:41.876 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.876 http://cunit.sourceforge.net/ 00:03:41.876 00:03:41.876 00:03:41.876 Suite: nvme_rdma 00:03:41.876 Test: test_nvme_rdma_build_sgl_request ...[2024-02-14 19:04:19.106133] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1452:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:03:41.876 [2024-02-14 19:04:19.106464] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1626:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:41.876 [2024-02-14 19:04:19.106489] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1682:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:03:41.876 passed 00:03:41.876 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:03:41.876 Test: test_nvme_rdma_build_contig_request ...passed 00:03:41.876 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:03:41.876 Test: test_nvme_rdma_create_reqs ...passed 00:03:41.876 Test: test_nvme_rdma_create_rsps ...passed 00:03:41.876 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-02-14 19:04:19.106523] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1563:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:41.876 [2024-02-14 19:04:19.106556] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1004:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:03:41.876 [2024-02-14 19:04:19.106615] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 922:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:03:41.876 [2024-02-14 19:04:19.106648] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1820:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:03:41.876 passed 00:03:41.876 Test: test_nvme_rdma_poller_create ...passed 00:03:41.876 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:03:41.876 Test: test_nvme_rdma_ctrlr_construct ...[2024-02-14 19:04:19.106663] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1820:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
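The nvme_rdma_ut errors above are bounds checks on queue-pair creation and SGL building: queue sizes 0 and 1 are rejected because the minimum queue size is 2, and an SGL of 16777216 bytes is rejected against a max keyed SGL block size of 16777215. An illustrative sketch of those two checks, with the constants taken from the messages and the function names invented for the example:

    #include <stdbool.h>
    #include <stdint.h>

    #define MIN_QUEUE_SIZE       2u
    #define MAX_KEYED_SGL_SIZE   ((1u << 24) - 1)      /* 16777215 */

    static bool qpair_size_ok(uint32_t qsize)
    {
        return qsize >= MIN_QUEUE_SIZE;                /* sizes 0 and 1 fail */
    }

    static bool keyed_sgl_length_ok(uint64_t length)
    {
        return length <= MAX_KEYED_SGL_SIZE;           /* 16777216 fails */
    }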
00:03:41.876 [2024-02-14 19:04:19.106701] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 523:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:03:41.876 passed 00:03:41.876 Test: test_nvme_rdma_req_put_and_get ...passed 00:03:41.876 Test: test_nvme_rdma_req_init ...passed 00:03:41.876 Test: test_nvme_rdma_validate_cm_event ...passed 00:03:41.876 Test: test_nvme_rdma_qpair_init ...passed 00:03:41.876 Test: test_nvme_rdma_qpair_submit_request ...passed 00:03:41.876 Test: test_nvme_rdma_memory_domain ...[2024-02-14 19:04:19.106791] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:03:41.876 [2024-02-14 19:04:19.106808] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:03:41.876 [2024-02-14 19:04:19.106867] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 349:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:03:41.876 passed 00:03:41.876 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:03:41.876 Test: test_rdma_get_memory_translation ...passed 00:03:41.876 Test: test_get_rdma_qpair_from_wc ...passed 00:03:41.876 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:03:41.876 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:03:41.876 Test: test_nvme_rdma_qpair_set_poller ...[2024-02-14 19:04:19.106895] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1441:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:03:41.876 [2024-02-14 19:04:19.106911] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1452:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:03:41.876 [2024-02-14 19:04:19.106940] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3236:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:41.876 [2024-02-14 19:04:19.106955] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3236:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:41.876 [2024-02-14 19:04:19.106992] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2969:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:03:41.876 [2024-02-14 19:04:19.107008] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3015:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:03:41.876 [2024-02-14 19:04:19.107023] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 720:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820b77d10 on poll group 0x82b2d2000 00:03:41.876 [2024-02-14 19:04:19.107038] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2969:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
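The qpair_set_poller failures above ("Unable to find a cq for qpair 0x... on poll group 0x...") reduce to looking up a per-device poller inside a poll group and failing when none exists. A hedged sketch of that lookup with invented structure and function names, not the SPDK types:

    #include <stddef.h>

    struct poller     { void *device; struct poller *next; };
    struct poll_group { struct poller *pollers; };

    /* Return the poller that owns 'device', or NULL when the lookup fails,
     * which corresponds to the "Unable to find a cq for qpair" error. */
    static struct poller *poll_group_find_poller(struct poll_group *pg, void *device)
    {
        for (struct poller *p = pg->pollers; p != NULL; p = p->next) {
            if (p->device == device) {
                return p;
            }
        }
        return NULL;
    }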
00:03:41.876 [2024-02-14 19:04:19.107054] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3015:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:03:41.876 [2024-02-14 19:04:19.107068] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 720:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820b77d10 on poll group 0x82b2d2000 00:03:41.876 [2024-02-14 19:04:19.107134] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 698:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:41.876 passed 00:03:41.876 00:03:41.876 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.876 suites 1 1 n/a 0 0 00:03:41.876 tests 22 22 22 0 0 00:03:41.876 asserts 412 412 412 0 n/a 00:03:41.876 00:03:41.876 Elapsed time = 0.000 seconds 00:03:41.876 00:03:41.876 real 0m0.009s 00:03:41.876 user 0m0.001s 00:03:41.876 sys 0m0.008s 00:03:41.876 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.876 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:41.876 ************************************ 00:03:41.876 END TEST unittest_nvme_rdma 00:03:41.876 ************************************ 00:03:41.876 19:04:19 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:41.877 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:41.877 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:41.877 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:41.877 ************************************ 00:03:41.877 START TEST unittest_nvmf_transport 00:03:41.877 ************************************ 00:03:41.877 19:04:19 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:41.877 00:03:41.877 00:03:41.877 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.877 http://cunit.sourceforge.net/ 00:03:41.877 00:03:41.877 00:03:41.877 Suite: nvmf 00:03:41.877 Test: test_spdk_nvmf_transport_create ...passed 00:03:41.877 Test: test_nvmf_transport_poll_group_create ...passed 00:03:41.877 Test: test_spdk_nvmf_transport_opts_init ...passed 00:03:41.877 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:03:41.877 00:03:41.877 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.877 suites 1 1 n/a 0 0 00:03:41.877 tests 4 4 4 0 0 00:03:41.877 asserts 49 49 49 0 n/a 00:03:41.877 00:03:41.877 Elapsed time = 0.000 seconds 00:03:41.877 [2024-02-14 19:04:19.152611] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:03:41.877 [2024-02-14 19:04:19.152855] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:03:41.877 [2024-02-14 19:04:19.152870] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 272:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:03:41.877 [2024-02-14 19:04:19.152900] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 255:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:03:41.877 [2024-02-14 19:04:19.152926] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
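The transport_ut messages above encode the option rules being tested: io_unit_size must be non-zero and must not exceed the iobuf pool's large buffer size (65536), and max_io_size must be a power of two of at least 8 KiB (so 4096 is rejected). An illustrative check with the constants lifted from the log and invented helper names:

    #include <stdbool.h>
    #include <stdint.h>

    #define IOBUF_LARGE_BUF_SIZE 65536u   /* large buffer size from the log */
    #define MIN_MAX_IO_SIZE      8192u    /* 8 KiB lower bound from the log */

    static bool is_power_of_two(uint32_t v)
    {
        return v != 0 && (v & (v - 1)) == 0;
    }

    static bool transport_opts_ok(uint32_t io_unit_size, uint32_t max_io_size)
    {
        if (io_unit_size == 0 || io_unit_size > IOBUF_LARGE_BUF_SIZE) {
            return false;   /* "io_unit_size cannot be 0" / 131072 > 65536 */
        }
        if (!is_power_of_two(max_io_size) || max_io_size < MIN_MAX_IO_SIZE) {
            return false;   /* 4096 is a power of 2 but below 8 KiB */
        }
        return true;
    }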
00:03:41.877 [2024-02-14 19:04:19.152937] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:03:41.877 [2024-02-14 19:04:19.152947] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:03:41.877 00:03:41.877 real 0m0.006s 00:03:41.877 user 0m0.000s 00:03:41.877 sys 0m0.008s 00:03:41.877 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.877 ************************************ 00:03:41.877 END TEST unittest_nvmf_transport 00:03:41.877 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:41.877 ************************************ 00:03:41.877 19:04:19 -- unit/unittest.sh@252 -- # run_test unittest_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:41.877 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:41.877 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:41.877 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:41.877 ************************************ 00:03:41.877 START TEST unittest_rdma 00:03:41.877 ************************************ 00:03:41.877 19:04:19 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:41.877 00:03:41.877 00:03:41.877 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.877 http://cunit.sourceforge.net/ 00:03:41.877 00:03:41.877 00:03:41.877 Suite: rdma_common 00:03:41.877 Test: test_spdk_rdma_pd ...passed 00:03:41.877 00:03:41.877 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.877 suites 1 1 n/a 0 0 00:03:41.877 tests 1 1 1 0 0 00:03:41.877 asserts 31 31 31 0 n/a 00:03:41.877 00:03:41.877 Elapsed time = 0.000 seconds 00:03:41.877 [2024-02-14 19:04:19.197380] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:03:41.877 [2024-02-14 19:04:19.197600] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:03:41.877 00:03:41.877 real 0m0.006s 00:03:41.877 user 0m0.005s 00:03:41.877 sys 0m0.004s 00:03:41.877 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.877 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:41.877 ************************************ 00:03:41.877 END TEST unittest_rdma 00:03:41.877 ************************************ 00:03:41.877 19:04:19 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:41.877 19:04:19 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:03:41.877 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:41.877 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:41.877 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:41.877 ************************************ 00:03:41.877 START TEST unittest_nvmf 00:03:41.877 ************************************ 00:03:41.877 19:04:19 -- common/autotest_common.sh@1102 -- # unittest_nvmf 00:03:41.877 19:04:19 -- unit/unittest.sh@106 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:03:41.877 00:03:41.877 00:03:41.877 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.877 http://cunit.sourceforge.net/ 00:03:41.877 00:03:41.877 00:03:41.877 Suite: nvmf 00:03:41.877 Test: test_get_log_page ...[2024-02-14 19:04:19.243835] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2538:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:03:41.877 passed 00:03:41.877 Test: test_process_fabrics_cmd ...passed 00:03:41.877 Test: test_connect ...[2024-02-14 19:04:19.244304] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 932:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:03:41.877 [2024-02-14 19:04:19.244342] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 795:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:03:41.877 [2024-02-14 19:04:19.244362] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 971:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:03:41.877 [2024-02-14 19:04:19.244381] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 742:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:03:41.877 [2024-02-14 19:04:19.244399] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:03:41.877 [2024-02-14 19:04:19.244418] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 814:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:03:41.877 [2024-02-14 19:04:19.244435] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 820:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:03:41.877 [2024-02-14 19:04:19.244452] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 846:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:03:41.877 [2024-02-14 19:04:19.244474] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:03:41.877 [2024-02-14 19:04:19.244495] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:03:41.877 [2024-02-14 19:04:19.244534] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 605:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:03:41.877 [2024-02-14 19:04:19.244555] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 612:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:03:41.877 [2024-02-14 19:04:19.244575] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 619:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:03:41.877 passed 00:03:41.877 Test: test_get_ns_id_desc_list ...[2024-02-14 19:04:19.244594] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 642:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:03:41.877 [2024-02-14 19:04:19.244621] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 242:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:03:41.877 [2024-02-14 19:04:19.244646] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 726:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group 0x0) 00:03:41.877 [2024-02-14 19:04:19.244665] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 726:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0) 00:03:41.877 passed 00:03:41.877 Test: test_identify_ns ...[2024-02-14 19:04:19.244741] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2632:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:41.877 passed 00:03:41.877 Test: test_identify_ns_iocs_specific ...[2024-02-14 19:04:19.244799] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2632:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:03:41.877 [2024-02-14 19:04:19.244836] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2632:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:03:41.877 [2024-02-14 19:04:19.244876] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2632:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:41.877 [2024-02-14 19:04:19.244952] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2632:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:41.877 passed 00:03:41.877 Test: test_reservation_write_exclusive ...passed 00:03:41.877 Test: test_reservation_exclusive_access ...passed 00:03:41.877 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:03:41.877 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:03:41.877 Test: test_reservation_notification_log_page ...passed 00:03:41.877 Test: test_get_dif_ctx ...passed 00:03:41.877 Test: test_set_get_features ...passed 00:03:41.877 Test: test_identify_ctrlr ...[2024-02-14 19:04:19.245123] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1568:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:41.877 [2024-02-14 19:04:19.245144] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1568:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:41.877 [2024-02-14 19:04:19.245160] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1579:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:03:41.877 [2024-02-14 19:04:19.245177] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1655:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:03:41.877 passed 00:03:41.877 Test: test_identify_ctrlr_iocs_specific ...passed 00:03:41.877 Test: test_custom_admin_cmd ...passed 00:03:41.877 Test: test_fused_compare_and_write ...passed 00:03:41.877 Test: test_multi_async_event_reqs ...passed 00:03:41.877 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:03:41.877 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:03:41.877 Test: test_multi_async_events ...passed 00:03:41.877 Test: test_rae ...passed 00:03:41.878 Test: test_nvmf_ctrlr_create_destruct ...[2024-02-14 19:04:19.245302] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4139:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:03:41.878 [2024-02-14 19:04:19.245321] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4128:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:41.878 [2024-02-14 19:04:19.245338] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4146:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:41.878 passed 00:03:41.878 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:03:41.878 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:03:41.878 Test: test_zcopy_read ...passed 00:03:41.878 Test: test_zcopy_write ...passed 00:03:41.878 Test: test_nvmf_property_set ...passed 00:03:41.878 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:03:41.878 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-02-14 19:04:19.245456] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4266:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:03:41.878 [2024-02-14 19:04:19.245508] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1866:nvmf_ctrlr_get_features_host_behavior_support: 
*ERROR*: invalid data buffer for Host Behavior Support 00:03:41.878 [2024-02-14 19:04:19.245525] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1866:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:41.878 passed 00:03:41.878 00:03:41.878 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.878 suites 1 1 n/a 0 0 00:03:41.878 tests 30 30 30 0 0 00:03:41.878 asserts 889 889 889 0 n/a 00:03:41.878 00:03:41.878 Elapsed time = 0.000 seconds 00:03:41.878 [2024-02-14 19:04:19.245546] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1889:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:03:41.878 [2024-02-14 19:04:19.245563] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1895:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:03:41.878 [2024-02-14 19:04:19.245580] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1907:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:03:41.878 19:04:19 -- unit/unittest.sh@107 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:03:41.878 00:03:41.878 00:03:41.878 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.878 http://cunit.sourceforge.net/ 00:03:41.878 00:03:41.878 00:03:41.878 Suite: nvmf 00:03:41.878 Test: test_get_rw_params ...passed 00:03:41.878 Test: test_lba_in_range ...passed 00:03:41.878 Test: test_get_dif_ctx ...passed 00:03:41.878 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:03:41.878 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-02-14 19:04:19.253626] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:03:41.878 passed 00:03:41.878 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:03:41.878 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:03:41.878 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:03:41.878 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:03:41.878 00:03:41.878 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.878 suites 1 1 n/a 0 0 00:03:41.878 tests 9 9 9 0 0 00:03:41.878 asserts 157 157 157 0 n/a 00:03:41.878 00:03:41.878 Elapsed time = 0.000 seconds[2024-02-14 19:04:19.253868] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:03:41.878 [2024-02-14 19:04:19.253885] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 451:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:03:41.878 [2024-02-14 19:04:19.253902] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:03:41.878 [2024-02-14 19:04:19.253915] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 954:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:03:41.878 [2024-02-14 19:04:19.253931] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:03:41.878 [2024-02-14 19:04:19.253943] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 397:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:03:41.878 [2024-02-14 19:04:19.253956] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes 
size, should not exceed 1Kib 00:03:41.878 [2024-02-14 19:04:19.253969] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:03:41.878 00:03:41.878 19:04:19 -- unit/unittest.sh@108 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:03:41.878 00:03:41.878 00:03:41.878 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.878 http://cunit.sourceforge.net/ 00:03:41.878 00:03:41.878 00:03:41.878 Suite: nvmf 00:03:41.878 Test: test_discovery_log ...passed 00:03:41.878 Test: test_discovery_log_with_filters ...passed 00:03:41.878 00:03:41.878 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.878 suites 1 1 n/a 0 0 00:03:41.878 tests 2 2 2 0 0 00:03:41.878 asserts 238 238 238 0 n/a 00:03:41.878 00:03:41.878 Elapsed time = 0.000 seconds 00:03:41.878 19:04:19 -- unit/unittest.sh@109 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:03:41.878 00:03:41.878 00:03:41.878 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.878 http://cunit.sourceforge.net/ 00:03:41.878 00:03:41.878 00:03:41.878 Suite: nvmf 00:03:41.878 Test: nvmf_test_create_subsystem ...[2024-02-14 19:04:19.266529] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:03:41.878 [2024-02-14 19:04:19.266769] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:03:41.878 [2024-02-14 19:04:19.266786] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:03:41.878 [2024-02-14 19:04:19.266800] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:03:41.878 [2024-02-14 19:04:19.266812] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:03:41.878 [2024-02-14 19:04:19.266825] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:03:41.878 [2024-02-14 19:04:19.266851] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:03:41.878 [2024-02-14 19:04:19.266888] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
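The subsystem_ut NQN cases above spell out the rules this suite exercises: an NQN is at least 11 and at most 223 characters long, starts with a reverse-domain "nqn." prefix whose labels begin with a letter and end with an alphanumeric symbol, carries a user-specified part after a ':', and the UUID form must have exactly the right length. A sketch of just the length and prefix checks; the constants come from the messages ("length 224 > max 223", and the "min 11" case later in the suite) and the function name is invented, not SPDK's nvmf_nqn_is_valid:

    #include <stdbool.h>
    #include <string.h>

    #define NQN_MIN_LEN 11
    #define NQN_MAX_LEN 223

    static bool nqn_basic_checks_ok(const char *nqn)
    {
        size_t len = strnlen(nqn, NQN_MAX_LEN + 1);

        if (len < NQN_MIN_LEN || len > NQN_MAX_LEN) {
            return false;                      /* 224-character NQN fails */
        }
        return strncmp(nqn, "nqn.", 4) == 0;   /* reverse-domain prefix */
    }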
00:03:41.878 passed 00:03:41.878 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:03:41.878 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:03:41.878 Test: test_reservation_register ...passed 00:03:41.878 Test: test_reservation_register_with_ptpl ...[2024-02-14 19:04:19.266904] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:03:41.878 [2024-02-14 19:04:19.266917] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:41.878 [2024-02-14 19:04:19.266930] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:41.878 [2024-02-14 19:04:19.266974] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:03:41.878 [2024-02-14 19:04:19.266987] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1734:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:03:41.878 [2024-02-14 19:04:19.267040] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:41.878 [2024-02-14 19:04:19.267062] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2841:nvmf_ns_reservation_register: *ERROR*: No registrant 00:03:41.878 passed 00:03:41.878 Test: test_reservation_acquire_preempt_1 ...passed 00:03:41.878 Test: test_reservation_acquire_release_with_ptpl ...[2024-02-14 19:04:19.267269] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:41.878 passed 00:03:41.878 Test: test_reservation_release ...passed 00:03:41.878 Test: test_reservation_unregister_notification ...passed 00:03:41.878 Test: test_reservation_release_notification ...passed 00:03:41.878 Test: test_reservation_release_notification_write_exclusive ...passed 00:03:41.878 Test: test_reservation_clear_notification ...[2024-02-14 19:04:19.267448] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:41.878 [2024-02-14 19:04:19.267471] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:41.878 [2024-02-14 19:04:19.267500] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:41.878 [2024-02-14 19:04:19.267519] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:41.878 passed 00:03:41.878 Test: test_reservation_preempt_notification ...passed 00:03:41.878 Test: test_spdk_nvmf_ns_event ...passed 00:03:41.878 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:03:41.878 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:03:41.878 Test: test_spdk_nvmf_subsystem_add_host ...[2024-02-14 19:04:19.267539] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already 
register a key with 0xa1 00:03:41.878 [2024-02-14 19:04:19.267559] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:41.878 [2024-02-14 19:04:19.267655] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 261:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:03:41.878 passed 00:03:41.878 Test: test_nvmf_ns_reservation_report ...passed 00:03:41.878 Test: test_nvmf_nqn_is_valid ...passed 00:03:41.879 Test: test_nvmf_ns_reservation_restore ...[2024-02-14 19:04:19.267677] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:03:41.879 [2024-02-14 19:04:19.267696] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3147:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:03:41.879 [2024-02-14 19:04:19.267728] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:03:41.879 [2024-02-14 19:04:19.267740] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:db165555-cb6b-11ee-af6b-4feeebbbadd": uuid is not the correct length 00:03:41.879 [2024-02-14 19:04:19.267753] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:03:41.879 [2024-02-14 19:04:19.267784] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2340:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:03:41.879 passed 00:03:41.879 Test: test_nvmf_subsystem_state_change ...passed 00:03:41.879 Test: test_nvmf_reservation_custom_ops ...passed 00:03:41.879 00:03:41.879 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.879 suites 1 1 n/a 0 0 00:03:41.879 tests 22 22 22 0 0 00:03:41.879 asserts 405 405 405 0 n/a 00:03:41.879 00:03:41.879 Elapsed time = 0.000 seconds 00:03:41.879 19:04:19 -- unit/unittest.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:03:41.879 00:03:41.879 00:03:41.879 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.879 http://cunit.sourceforge.net/ 00:03:41.879 00:03:41.879 00:03:41.879 Suite: nvmf 00:03:41.879 Test: test_nvmf_tcp_create ...[2024-02-14 19:04:19.279764] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:03:41.879 passed 00:03:41.879 Test: test_nvmf_tcp_destroy ...passed 00:03:41.879 Test: test_nvmf_tcp_poll_group_create ...passed 00:03:42.140 Test: test_nvmf_tcp_send_c2h_data ...passed 00:03:42.140 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:03:42.140 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:03:42.140 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:03:42.140 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-02-14 19:04:19.292810] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 [2024-02-14 19:04:19.292861] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781570 is same with the state(5) to be set 00:03:42.140 [2024-02-14 
19:04:19.292876] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781570 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.292889] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 passed 00:03:42.140 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:03:42.140 Test: test_nvmf_tcp_icreq_handle ...[2024-02-14 19:04:19.292901] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781570 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.292928] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:42.140 [2024-02-14 19:04:19.292941] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 passed 00:03:42.140 Test: test_nvmf_tcp_check_xfer_type ...passed 00:03:42.140 Test: test_nvmf_tcp_invalid_sgl ...passed 00:03:42.140 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-02-14 19:04:19.292952] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781488 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.292964] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:42.140 [2024-02-14 19:04:19.292975] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781488 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.292987] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 [2024-02-14 19:04:19.292999] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781488 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.293012] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:03:42.140 [2024-02-14 19:04:19.293023] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781488 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.293044] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2485:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:03:42.140 [2024-02-14 19:04:19.293057] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 [2024-02-14 19:04:19.293068] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781488 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.293084] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x820780d00 00:03:42.140 [2024-02-14 19:04:19.293097] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 [2024-02-14 19:04:19.293108] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x820781570 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.293122] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2275:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x820781570 00:03:42.140 [2024-02-14 19:04:19.293133] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 [2024-02-14 19:04:19.293144] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781570 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.293156] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:03:42.140 [2024-02-14 19:04:19.293168] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 [2024-02-14 19:04:19.293179] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781570 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.293194] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:03:42.140 [2024-02-14 19:04:19.293206] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 passed 00:03:42.140 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-02-14 19:04:19.293217] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781570 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.293229] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 [2024-02-14 19:04:19.293240] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781570 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.293251] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 [2024-02-14 19:04:19.293262] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781570 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.293274] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 [2024-02-14 19:04:19.293285] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781570 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.293296] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 [2024-02-14 19:04:19.293307] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781570 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.293318] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 [2024-02-14 19:04:19.293329] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781570 is same with the state(5) to be set 00:03:42.140 [2024-02-14 19:04:19.293341] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:42.140 [2024-02-14 19:04:19.293352] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820781570 is same with the state(5) to be set 00:03:42.140 passed 00:03:42.140 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-02-14 19:04:19.301124] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:03:42.140 passed 00:03:42.140 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-02-14 19:04:19.301171] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:03:42.140 passed 00:03:42.140 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-02-14 19:04:19.301341] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:03:42.141 [2024-02-14 19:04:19.301356] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:03:42.141 passed 00:03:42.141 00:03:42.141 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.141 suites 1 1 n/a 0 0 00:03:42.141 tests 17 17 17 0 0 00:03:42.141 asserts 222 222 222 0 n/a 00:03:42.141 00:03:42.141 Elapsed time = 0.016 seconds 00:03:42.141 [2024-02-14 19:04:19.301435] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:03:42.141 [2024-02-14 19:04:19.301448] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
00:03:42.141 19:04:19 -- unit/unittest.sh@111 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:03:42.141 00:03:42.141 00:03:42.141 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.141 http://cunit.sourceforge.net/ 00:03:42.141 00:03:42.141 00:03:42.141 Suite: nvmf 00:03:42.141 Test: test_nvmf_tgt_create_poll_group ...passed 00:03:42.141 00:03:42.141 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.141 suites 1 1 n/a 0 0 00:03:42.141 tests 1 1 1 0 0 00:03:42.141 asserts 17 17 17 0 n/a 00:03:42.141 00:03:42.141 Elapsed time = 0.008 seconds 00:03:42.141 00:03:42.141 real 0m0.079s 00:03:42.141 user 0m0.040s 00:03:42.141 sys 0m0.045s 00:03:42.141 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.141 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.141 ************************************ 00:03:42.141 END TEST unittest_nvmf 00:03:42.141 ************************************ 00:03:42.141 19:04:19 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:42.141 19:04:19 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:42.141 19:04:19 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:42.141 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.141 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.141 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.141 ************************************ 00:03:42.141 START TEST unittest_nvmf_rdma 00:03:42.141 ************************************ 00:03:42.141 19:04:19 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:42.141 00:03:42.141 00:03:42.141 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.141 http://cunit.sourceforge.net/ 00:03:42.141 00:03:42.141 00:03:42.141 Suite: nvmf 00:03:42.141 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-02-14 19:04:19.365636] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1917:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:03:42.141 [2024-02-14 19:04:19.365852] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1967:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:03:42.141 [2024-02-14 19:04:19.365865] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1967:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:03:42.141 passed 00:03:42.141 Test: test_spdk_nvmf_rdma_request_process ...passed 00:03:42.141 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:03:42.141 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:03:42.141 Test: test_nvmf_rdma_opts_init ...passed 00:03:42.141 Test: test_nvmf_rdma_request_free_data ...passed 00:03:42.141 Test: test_nvmf_rdma_update_ibv_state ...passed 00:03:42.141 Test: test_nvmf_rdma_resources_create ...[2024-02-14 19:04:19.366021] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
00:03:42.141 [2024-02-14 19:04:19.366033] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:03:42.141 passed 00:03:42.141 Test: test_nvmf_rdma_qpair_compare ...passed 00:03:42.141 Test: test_nvmf_rdma_resize_cq ...[2024-02-14 19:04:19.366731] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1007:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:03:42.141 Using CQ of insufficient size may lead to CQ overrun 00:03:42.141 [2024-02-14 19:04:19.366744] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1012:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:03:42.141 passed 00:03:42.141 00:03:42.141 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.141 suites 1 1 n/a 0 0 00:03:42.141 tests 10 10 10 0 0 00:03:42.141 asserts 584 584 584 0 n/a 00:03:42.141 00:03:42.141 Elapsed time = 0.000 seconds 00:03:42.141 [2024-02-14 19:04:19.366782] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:42.141 00:03:42.141 real 0m0.007s 00:03:42.141 user 0m0.000s 00:03:42.141 sys 0m0.008s 00:03:42.141 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.141 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.141 ************************************ 00:03:42.141 END TEST unittest_nvmf_rdma 00:03:42.141 ************************************ 00:03:42.141 19:04:19 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:42.141 19:04:19 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:03:42.141 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.141 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.141 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.141 ************************************ 00:03:42.141 START TEST unittest_scsi 00:03:42.141 ************************************ 00:03:42.141 19:04:19 -- common/autotest_common.sh@1102 -- # unittest_scsi 00:03:42.141 19:04:19 -- unit/unittest.sh@115 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:03:42.141 00:03:42.141 00:03:42.141 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.141 http://cunit.sourceforge.net/ 00:03:42.141 00:03:42.141 00:03:42.141 Suite: dev_suite 00:03:42.141 Test: dev_destruct_null_dev ...passed 00:03:42.141 Test: dev_destruct_zero_luns ...passed 00:03:42.141 Test: dev_destruct_null_lun ...passed 00:03:42.141 Test: dev_destruct_success ...passed 00:03:42.141 Test: dev_construct_num_luns_zero ...passed 00:03:42.141 Test: dev_construct_no_lun_zero ...passed 00:03:42.141 Test: dev_construct_null_lun ...passed 00:03:42.141 Test: dev_construct_name_too_long ...passed 00:03:42.141 Test: dev_construct_success ...passed 00:03:42.141 Test: dev_construct_success_lun_zero_not_first ...[2024-02-14 19:04:19.415381] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:03:42.141 [2024-02-14 19:04:19.415653] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:03:42.141 [2024-02-14 19:04:19.415673] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:03:42.141 
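The rdma.c parse-SGL failures in this block are likewise provoked on purpose: a keyed SGL longer than the transport's maximum I/O size, and in-capsule data that does not fit in the capsule, must both be rejected. A simplified sketch of that validation follows; the struct, field, and function names (`max_io_size`, `in_capsule_data_size`, `rdma_sgl_length_valid`) are invented for this sketch, and a real implementation would also distinguish keyed descriptors from in-capsule offsets and build the RDMA work requests.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative transport limits; the logged test used 0x20000 and 0x0. */
struct rdma_transport_limits {
	uint32_t max_io_size;          /* upper bound for keyed (remote) SGLs */
	uint32_t in_capsule_data_size; /* room available for in-capsule data  */
};

/*
 * Reject request sizes the transport cannot service, mirroring
 * "SGL length 0x40000 exceeds max io size 0x20000" and
 * "In-capsule data length 0x1000 exceeds capsule length 0x0" above.
 */
static bool
rdma_sgl_length_valid(const struct rdma_transport_limits *lim,
		      bool in_capsule, uint32_t sgl_length)
{
	if (in_capsule) {
		return sgl_length <= lim->in_capsule_data_size;
	}
	return sgl_length <= lim->max_io_size;
}
```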
[2024-02-14 19:04:19.415690] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:03:42.141 passed 00:03:42.141 Test: dev_queue_mgmt_task_success ...passed 00:03:42.141 Test: dev_queue_task_success ...passed 00:03:42.141 Test: dev_stop_success ...passed 00:03:42.141 Test: dev_add_port_max_ports ...passed 00:03:42.141 Test: dev_add_port_construct_failure1 ...passed 00:03:42.141 Test: dev_add_port_construct_failure2 ...passed 00:03:42.141 Test: dev_add_port_success1 ...passed 00:03:42.141 Test: dev_add_port_success2 ...passed 00:03:42.141 Test: dev_add_port_success3 ...passed 00:03:42.141 Test: dev_find_port_by_id_num_ports_zero ...passed 00:03:42.141 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:03:42.141 Test: dev_find_port_by_id_success ...passed 00:03:42.141 Test: dev_add_lun_bdev_not_found ...passed 00:03:42.141 Test: dev_add_lun_no_free_lun_id ...passed 00:03:42.141 Test: dev_add_lun_success1 ...passed 00:03:42.141 Test: dev_add_lun_success2 ...passed 00:03:42.141 Test: dev_check_pending_tasks ...[2024-02-14 19:04:19.415742] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:03:42.141 [2024-02-14 19:04:19.415758] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:03:42.141 [2024-02-14 19:04:19.415772] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:03:42.141 [2024-02-14 19:04:19.416113] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:03:42.141 passed 00:03:42.141 Test: dev_iterate_luns ...passed 00:03:42.141 Test: dev_find_free_lun ...passed 00:03:42.141 00:03:42.141 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.141 suites 1 1 n/a 0 0 00:03:42.141 tests 29 29 29 0 0 00:03:42.141 asserts 97 97 97 0 n/a 00:03:42.141 00:03:42.141 Elapsed time = 0.000 seconds 00:03:42.141 19:04:19 -- unit/unittest.sh@116 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:03:42.141 00:03:42.141 00:03:42.141 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.141 http://cunit.sourceforge.net/ 00:03:42.141 00:03:42.141 00:03:42.141 Suite: lun_suite 00:03:42.141 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:03:42.141 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:03:42.141 Test: lun_task_mgmt_execute_lun_reset ...passed 00:03:42.141 Test: lun_task_mgmt_execute_target_reset ...passed 00:03:42.141 Test: lun_task_mgmt_execute_invalid_case ...passed 00:03:42.141 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:03:42.141 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:03:42.142 Test: lun_append_task_null_lun_not_supported ...passed 00:03:42.142 Test: lun_execute_scsi_task_pending ...passed 00:03:42.142 Test: lun_execute_scsi_task_complete ...passed 00:03:42.142 Test: lun_execute_scsi_task_resize ...passed 00:03:42.142 Test: lun_destruct_success ...passed 00:03:42.142 Test: lun_construct_null_ctx ...passed 00:03:42.142 Test: lun_construct_success ...passed 00:03:42.142 Test: 
lun_reset_task_wait_scsi_task_complete ...passed 00:03:42.142 Test: lun_reset_task_suspend_scsi_task ...passed 00:03:42.142 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:03:42.142 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:03:42.142 00:03:42.142 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.142 suites 1 1 n/a 0 0 00:03:42.142 tests 18 18 18 0 0 00:03:42.142 asserts 153 153 153 0 n/a 00:03:42.142 00:03:42.142 Elapsed time = 0.000 seconds 00:03:42.142 [2024-02-14 19:04:19.424330] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:03:42.142 [2024-02-14 19:04:19.424690] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:03:42.142 [2024-02-14 19:04:19.424740] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:03:42.142 [2024-02-14 19:04:19.424818] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:03:42.142 19:04:19 -- unit/unittest.sh@117 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:03:42.142 00:03:42.142 00:03:42.142 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.142 http://cunit.sourceforge.net/ 00:03:42.142 00:03:42.142 00:03:42.142 Suite: scsi_suite 00:03:42.142 Test: scsi_init ...passed 00:03:42.142 00:03:42.142 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.142 suites 1 1 n/a 0 0 00:03:42.142 tests 1 1 1 0 0 00:03:42.142 asserts 1 1 1 0 n/a 00:03:42.142 00:03:42.142 Elapsed time = 0.000 seconds 00:03:42.142 19:04:19 -- unit/unittest.sh@118 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:03:42.142 00:03:42.142 00:03:42.142 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.142 http://cunit.sourceforge.net/ 00:03:42.142 00:03:42.142 00:03:42.142 Suite: translation_suite 00:03:42.142 Test: mode_select_6_test ...passed 00:03:42.142 Test: mode_select_6_test2 ...passed 00:03:42.142 Test: mode_sense_6_test ...passed 00:03:42.142 Test: mode_sense_10_test ...passed 00:03:42.142 Test: inquiry_evpd_test ...passed 00:03:42.142 Test: inquiry_standard_test ...passed 00:03:42.142 Test: inquiry_overflow_test ...passed 00:03:42.142 Test: task_complete_test ...passed 00:03:42.142 Test: lba_range_test ...passed 00:03:42.142 Test: xfer_len_test ...passed 00:03:42.142 Test: xfer_test ...passed 00:03:42.142 Test: scsi_name_padding_test ...passed 00:03:42.142 Test: get_dif_ctx_test ...passed 00:03:42.142 Test: unmap_split_test ...passed 00:03:42.142 00:03:42.142 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.142 suites 1 1 n/a 0 0 00:03:42.142 tests 14 14 14 0 0 00:03:42.142 asserts 1204 1204 1204 0 n/a 00:03:42.142 00:03:42.142 Elapsed time = 0.000 seconds 00:03:42.142 [2024-02-14 19:04:19.439546] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:03:42.142 19:04:19 -- unit/unittest.sh@119 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:03:42.142 00:03:42.142 00:03:42.142 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.142 http://cunit.sourceforge.net/ 00:03:42.142 00:03:42.142 00:03:42.142 Suite: reservation_suite 00:03:42.142 Test: test_reservation_register ...passed 00:03:42.142 Test: test_reservation_reserve ...passed 
00:03:42.142 Test: test_reservation_preempt_non_all_regs ...passed 00:03:42.142 Test: test_reservation_preempt_all_regs ...[2024-02-14 19:04:19.447571] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:42.142 [2024-02-14 19:04:19.447919] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:42.142 [2024-02-14 19:04:19.447948] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:03:42.142 [2024-02-14 19:04:19.447973] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:03:42.142 [2024-02-14 19:04:19.448003] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:42.142 [2024-02-14 19:04:19.448022] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:03:42.142 [2024-02-14 19:04:19.448052] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:42.142 passed 00:03:42.142 Test: test_reservation_cmds_conflict ...[2024-02-14 19:04:19.448073] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:42.142 [2024-02-14 19:04:19.448090] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:03:42.142 passed 00:03:42.142 Test: test_scsi2_reserve_release ...passed 00:03:42.142 Test: test_pr_with_scsi2_reserve_release ...passed 00:03:42.142 00:03:42.142 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.142 suites 1 1 n/a 0 0 00:03:42.142 tests 7 7 7 0 0 00:03:42.142 asserts 257 257 257 0 n/a 00:03:42.142 00:03:42.142 Elapsed time = 0.000 seconds 00:03:42.142 [2024-02-14 19:04:19.448105] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:42.142 [2024-02-14 19:04:19.448120] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:42.142 [2024-02-14 19:04:19.448140] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:42.142 [2024-02-14 19:04:19.448148] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:42.142 [2024-02-14 19:04:19.448182] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:42.142 00:03:42.142 real 0m0.039s 00:03:42.142 user 0m0.021s 00:03:42.142 sys 0m0.028s 00:03:42.142 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.142 ************************************ 00:03:42.142 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.142 END TEST unittest_scsi 00:03:42.142 ************************************ 00:03:42.142 19:04:19 -- unit/unittest.sh@276 -- # uname -s 00:03:42.142 19:04:19 -- unit/unittest.sh@276 -- # '[' FreeBSD = Linux ']' 
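The scsi_pr.c messages above ("Reservation key 0xa1 don't match registrant's key 0xa", the reservation-type checks, and READ(10)/WRITE(10) opcodes 0x28/0x2a being rejected) are the error paths of SPC persistent reservations. The sketch below shows the REGISTER rule being exercised: the key presented must match the caller's existing registration before it may be replaced. Only that rule comes from SPC; the struct and function names are invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* One registrant (I_T nexus) as tracked by a persistent-reservation core. */
struct pr_registrant {
	uint64_t rkey;	/* currently registered reservation key */
};

/*
 * PR OUT / REGISTER: the RESERVATION KEY in the parameter data must match
 * the caller's existing registration before SERVICE ACTION RESERVATION KEY
 * may replace it -- otherwise the command fails, which is the
 * "Reservation key 0xa1 don't match registrant's key 0xa" message above.
 */
static bool
pr_out_register(struct pr_registrant *reg, uint64_t res_key, uint64_t sa_rkey)
{
	if (reg->rkey != res_key) {
		return false;		/* reservation conflict */
	}
	reg->rkey = sa_rkey;		/* sa_rkey == 0 would unregister in a full model */
	return true;
}
```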
00:03:42.142 19:04:19 -- unit/unittest.sh@279 -- # run_test unittest_thread /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:42.142 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.142 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.142 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.142 ************************************ 00:03:42.142 START TEST unittest_thread 00:03:42.142 ************************************ 00:03:42.142 19:04:19 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:42.142 00:03:42.142 00:03:42.142 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.142 http://cunit.sourceforge.net/ 00:03:42.142 00:03:42.142 00:03:42.142 Suite: io_channel 00:03:42.142 Test: thread_alloc ...passed 00:03:42.142 Test: thread_send_msg ...passed 00:03:42.142 Test: thread_poller ...passed 00:03:42.142 Test: poller_pause ...passed 00:03:42.142 Test: thread_for_each ...passed 00:03:42.142 Test: for_each_channel_remove ...passed 00:03:42.142 Test: for_each_channel_unreg ...[2024-02-14 19:04:19.498856] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2164:spdk_io_device_register: *ERROR*: io_device 0x820c50104 already registered (old:0x82d8b6000 new:0x82d8b6180) 00:03:42.142 passed 00:03:42.142 Test: thread_name ...passed 00:03:42.142 Test: channel ...passed 00:03:42.142 Test: channel_destroy_races ...[2024-02-14 19:04:19.499524] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x226aa8 00:03:42.142 passed 00:03:42.142 Test: thread_exit_test ...[2024-02-14 19:04:19.499997] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 630:thread_exit: *ERROR*: thread 0x82d87ba80 got timeout, and move it to the exited state forcefully 00:03:42.142 passed 00:03:42.142 Test: thread_update_stats_test ...passed 00:03:42.142 Test: nested_channel ...passed 00:03:42.142 Test: device_unregister_and_thread_exit_race ...passed 00:03:42.142 Test: cache_closest_timed_poller ...passed 00:03:42.142 Test: multi_timed_pollers_have_same_expiration ...passed 00:03:42.142 Test: io_device_lookup ...passed 00:03:42.142 Test: spdk_spin ...[2024-02-14 19:04:19.501042] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:42.142 [2024-02-14 19:04:19.501067] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820c50100 00:03:42.142 [2024-02-14 19:04:19.501087] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:42.142 [2024-02-14 19:04:19.501292] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:03:42.142 [2024-02-14 19:04:19.501312] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820c50100 00:03:42.142 [2024-02-14 19:04:19.501332] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:42.142 [2024-02-14 19:04:19.501351] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820c50100 00:03:42.143 [2024-02-14 
19:04:19.501371] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:42.143 [2024-02-14 19:04:19.501390] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820c50100 00:03:42.143 [2024-02-14 19:04:19.501409] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:03:42.143 [2024-02-14 19:04:19.501427] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820c50100 00:03:42.143 passed 00:03:42.143 Test: for_each_channel_and_thread_exit_race ...passed 00:03:42.143 Test: for_each_thread_and_thread_exit_race ...passed 00:03:42.143 00:03:42.143 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.143 suites 1 1 n/a 0 0 00:03:42.143 tests 20 20 20 0 0 00:03:42.143 asserts 409 409 409 0 n/a 00:03:42.143 00:03:42.143 Elapsed time = 0.008 seconds 00:03:42.143 00:03:42.143 real 0m0.012s 00:03:42.143 user 0m0.012s 00:03:42.143 sys 0m0.005s 00:03:42.143 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.143 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.143 ************************************ 00:03:42.143 END TEST unittest_thread 00:03:42.143 ************************************ 00:03:42.143 19:04:19 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:42.143 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.143 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.143 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.143 ************************************ 00:03:42.143 START TEST unittest_iobuf 00:03:42.143 ************************************ 00:03:42.143 19:04:19 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:42.143 00:03:42.143 00:03:42.143 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.143 http://cunit.sourceforge.net/ 00:03:42.143 00:03:42.143 00:03:42.143 Suite: io_channel 00:03:42.143 Test: iobuf ...passed 00:03:42.143 Test: iobuf_cache ...[2024-02-14 19:04:19.549509] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 313:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:42.143 [2024-02-14 19:04:19.549837] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 315:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:42.143 [2024-02-14 19:04:19.549910] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 325:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:03:42.143 [2024-02-14 19:04:19.549929] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 327:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:42.143 [2024-02-14 19:04:19.549949] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 313:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. 
You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:42.143 [2024-02-14 19:04:19.549964] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 315:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:42.143 passed 00:03:42.143 00:03:42.143 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.143 suites 1 1 n/a 0 0 00:03:42.143 tests 2 2 2 0 0 00:03:42.143 asserts 107 107 107 0 n/a 00:03:42.143 00:03:42.143 Elapsed time = 0.000 seconds 00:03:42.143 00:03:42.143 real 0m0.008s 00:03:42.143 user 0m0.000s 00:03:42.143 sys 0m0.008s 00:03:42.143 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.143 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.143 ************************************ 00:03:42.143 END TEST unittest_iobuf 00:03:42.143 ************************************ 00:03:42.407 19:04:19 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:03:42.407 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.407 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.407 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.407 ************************************ 00:03:42.407 START TEST unittest_util 00:03:42.407 ************************************ 00:03:42.407 19:04:19 -- common/autotest_common.sh@1102 -- # unittest_util 00:03:42.407 19:04:19 -- unit/unittest.sh@132 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:03:42.407 00:03:42.407 00:03:42.407 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.407 http://cunit.sourceforge.net/ 00:03:42.407 00:03:42.407 00:03:42.407 Suite: base64 00:03:42.407 Test: test_base64_get_encoded_strlen ...passed 00:03:42.407 Test: test_base64_get_decoded_len ...passed 00:03:42.407 Test: test_base64_encode ...passed 00:03:42.407 Test: test_base64_decode ...passed 00:03:42.407 Test: test_base64_urlsafe_encode ...passed 00:03:42.407 Test: test_base64_urlsafe_decode ...passed 00:03:42.407 00:03:42.407 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.407 suites 1 1 n/a 0 0 00:03:42.407 tests 6 6 6 0 0 00:03:42.407 asserts 112 112 112 0 n/a 00:03:42.407 00:03:42.407 Elapsed time = 0.000 seconds 00:03:42.407 19:04:19 -- unit/unittest.sh@133 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:03:42.407 00:03:42.407 00:03:42.407 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.407 http://cunit.sourceforge.net/ 00:03:42.407 00:03:42.407 00:03:42.407 Suite: bit_array 00:03:42.407 Test: test_1bit ...passed 00:03:42.407 Test: test_64bit ...passed 00:03:42.407 Test: test_find ...passed 00:03:42.407 Test: test_resize ...passed 00:03:42.407 Test: test_errors ...passed 00:03:42.407 Test: test_count ...passed 00:03:42.407 Test: test_mask_store_load ...passed 00:03:42.407 Test: test_mask_clear ...passed 00:03:42.407 00:03:42.407 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.407 suites 1 1 n/a 0 0 00:03:42.407 tests 8 8 8 0 0 00:03:42.407 asserts 5075 5075 5075 0 n/a 00:03:42.407 00:03:42.407 Elapsed time = 0.000 seconds 00:03:42.407 19:04:19 -- unit/unittest.sh@134 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:03:42.407 00:03:42.407 00:03:42.407 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.407 http://cunit.sourceforge.net/ 00:03:42.407 00:03:42.407 00:03:42.407 Suite: cpuset 00:03:42.407 Test: test_cpuset ...passed 00:03:42.407 Test: 
test_cpuset_parse ...[2024-02-14 19:04:19.612215] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:03:42.407 passed 00:03:42.407 Test: test_cpuset_fmt ...passed 00:03:42.407 00:03:42.407 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.407 suites 1 1 n/a 0 0 00:03:42.407 tests 3 3 3 0 0 00:03:42.407 asserts 65 65 65 0 n/a 00:03:42.407 00:03:42.407 Elapsed time = 0.000 seconds 00:03:42.407 [2024-02-14 19:04:19.612506] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:03:42.407 [2024-02-14 19:04:19.612526] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:03:42.407 [2024-02-14 19:04:19.612540] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:03:42.407 [2024-02-14 19:04:19.612553] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:03:42.407 [2024-02-14 19:04:19.612565] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:03:42.407 [2024-02-14 19:04:19.612594] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:03:42.407 [2024-02-14 19:04:19.612616] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:03:42.407 19:04:19 -- unit/unittest.sh@135 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:03:42.407 00:03:42.407 00:03:42.407 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.407 http://cunit.sourceforge.net/ 00:03:42.407 00:03:42.407 00:03:42.407 Suite: crc16 00:03:42.407 Test: test_crc16_t10dif ...passed 00:03:42.407 Test: test_crc16_t10dif_seed ...passed 00:03:42.407 Test: test_crc16_t10dif_copy ...passed 00:03:42.407 00:03:42.407 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.407 suites 1 1 n/a 0 0 00:03:42.407 tests 3 3 3 0 0 00:03:42.407 asserts 5 5 5 0 n/a 00:03:42.407 00:03:42.407 Elapsed time = 0.000 seconds 00:03:42.407 19:04:19 -- unit/unittest.sh@136 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:03:42.407 00:03:42.407 00:03:42.407 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.407 http://cunit.sourceforge.net/ 00:03:42.407 00:03:42.407 00:03:42.407 Suite: crc32_ieee 00:03:42.408 Test: test_crc32_ieee ...passed 00:03:42.408 00:03:42.408 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.408 suites 1 1 n/a 0 0 00:03:42.408 tests 1 1 1 0 0 00:03:42.408 asserts 1 1 1 0 n/a 00:03:42.408 00:03:42.408 Elapsed time = 0.000 seconds 00:03:42.408 19:04:19 -- unit/unittest.sh@137 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:03:42.408 00:03:42.408 00:03:42.408 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.408 http://cunit.sourceforge.net/ 00:03:42.408 00:03:42.408 00:03:42.408 Suite: crc32c 00:03:42.408 Test: test_crc32c ...passed 00:03:42.408 Test: test_crc32c_nvme ...passed 00:03:42.408 00:03:42.408 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.408 suites 1 1 n/a 0 0 00:03:42.408 tests 2 2 2 0 0 00:03:42.408 asserts 16 16 16 0 n/a 00:03:42.408 00:03:42.408 Elapsed time = 0.000 seconds 
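The crc16/crc32_ieee/crc32c suites above exercise SPDK's checksum helpers. For reference, a bit-at-a-time CRC-32C (Castagnoli, reflected polynomial 0x82F63B78), which is the digest NVMe/TCP uses for header and data digests, looks like the sketch below. SPDK's real helpers are table-driven or hardware-accelerated; the function name here is made up for illustration.

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32C (Castagnoli): init 0xFFFFFFFF, reflected poly 0x82F63B78,
 * final XOR 0xFFFFFFFF.  Check value: crc32c("123456789") == 0xE3069283. */
static uint32_t
crc32c(const void *data, size_t len)
{
	const uint8_t *p = data;
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int bit = 0; bit < 8; bit++) {
			/* conditionally XOR the polynomial when the LSB is set */
			crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}
```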
00:03:42.408 19:04:19 -- unit/unittest.sh@138 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:03:42.408 00:03:42.408 00:03:42.408 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.408 http://cunit.sourceforge.net/ 00:03:42.408 00:03:42.408 00:03:42.408 Suite: crc64 00:03:42.408 Test: test_crc64_nvme ...passed 00:03:42.408 00:03:42.408 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.408 suites 1 1 n/a 0 0 00:03:42.408 tests 1 1 1 0 0 00:03:42.408 asserts 4 4 4 0 n/a 00:03:42.408 00:03:42.408 Elapsed time = 0.000 seconds 00:03:42.408 19:04:19 -- unit/unittest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:03:42.408 00:03:42.408 00:03:42.408 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.408 http://cunit.sourceforge.net/ 00:03:42.408 00:03:42.408 00:03:42.408 Suite: string 00:03:42.408 Test: test_parse_ip_addr ...passed 00:03:42.408 Test: test_str_chomp ...passed 00:03:42.408 Test: test_parse_capacity ...passed 00:03:42.408 Test: test_sprintf_append_realloc ...passed 00:03:42.408 Test: test_strtol ...passed 00:03:42.408 Test: test_strtoll ...passed 00:03:42.408 Test: test_strarray ...passed 00:03:42.408 Test: test_strcpy_replace ...passed 00:03:42.408 00:03:42.408 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.408 suites 1 1 n/a 0 0 00:03:42.408 tests 8 8 8 0 0 00:03:42.408 asserts 161 161 161 0 n/a 00:03:42.408 00:03:42.408 Elapsed time = 0.000 seconds 00:03:42.408 19:04:19 -- unit/unittest.sh@140 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:03:42.408 00:03:42.408 00:03:42.408 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.408 http://cunit.sourceforge.net/ 00:03:42.408 00:03:42.408 00:03:42.408 Suite: dif 00:03:42.408 Test: dif_generate_and_verify_test ...[2024-02-14 19:04:19.650231] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:42.408 [2024-02-14 19:04:19.650699] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:42.408 [2024-02-14 19:04:19.650787] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:42.408 passed 00:03:42.408 Test: dif_disable_check_test ...[2024-02-14 19:04:19.650868] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:42.408 [2024-02-14 19:04:19.650946] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:42.408 [2024-02-14 19:04:19.651023] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:42.408 [2024-02-14 19:04:19.651295] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:42.408 [2024-02-14 19:04:19.651439] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:42.408 [2024-02-14 19:04:19.651555] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:42.408 passed 00:03:42.408 Test: 
dif_generate_and_verify_different_pi_formats_test ...[2024-02-14 19:04:19.651919] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:03:42.408 [2024-02-14 19:04:19.652004] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:03:42.408 [2024-02-14 19:04:19.652084] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:03:42.408 [2024-02-14 19:04:19.652164] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:03:42.408 [2024-02-14 19:04:19.652242] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:42.408 [2024-02-14 19:04:19.652320] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:42.408 [2024-02-14 19:04:19.652397] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:42.408 [2024-02-14 19:04:19.652474] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:42.408 [2024-02-14 19:04:19.652553] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:42.408 [2024-02-14 19:04:19.652631] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:42.408 passed 00:03:42.408 Test: dif_apptag_mask_test ...passed 00:03:42.408 Test: dif_sec_512_md_0_error_test ...passed 00:03:42.408 Test: dif_sec_4096_md_0_error_test ...passed 00:03:42.408 Test: dif_sec_4100_md_128_error_test ...passed 00:03:42.408 Test: dif_guard_seed_test ...passed 00:03:42.408 Test: dif_guard_value_test ...passed 00:03:42.408 Test: dif_disable_sec_512_md_8_single_iov_test ...[2024-02-14 19:04:19.652708] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:42.408 [2024-02-14 19:04:19.652792] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:42.408 [2024-02-14 19:04:19.652873] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:42.408 [2024-02-14 19:04:19.652926] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:42.408 [2024-02-14 19:04:19.652949] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:42.408 [2024-02-14 19:04:19.652967] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
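The long run of dif.c "Failed to compare Guard/App Tag/Ref Tag" messages that dominates the rest of this log is again deliberately provoked: each verify test corrupts one field of the 8-byte T10 Protection Information tuple and checks that verification fails (the "Metadata size is smaller than DIF size" and "Zero block size" lines are the ctx-init argument checks). The layout below is the standard T10 PI format; the struct and helper names are invented, and crc16_t10dif() is shown only as a hypothetical prototype rather than a real implementation.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* T10 Protection Information: 8 bytes carried with each logical block. */
struct t10_pi_tuple {
	uint16_t guard;   /* CRC16-T10DIF of the block's data              */
	uint16_t app_tag; /* application-defined tag                       */
	uint32_t ref_tag; /* typically the lower 32 bits of the LBA        */
};

/* Hypothetical helper; a real one would implement CRC16-T10DIF (poly 0x8BB7). */
uint16_t crc16_t10dif(const void *data, size_t len);

/* Verify one block, mirroring the three comparison failures in the log. */
static bool
dif_verify_block(const void *data, size_t block_size,
		 const struct t10_pi_tuple *pi,
		 uint16_t expected_app_tag, uint32_t expected_ref_tag)
{
	if (pi->guard != crc16_t10dif(data, block_size)) {
		return false;	/* "Failed to compare Guard"   */
	}
	if (pi->app_tag != expected_app_tag) {
		return false;	/* "Failed to compare App Tag" */
	}
	if (pi->ref_tag != expected_ref_tag) {
		return false;	/* "Failed to compare Ref Tag" */
	}
	return true;
}
```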
00:03:42.408 [2024-02-14 19:04:19.652988] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:42.408 [2024-02-14 19:04:19.653005] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:42.408 passed 00:03:42.408 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:03:42.408 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:42.408 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:42.408 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:42.408 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:03:42.408 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:42.408 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:42.408 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:03:42.408 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:42.408 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:03:42.408 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:03:42.408 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:42.408 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:42.408 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:42.408 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:42.408 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:42.408 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:42.408 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-02-14 19:04:19.659727] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:03:42.408 [2024-02-14 19:04:19.660033] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:03:42.408 [2024-02-14 19:04:19.660326] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.408 [2024-02-14 19:04:19.660618] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.408 [2024-02-14 19:04:19.660910] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.408 [2024-02-14 19:04:19.661213] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.408 [2024-02-14 19:04:19.661509] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=334e 00:03:42.408 [2024-02-14 19:04:19.661665] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=1416 00:03:42.408 [2024-02-14 19:04:19.661822] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:03:42.408 [2024-02-14 19:04:19.662112] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=88, Expected=38564660, Actual=38574660 00:03:42.408 [2024-02-14 19:04:19.662424] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.408 [2024-02-14 19:04:19.662721] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.663019] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.409 [2024-02-14 19:04:19.663316] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.409 [2024-02-14 19:04:19.663628] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7717268c 00:03:42.409 [2024-02-14 19:04:19.663787] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a76cb97d 00:03:42.409 [2024-02-14 19:04:19.663944] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:42.409 [2024-02-14 19:04:19.664233] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88000a2d4837a266, Actual=88010a2d4837a266 00:03:42.409 [2024-02-14 19:04:19.664526] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.664822] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.665117] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.409 [2024-02-14 19:04:19.665412] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.409 [2024-02-14 19:04:19.665711] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b834ace272bc7c93 00:03:42.409 passed 00:03:42.409 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-02-14 19:04:19.665873] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cd1c79728e1b1de3 00:03:42.409 [2024-02-14 19:04:19.665909] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:03:42.409 [2024-02-14 19:04:19.665950] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:03:42.409 [2024-02-14 19:04:19.665990] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.666038] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.666079] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.409 [2024-02-14 19:04:19.666119] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.409 [2024-02-14 19:04:19.666159] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=334e 00:03:42.409 [2024-02-14 19:04:19.666188] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=1416 00:03:42.409 [2024-02-14 19:04:19.666218] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:03:42.409 [2024-02-14 19:04:19.666258] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38564660, Actual=38574660 00:03:42.409 [2024-02-14 19:04:19.666297] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.666337] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.666377] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.409 [2024-02-14 19:04:19.666416] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.409 [2024-02-14 19:04:19.666457] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7717268c 00:03:42.409 [2024-02-14 19:04:19.666487] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a76cb97d 00:03:42.409 [2024-02-14 19:04:19.666516] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:42.409 [2024-02-14 19:04:19.666557] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88000a2d4837a266, Actual=88010a2d4837a266 00:03:42.409 [2024-02-14 19:04:19.666597] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.666637] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.666677] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.409 passed 00:03:42.409 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-02-14 19:04:19.666717] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.409 [2024-02-14 19:04:19.666757] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b834ace272bc7c93 00:03:42.409 [2024-02-14 
19:04:19.666786] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cd1c79728e1b1de3 00:03:42.409 [2024-02-14 19:04:19.666820] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:03:42.409 [2024-02-14 19:04:19.666860] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:03:42.409 [2024-02-14 19:04:19.666900] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.666939] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.666980] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.409 [2024-02-14 19:04:19.667019] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.409 [2024-02-14 19:04:19.667059] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=334e 00:03:42.409 [2024-02-14 19:04:19.667088] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=1416 00:03:42.409 [2024-02-14 19:04:19.667118] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:03:42.409 [2024-02-14 19:04:19.667157] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38564660, Actual=38574660 00:03:42.409 [2024-02-14 19:04:19.667197] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.667237] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.667277] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.409 [2024-02-14 19:04:19.667316] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.409 [2024-02-14 19:04:19.667371] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7717268c 00:03:42.409 [2024-02-14 19:04:19.667401] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a76cb97d 00:03:42.409 [2024-02-14 19:04:19.667431] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:42.409 [2024-02-14 19:04:19.667471] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88000a2d4837a266, Actual=88010a2d4837a266 00:03:42.409 [2024-02-14 19:04:19.667511] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.667552] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 passed 00:03:42.409 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-02-14 19:04:19.667592] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.409 [2024-02-14 19:04:19.667632] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.409 [2024-02-14 19:04:19.667672] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b834ace272bc7c93 00:03:42.409 [2024-02-14 19:04:19.667701] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cd1c79728e1b1de3 00:03:42.409 [2024-02-14 19:04:19.667735] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:03:42.409 [2024-02-14 19:04:19.667775] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:03:42.409 [2024-02-14 19:04:19.667814] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.667854] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.667895] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.409 [2024-02-14 19:04:19.667935] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.409 [2024-02-14 19:04:19.667975] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=334e 00:03:42.409 [2024-02-14 19:04:19.668005] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=1416 00:03:42.409 [2024-02-14 19:04:19.668035] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:03:42.409 [2024-02-14 19:04:19.668074] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38564660, Actual=38574660 00:03:42.409 [2024-02-14 19:04:19.668114] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.668154] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.409 [2024-02-14 19:04:19.668194] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.409 [2024-02-14 19:04:19.668233] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to 
compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.410 [2024-02-14 19:04:19.668272] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7717268c 00:03:42.410 [2024-02-14 19:04:19.668301] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a76cb97d 00:03:42.410 [2024-02-14 19:04:19.668330] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:42.410 [2024-02-14 19:04:19.668371] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88000a2d4837a266, Actual=88010a2d4837a266 00:03:42.410 [2024-02-14 19:04:19.668410] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.668449] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.668488] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.410 passed 00:03:42.410 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...passed 00:03:42.410 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-02-14 19:04:19.668528] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.410 [2024-02-14 19:04:19.668568] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b834ace272bc7c93 00:03:42.410 [2024-02-14 19:04:19.668597] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cd1c79728e1b1de3 00:03:42.410 [2024-02-14 19:04:19.668629] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:03:42.410 [2024-02-14 19:04:19.668668] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:03:42.410 [2024-02-14 19:04:19.668708] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.668748] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.668787] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.410 [2024-02-14 19:04:19.668828] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.410 [2024-02-14 19:04:19.668868] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=334e 00:03:42.410 [2024-02-14 19:04:19.668897] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=1416 00:03:42.410 
[2024-02-14 19:04:19.668929] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:03:42.410 [2024-02-14 19:04:19.668968] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38564660, Actual=38574660 00:03:42.410 [2024-02-14 19:04:19.669008] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.669048] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.669088] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.410 [2024-02-14 19:04:19.669127] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.410 [2024-02-14 19:04:19.669167] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7717268c 00:03:42.410 [2024-02-14 19:04:19.669196] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a76cb97d 00:03:42.410 [2024-02-14 19:04:19.669225] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:42.410 [2024-02-14 19:04:19.669265] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88000a2d4837a266, Actual=88010a2d4837a266 00:03:42.410 [2024-02-14 19:04:19.669305] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.669345] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.669385] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.410 [2024-02-14 19:04:19.669425] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.410 passed 00:03:42.410 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-02-14 19:04:19.669465] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b834ace272bc7c93 00:03:42.410 [2024-02-14 19:04:19.669494] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cd1c79728e1b1de3 00:03:42.410 [2024-02-14 19:04:19.669528] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:03:42.410 [2024-02-14 19:04:19.669568] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:03:42.410 [2024-02-14 19:04:19.669608] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.669648] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.669687] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.410 [2024-02-14 19:04:19.669727] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.410 passed 00:03:42.410 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-02-14 19:04:19.669768] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=334e 00:03:42.410 [2024-02-14 19:04:19.669798] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=1416 00:03:42.410 [2024-02-14 19:04:19.669830] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:03:42.410 [2024-02-14 19:04:19.669871] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38564660, Actual=38574660 00:03:42.410 [2024-02-14 19:04:19.669910] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.669949] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.669990] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.410 [2024-02-14 19:04:19.670029] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.410 [2024-02-14 19:04:19.670069] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7717268c 00:03:42.410 [2024-02-14 19:04:19.670099] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a76cb97d 00:03:42.410 [2024-02-14 19:04:19.670128] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:42.410 [2024-02-14 19:04:19.670168] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88000a2d4837a266, Actual=88010a2d4837a266 00:03:42.410 [2024-02-14 19:04:19.670208] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.670248] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.670296] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.410 [2024-02-14 19:04:19.670336] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, 
Expected=58, Actual=59 00:03:42.410 [2024-02-14 19:04:19.670376] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b834ace272bc7c93 00:03:42.410 passed 00:03:42.410 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:03:42.410 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...[2024-02-14 19:04:19.670405] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=cd1c79728e1b1de3 00:03:42.410 passed 00:03:42.410 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:42.410 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:42.410 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:42.410 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:42.410 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:42.410 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:42.410 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:42.410 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-02-14 19:04:19.675707] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:03:42.410 [2024-02-14 19:04:19.675878] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=905d, Actual=905c 00:03:42.410 [2024-02-14 19:04:19.676051] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.676211] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.410 [2024-02-14 19:04:19.676371] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.410 [2024-02-14 19:04:19.676532] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.410 [2024-02-14 19:04:19.676690] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=334e 00:03:42.410 [2024-02-14 19:04:19.676849] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=c721 00:03:42.410 [2024-02-14 19:04:19.677009] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:03:42.410 [2024-02-14 19:04:19.677171] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=aa8ef4e6, Actual=aa8ff4e6 00:03:42.411 [2024-02-14 19:04:19.677329] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.677486] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.677644] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, 
Actual=1000000000058 00:03:42.411 [2024-02-14 19:04:19.677802] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.411 [2024-02-14 19:04:19.677959] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7717268c 00:03:42.411 [2024-02-14 19:04:19.678116] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=759dbfb1 00:03:42.411 [2024-02-14 19:04:19.678280] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:42.411 [2024-02-14 19:04:19.678439] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=6f6f3d9425203af3, Actual=6f6e3d9425203af3 00:03:42.411 [2024-02-14 19:04:19.678597] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.678755] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.678913] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.411 [2024-02-14 19:04:19.679071] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.411 [2024-02-14 19:04:19.679236] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b834ace272bc7c93 00:03:42.411 [2024-02-14 19:04:19.679407] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=7d7b594490f68f27 00:03:42.411 passed 00:03:42.411 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-02-14 19:04:19.679459] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:03:42.411 [2024-02-14 19:04:19.679500] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=905d, Actual=905c 00:03:42.411 [2024-02-14 19:04:19.679543] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.679584] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.679625] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.411 [2024-02-14 19:04:19.679667] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.411 [2024-02-14 19:04:19.679708] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=334e 00:03:42.411 [2024-02-14 19:04:19.679748] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=88, Expected=2d16, Actual=c721 00:03:42.411 [2024-02-14 19:04:19.679789] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:03:42.411 [2024-02-14 19:04:19.679830] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=aa8ef4e6, Actual=aa8ff4e6 00:03:42.411 [2024-02-14 19:04:19.679880] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.679921] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.679961] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.411 [2024-02-14 19:04:19.680001] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.411 [2024-02-14 19:04:19.680041] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7717268c 00:03:42.411 [2024-02-14 19:04:19.680081] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=759dbfb1 00:03:42.411 [2024-02-14 19:04:19.680122] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:42.411 [2024-02-14 19:04:19.680163] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=6f6f3d9425203af3, Actual=6f6e3d9425203af3 00:03:42.411 [2024-02-14 19:04:19.680204] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.680243] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.680284] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.411 [2024-02-14 19:04:19.680324] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.411 [2024-02-14 19:04:19.680365] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b834ace272bc7c93 00:03:42.411 passed 00:03:42.411 Test: dix_sec_512_md_0_error ...passed 00:03:42.411 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-02-14 19:04:19.680405] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=7d7b594490f68f27 00:03:42.411 [2024-02-14 19:04:19.680416] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
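The dix_sec_512_md_0_error case just above appears to feed spdk_dif_ctx_init() a metadata size of zero (the md_0 in the test name), so the context is rejected with the "Metadata size is smaller than DIF size" error before any verification runs. A minimal standalone sketch of that kind of size check follows; it is illustrative C only, not the real lib/util/dif.c code, and relies only on the standard T10 DIF layout of an 8-byte tuple (2-byte guard CRC, 2-byte application tag, 4-byte reference tag).

/* Illustrative only: the real check lives in spdk_dif_ctx_init() in lib/util/dif.c.
 * A T10 DIF tuple is 8 bytes, so the per-block metadata region must be at least
 * that large before a DIF context can be initialized. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DIF_TUPLE_SIZE 8u

static bool md_size_is_valid(uint32_t md_size)
{
        if (md_size < DIF_TUPLE_SIZE) {
                fprintf(stderr, "Metadata size is smaller than DIF size.\n");
                return false;
        }
        return true;
}

int main(void)
{
        /* dix_sec_512_md_0_error: a 512-byte block with md_size == 0 must be rejected. */
        return md_size_is_valid(0) ? 1 : 0;
}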
00:03:42.411 passed 00:03:42.411 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:42.411 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:42.411 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:42.411 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:42.411 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:42.411 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:42.411 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:42.411 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:42.411 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-02-14 19:04:19.685498] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:03:42.411 [2024-02-14 19:04:19.685668] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=905d, Actual=905c 00:03:42.411 [2024-02-14 19:04:19.685830] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.685990] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.686157] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.411 [2024-02-14 19:04:19.686316] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.411 [2024-02-14 19:04:19.686482] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=334e 00:03:42.411 [2024-02-14 19:04:19.686637] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=c721 00:03:42.411 [2024-02-14 19:04:19.686790] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:03:42.411 [2024-02-14 19:04:19.686943] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=aa8ef4e6, Actual=aa8ff4e6 00:03:42.411 [2024-02-14 19:04:19.687097] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.687248] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.687433] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.411 [2024-02-14 19:04:19.687599] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.411 [2024-02-14 19:04:19.687755] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7717268c 00:03:42.411 [2024-02-14 19:04:19.687911] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=eaa640ac, Actual=759dbfb1 00:03:42.411 [2024-02-14 19:04:19.688066] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:42.411 [2024-02-14 19:04:19.688222] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=6f6f3d9425203af3, Actual=6f6e3d9425203af3 00:03:42.411 [2024-02-14 19:04:19.688377] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.688533] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.411 [2024-02-14 19:04:19.688689] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.411 [2024-02-14 19:04:19.688844] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.411 [2024-02-14 19:04:19.688999] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b834ace272bc7c93 00:03:42.411 [2024-02-14 19:04:19.689152] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=7d7b594490f68f27 00:03:42.411 passed 00:03:42.411 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-02-14 19:04:19.689203] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:03:42.411 [2024-02-14 19:04:19.689244] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=905d, Actual=905c 00:03:42.411 [2024-02-14 19:04:19.689284] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.412 [2024-02-14 19:04:19.689324] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.412 [2024-02-14 19:04:19.689364] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.412 [2024-02-14 19:04:19.689405] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058 00:03:42.412 [2024-02-14 19:04:19.689445] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=334e 00:03:42.412 [2024-02-14 19:04:19.689485] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=c721 00:03:42.412 [2024-02-14 19:04:19.689525] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab653ed, Actual=1ab753ed 00:03:42.412 [2024-02-14 19:04:19.689565] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=aa8ef4e6, Actual=aa8ff4e6 00:03:42.412 [2024-02-14 19:04:19.689605] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=88, Expected=88, Actual=89 00:03:42.412 [2024-02-14 19:04:19.689644] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.412 [2024-02-14 19:04:19.689683] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.412 [2024-02-14 19:04:19.689723] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058 00:03:42.412 [2024-02-14 19:04:19.689762] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7717268c 00:03:42.412 [2024-02-14 19:04:19.689801] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=759dbfb1 00:03:42.412 [2024-02-14 19:04:19.689841] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:42.412 [2024-02-14 19:04:19.689881] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=6f6f3d9425203af3, Actual=6f6e3d9425203af3 00:03:42.412 [2024-02-14 19:04:19.689920] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.412 [2024-02-14 19:04:19.689960] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:03:42.412 [2024-02-14 19:04:19.690000] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.412 [2024-02-14 19:04:19.690040] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:03:42.412 [2024-02-14 19:04:19.690080] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b834ace272bc7c93 00:03:42.412 passed 00:03:42.412 Test: set_md_interleave_iovs_test ...[2024-02-14 19:04:19.690119] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=7d7b594490f68f27 00:03:42.412 passed 00:03:42.412 Test: set_md_interleave_iovs_split_test ...passed 00:03:42.412 Test: dif_generate_stream_pi_16_test ...passed 00:03:42.412 Test: dif_generate_stream_test ...passed 00:03:42.412 Test: set_md_interleave_iovs_alignment_test ...[2024-02-14 19:04:19.690969] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
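The set_md_interleave_iovs_alignment_test error just above comes from spdk_dif_set_md_interleave_iovs() refusing a conversion that would overrun the supplied iovecs. One way such a capacity check can look is sketched below; this is a simplification in standalone C, since the real routine in lib/util/dif.c also tracks offsets and partially filled blocks.

/* Simplified capacity check: interleaving metadata after every data block means
 * the destination iovecs must hold num_blocks * (block_size + md_size) bytes;
 * otherwise the conversion would overflow the caller's buffers. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>

static bool iovs_can_hold_interleaved(const struct iovec *iovs, int iovcnt,
                                      uint32_t num_blocks, uint32_t block_size,
                                      uint32_t md_size)
{
        size_t need = (size_t)num_blocks * (block_size + md_size);
        size_t have = 0;

        for (int i = 0; i < iovcnt; i++) {
                have += iovs[i].iov_len;
        }

        return have >= need;    /* false -> "Buffer overflow will occur." */
}

int main(void)
{
        char buf[2048];
        struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };

        /* 4 blocks of 512B data + 8B metadata need 2080 bytes: this must fail. */
        return iovs_can_hold_interleaved(&iov, 1, 4, 512, 8) ? 1 : 0;
}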
00:03:42.412 passed 00:03:42.412 Test: dif_generate_split_test ...passed 00:03:42.412 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:03:42.412 Test: dif_verify_split_test ...passed 00:03:42.412 Test: dif_verify_stream_multi_segments_test ...passed 00:03:42.412 Test: update_crc32c_pi_16_test ...passed 00:03:42.412 Test: update_crc32c_test ...passed 00:03:42.412 Test: dif_update_crc32c_split_test ...passed 00:03:42.412 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:03:42.412 Test: get_range_with_md_test ...passed 00:03:42.412 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:03:42.412 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:03:42.412 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:42.412 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:03:42.412 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:03:42.412 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:42.412 Test: dif_generate_and_verify_unmap_test ...passed 00:03:42.412 00:03:42.412 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.412 suites 1 1 n/a 0 0 00:03:42.412 tests 79 79 79 0 0 00:03:42.412 asserts 3584 3584 3584 0 n/a 00:03:42.412 00:03:42.412 Elapsed time = 0.039 seconds 00:03:42.412 19:04:19 -- unit/unittest.sh@141 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:03:42.412 00:03:42.412 00:03:42.412 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.412 http://cunit.sourceforge.net/ 00:03:42.412 00:03:42.412 00:03:42.412 Suite: iov 00:03:42.412 Test: test_single_iov ...passed 00:03:42.412 Test: test_simple_iov ...passed 00:03:42.412 Test: test_complex_iov ...passed 00:03:42.412 Test: test_iovs_to_buf ...passed 00:03:42.412 Test: test_buf_to_iovs ...passed 00:03:42.412 Test: test_memset ...passed 00:03:42.412 Test: test_iov_one ...passed 00:03:42.412 Test: test_iov_xfer ...passed 00:03:42.412 00:03:42.412 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.412 suites 1 1 n/a 0 0 00:03:42.412 tests 8 8 8 0 0 00:03:42.412 asserts 156 156 156 0 n/a 00:03:42.412 00:03:42.412 Elapsed time = 0.000 seconds 00:03:42.412 19:04:19 -- unit/unittest.sh@142 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:03:42.412 00:03:42.412 00:03:42.412 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.412 http://cunit.sourceforge.net/ 00:03:42.412 00:03:42.412 00:03:42.412 Suite: math 00:03:42.412 Test: test_serial_number_arithmetic ...passed 00:03:42.412 Suite: erase 00:03:42.412 Test: test_memset_s ...passed 00:03:42.412 00:03:42.412 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.412 suites 2 2 n/a 0 0 00:03:42.412 tests 2 2 2 0 0 00:03:42.412 asserts 18 18 18 0 n/a 00:03:42.412 00:03:42.412 Elapsed time = 0.000 seconds 00:03:42.412 19:04:19 -- unit/unittest.sh@143 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:03:42.412 00:03:42.412 00:03:42.412 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.412 http://cunit.sourceforge.net/ 00:03:42.412 00:03:42.412 00:03:42.412 Suite: pipe 00:03:42.412 Test: test_create_destroy ...passed 00:03:42.412 Test: test_write_get_buffer ...passed 00:03:42.412 Test: test_write_advance ...passed 00:03:42.412 Test: test_read_get_buffer ...passed 00:03:42.412 Test: test_read_advance ...passed 00:03:42.412 Test: test_data ...passed 00:03:42.412 00:03:42.412 Run Summary: 
Type Total Ran Passed Failed Inactive 00:03:42.412 suites 1 1 n/a 0 0 00:03:42.412 tests 6 6 6 0 0 00:03:42.412 asserts 251 251 251 0 n/a 00:03:42.412 00:03:42.412 Elapsed time = 0.000 seconds 00:03:42.412 19:04:19 -- unit/unittest.sh@144 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:03:42.412 00:03:42.412 00:03:42.412 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.412 http://cunit.sourceforge.net/ 00:03:42.412 00:03:42.412 00:03:42.412 Suite: xor 00:03:42.412 Test: test_xor_gen ...passed 00:03:42.412 00:03:42.412 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.412 suites 1 1 n/a 0 0 00:03:42.412 tests 1 1 1 0 0 00:03:42.412 asserts 17 17 17 0 n/a 00:03:42.412 00:03:42.412 Elapsed time = 0.000 seconds 00:03:42.412 00:03:42.412 real 0m0.134s 00:03:42.412 user 0m0.045s 00:03:42.412 sys 0m0.089s 00:03:42.412 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.412 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.412 ************************************ 00:03:42.412 END TEST unittest_util 00:03:42.412 ************************************ 00:03:42.412 19:04:19 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:42.413 19:04:19 -- unit/unittest.sh@285 -- # run_test unittest_dma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:42.413 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.413 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.413 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.413 ************************************ 00:03:42.413 START TEST unittest_dma 00:03:42.413 ************************************ 00:03:42.413 19:04:19 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:42.413 00:03:42.413 00:03:42.413 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.413 http://cunit.sourceforge.net/ 00:03:42.413 00:03:42.413 00:03:42.413 Suite: dma_suite 00:03:42.413 Test: test_dma ...[2024-02-14 19:04:19.772645] /usr/home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:03:42.413 passed 00:03:42.413 00:03:42.413 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.413 suites 1 1 n/a 0 0 00:03:42.413 tests 1 1 1 0 0 00:03:42.413 asserts 50 50 50 0 n/a 00:03:42.413 00:03:42.413 Elapsed time = 0.000 seconds 00:03:42.413 00:03:42.413 real 0m0.006s 00:03:42.413 user 0m0.000s 00:03:42.413 sys 0m0.008s 00:03:42.413 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.413 ************************************ 00:03:42.413 END TEST unittest_dma 00:03:42.413 ************************************ 00:03:42.413 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.413 19:04:19 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:03:42.413 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.413 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.413 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.413 ************************************ 00:03:42.413 START TEST unittest_init 00:03:42.413 ************************************ 00:03:42.413 19:04:19 -- common/autotest_common.sh@1102 -- # unittest_init 00:03:42.413 19:04:19 -- unit/unittest.sh@148 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:03:42.413 
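The subsystem_ut binary invoked here drives spdk_subsystem_init()'s dependency handling; in the run below, the two *ERROR* lines cover a subsystem whose declared dependency was never registered ("subsystem A dependency B is missing") and a required subsystem that is absent altogether ("subsystem C is missing"). A minimal sketch of that kind of dependency validation follows, written as standalone C with invented names rather than the SPDK data structures.

/* Illustrative dependency check before topological sorting: every name in a
 * subsystem's depends_on list must itself be a registered subsystem. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct subsystem {
        const char *name;
        const char **depends_on;        /* NULL-terminated list of dependency names */
};

static bool is_registered(const struct subsystem *all, int count, const char *name)
{
        for (int i = 0; i < count; i++) {
                if (strcmp(all[i].name, name) == 0) {
                        return true;
                }
        }
        return false;
}

static bool deps_are_satisfied(const struct subsystem *all, int count)
{
        for (int i = 0; i < count; i++) {
                for (const char **d = all[i].depends_on; d != NULL && *d != NULL; d++) {
                        if (!is_registered(all, count, *d)) {
                                fprintf(stderr, "subsystem %s dependency %s is missing\n",
                                        all[i].name, *d);
                                return false;
                        }
                }
        }
        return true;
}

int main(void)
{
        const char *a_deps[] = { "B", NULL };
        const struct subsystem subsystems[] = {
                { "A", a_deps },        /* depends on B, which is not registered */
        };

        return deps_are_satisfied(subsystems, 1) ? 1 : 0;
}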
00:03:42.413 00:03:42.413 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.413 http://cunit.sourceforge.net/ 00:03:42.413 00:03:42.413 00:03:42.413 Suite: subsystem_suite 00:03:42.413 Test: subsystem_sort_test_depends_on_single ...passed 00:03:42.413 Test: subsystem_sort_test_depends_on_multiple ...passed 00:03:42.413 Test: subsystem_sort_test_missing_dependency ...passed 00:03:42.413 00:03:42.413 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.413 suites 1 1 n/a 0 0 00:03:42.413 tests 3 3 3 0 0 00:03:42.413 asserts 20 20 20 0 n/a 00:03:42.413 00:03:42.413 Elapsed time = 0.000 seconds 00:03:42.413 [2024-02-14 19:04:19.813572] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:03:42.413 [2024-02-14 19:04:19.813800] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:03:42.413 00:03:42.413 real 0m0.006s 00:03:42.413 user 0m0.006s 00:03:42.413 sys 0m0.000s 00:03:42.413 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.413 ************************************ 00:03:42.413 END TEST unittest_init 00:03:42.413 ************************************ 00:03:42.413 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.670 19:04:19 -- unit/unittest.sh@289 -- # '[' no = yes ']' 00:03:42.670 00:03:42.670 00:03:42.670 ===================== 00:03:42.670 All unit tests passed 00:03:42.670 19:04:19 -- unit/unittest.sh@302 -- # set +x 00:03:42.670 ===================== 00:03:42.670 WARN: lcov not installed or SPDK built without coverage! 00:03:42.670 WARN: neither valgrind nor ASAN is enabled! 00:03:42.670 00:03:42.670 00:03:42.670 00:03:42.670 real 0m15.868s 00:03:42.670 user 0m13.135s 00:03:42.670 sys 0m1.643s 00:03:42.670 19:04:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.670 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.670 ************************************ 00:03:42.670 END TEST unittest 00:03:42.670 ************************************ 00:03:42.670 19:04:19 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:03:42.670 19:04:19 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:42.670 19:04:19 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:42.670 19:04:19 -- spdk/autotest.sh@173 -- # timing_enter lib 00:03:42.670 19:04:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:42.670 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.670 19:04:19 -- spdk/autotest.sh@175 -- # run_test env /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:42.670 19:04:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.670 19:04:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.670 19:04:19 -- common/autotest_common.sh@10 -- # set +x 00:03:42.670 ************************************ 00:03:42.670 START TEST env 00:03:42.670 ************************************ 00:03:42.670 19:04:19 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:42.928 * Looking for test storage... 
00:03:42.928 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/env 00:03:42.928 19:04:20 -- env/env.sh@10 -- # run_test env_memory /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:42.928 19:04:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.928 19:04:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.928 19:04:20 -- common/autotest_common.sh@10 -- # set +x 00:03:42.928 ************************************ 00:03:42.928 START TEST env_memory 00:03:42.928 ************************************ 00:03:42.928 19:04:20 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:42.928 00:03:42.928 00:03:42.928 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.928 http://cunit.sourceforge.net/ 00:03:42.928 00:03:42.928 00:03:42.928 Suite: memory 00:03:42.928 Test: alloc and free memory map ...[2024-02-14 19:04:20.233865] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:42.928 passed 00:03:42.928 Test: mem map translation ...[2024-02-14 19:04:20.241960] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:42.928 [2024-02-14 19:04:20.242022] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:42.928 [2024-02-14 19:04:20.242040] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:42.928 [2024-02-14 19:04:20.242050] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:42.928 passed 00:03:42.928 Test: mem map registration ...[2024-02-14 19:04:20.251402] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:42.928 [2024-02-14 19:04:20.251451] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:42.928 passed 00:03:42.928 Test: mem map adjacent registrations ...passed 00:03:42.928 00:03:42.928 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.928 suites 1 1 n/a 0 0 00:03:42.928 tests 4 4 4 0 0 00:03:42.928 asserts 152 152 152 0 n/a 00:03:42.928 00:03:42.928 Elapsed time = 0.031 seconds 00:03:42.928 00:03:42.928 real 0m0.049s 00:03:42.928 user 0m0.033s 00:03:42.928 sys 0m0.016s 00:03:42.928 19:04:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.928 19:04:20 -- common/autotest_common.sh@10 -- # set +x 00:03:42.928 ************************************ 00:03:42.928 END TEST env_memory 00:03:42.928 ************************************ 00:03:42.929 19:04:20 -- env/env.sh@11 -- # run_test env_vtophys /usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:42.929 19:04:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:42.929 19:04:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:42.929 19:04:20 -- common/autotest_common.sh@10 -- # set +x 00:03:42.929 ************************************ 00:03:42.929 START TEST env_vtophys 00:03:42.929 ************************************ 00:03:42.929 19:04:20 -- common/autotest_common.sh@1102 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:42.929 EAL: lib.eal log level changed from notice to debug 00:03:42.929 EAL: Sysctl reports 10 cpus 00:03:42.929 EAL: Detected lcore 0 as core 0 on socket 0 00:03:42.929 EAL: Detected lcore 1 as core 0 on socket 0 00:03:42.929 EAL: Detected lcore 2 as core 0 on socket 0 00:03:42.929 EAL: Detected lcore 3 as core 0 on socket 0 00:03:42.929 EAL: Detected lcore 4 as core 0 on socket 0 00:03:42.929 EAL: Detected lcore 5 as core 0 on socket 0 00:03:42.929 EAL: Detected lcore 6 as core 0 on socket 0 00:03:42.929 EAL: Detected lcore 7 as core 0 on socket 0 00:03:42.929 EAL: Detected lcore 8 as core 0 on socket 0 00:03:42.929 EAL: Detected lcore 9 as core 0 on socket 0 00:03:42.929 EAL: Maximum logical cores by configuration: 128 00:03:42.929 EAL: Detected CPU lcores: 10 00:03:42.929 EAL: Detected NUMA nodes: 1 00:03:42.929 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:42.929 EAL: Checking presence of .so 'librte_eal.so.24' 00:03:42.929 EAL: Checking presence of .so 'librte_eal.so' 00:03:42.929 EAL: Detected static linkage of DPDK 00:03:42.929 EAL: No shared files mode enabled, IPC will be disabled 00:03:43.187 EAL: PCI scan found 10 devices 00:03:43.187 EAL: Specific IOVA mode is not requested, autodetecting 00:03:43.187 EAL: Selecting IOVA mode according to bus requests 00:03:43.187 EAL: Bus pci wants IOVA as 'PA' 00:03:43.187 EAL: Selected IOVA mode 'PA' 00:03:43.187 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:43.187 EAL: Ask a virtual area of 0x2e000 bytes 00:03:43.187 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x10003eb000) not respected! 00:03:43.187 EAL: This may cause issues with mapping memory into secondary processes 00:03:43.187 EAL: Virtual area found at 0x10003eb000 (size = 0x2e000) 00:03:43.187 EAL: Setting up physically contiguous memory... 00:03:43.187 EAL: Ask a virtual area of 0x1000 bytes 00:03:43.187 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x1000e30000) not respected! 00:03:43.187 EAL: This may cause issues with mapping memory into secondary processes 00:03:43.187 EAL: Virtual area found at 0x1000e30000 (size = 0x1000) 00:03:43.187 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:03:43.187 EAL: Ask a virtual area of 0xf0000000 bytes 00:03:43.187 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:03:43.187 EAL: This may cause issues with mapping memory into secondary processes 00:03:43.187 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:03:43.187 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:03:43.187 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x200000000, len 268435456 00:03:43.187 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x210000000, len 268435456 00:03:43.445 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x220000000, len 268435456 00:03:43.445 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x230000000, len 268435456 00:03:43.445 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x240000000, len 268435456 00:03:43.704 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x250000000, len 268435456 00:03:43.704 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x260000000, len 268435456 00:03:43.962 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x270000000, len 268435456 00:03:43.962 EAL: No shared files mode enabled, IPC is disabled 00:03:43.962 EAL: Added 2048M to heap on socket 0 00:03:43.963 EAL: TSC is not safe to use in SMP mode 00:03:43.963 EAL: TSC is not invariant 00:03:43.963 EAL: TSC frequency is ~2100001 KHz 00:03:43.963 EAL: Main lcore 0 is ready (tid=82d26a000;cpuset=[0]) 00:03:43.963 EAL: PCI scan found 10 devices 00:03:43.963 EAL: Registering mem event callbacks not supported 00:03:43.963 00:03:43.963 00:03:43.963 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.963 http://cunit.sourceforge.net/ 00:03:43.963 00:03:43.963 00:03:43.963 Suite: components_suite 00:03:43.963 Test: vtophys_malloc_test ...passed 00:03:44.530 Test: vtophys_spdk_malloc_test ...passed 00:03:44.530 00:03:44.530 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.530 suites 1 1 n/a 0 0 00:03:44.530 tests 2 2 2 0 0 00:03:44.530 asserts 497 497 497 0 n/a 00:03:44.530 00:03:44.530 Elapsed time = 0.578 seconds 00:03:44.530 00:03:44.530 real 0m1.472s 00:03:44.530 user 0m0.585s 00:03:44.530 sys 0m0.885s 00:03:44.530 ************************************ 00:03:44.530 END TEST env_vtophys 00:03:44.530 ************************************ 00:03:44.530 19:04:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:44.530 19:04:21 -- common/autotest_common.sh@10 -- # set +x 00:03:44.530 19:04:21 -- env/env.sh@12 -- # run_test env_pci /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:44.530 19:04:21 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:44.530 19:04:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:44.530 19:04:21 -- common/autotest_common.sh@10 -- # set +x 00:03:44.530 ************************************ 00:03:44.530 START TEST env_pci 00:03:44.530 ************************************ 00:03:44.530 19:04:21 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:44.530 00:03:44.530 00:03:44.530 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.530 http://cunit.sourceforge.net/ 00:03:44.530 00:03:44.530 00:03:44.530 Suite: pci 00:03:44.530 Test: pci_hook ...passed 00:03:44.530 00:03:44.530 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.530 suites 1 1 n/a 0 0 00:03:44.530 tests 1 1 1 0 0 00:03:44.530 asserts 25 25 25 0 n/a 00:03:44.530 00:03:44.530 Elapsed time = 0.000 seconds 00:03:44.530 EAL: Cannot find device (10000:00:01.0) 00:03:44.530 EAL: Failed to attach device on primary process 00:03:44.530 00:03:44.530 real 0m0.011s 00:03:44.530 user 0m0.001s 00:03:44.530 sys 
0m0.010s 00:03:44.530 19:04:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:44.530 ************************************ 00:03:44.530 END TEST env_pci 00:03:44.530 ************************************ 00:03:44.530 19:04:21 -- common/autotest_common.sh@10 -- # set +x 00:03:44.530 19:04:21 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:44.530 19:04:21 -- env/env.sh@15 -- # uname 00:03:44.530 19:04:21 -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:03:44.530 19:04:21 -- env/env.sh@24 -- # run_test env_dpdk_post_init /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:44.530 19:04:21 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:03:44.530 19:04:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:44.530 19:04:21 -- common/autotest_common.sh@10 -- # set +x 00:03:44.530 ************************************ 00:03:44.530 START TEST env_dpdk_post_init 00:03:44.530 ************************************ 00:03:44.530 19:04:21 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:44.530 EAL: Sysctl reports 10 cpus 00:03:44.530 EAL: Detected CPU lcores: 10 00:03:44.530 EAL: Detected NUMA nodes: 1 00:03:44.530 EAL: Detected static linkage of DPDK 00:03:44.530 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:44.530 EAL: Selected IOVA mode 'PA' 00:03:44.530 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:44.788 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x200000000, len 268435456 00:03:44.788 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x210000000, len 268435456 00:03:44.788 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x220000000, len 268435456 00:03:45.047 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x230000000, len 268435456 00:03:45.047 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x240000000, len 268435456 00:03:45.047 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x250000000, len 268435456 00:03:45.306 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x260000000, len 268435456 00:03:45.306 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x270000000, len 268435456 00:03:45.306 EAL: TSC is not safe to use in SMP mode 00:03:45.306 EAL: TSC is not invariant 00:03:45.306 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:45.306 [2024-02-14 19:04:22.629418] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:45.306 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:03:45.306 Starting DPDK initialization... 00:03:45.306 Starting SPDK post initialization... 00:03:45.306 SPDK NVMe probe 00:03:45.306 Attaching to 0000:00:06.0 00:03:45.306 Attached to 0000:00:06.0 00:03:45.306 Cleaning up... 
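The EAL output above lists the contigmem segments as virtual-to-physical pairs ("Mapped memory segment N @ <vaddr>: physaddr:<paddr>, len 268435456"), and the vtophys test essentially confirms that translating an address inside a segment is the physical base plus the offset into that segment. A standalone sketch of that lookup using the segment table from this log follows; it is illustrative only, since in SPDK the translation is performed by spdk_vtophys() against EAL's memseg lists.

/* Translate a virtual address to a physical address by finding the mapped
 * segment that contains it and applying the offset. Segment values are taken
 * from the EAL output above (8 segments of 256 MB each). */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct memseg {
        uint64_t vaddr;
        uint64_t paddr;
        uint64_t len;
};

static const struct memseg segs[] = {
        { 0x1060000000, 0x200000000, 268435456 },
        { 0x1070000000, 0x210000000, 268435456 },
        { 0x1080000000, 0x220000000, 268435456 },
        { 0x1090000000, 0x230000000, 268435456 },
        { 0x10a0000000, 0x240000000, 268435456 },
        { 0x10b0000000, 0x250000000, 268435456 },
        { 0x10c0000000, 0x260000000, 268435456 },
        { 0x10d0000000, 0x270000000, 268435456 },
};

static uint64_t vtophys(uint64_t vaddr)
{
        for (size_t i = 0; i < sizeof(segs) / sizeof(segs[0]); i++) {
                if (vaddr >= segs[i].vaddr && vaddr < segs[i].vaddr + segs[i].len) {
                        return segs[i].paddr + (vaddr - segs[i].vaddr);
                }
        }
        return UINT64_MAX;      /* not a DPDK-mapped address */
}

int main(void)
{
        printf("0x%jx\n", (uintmax_t)vtophys(0x1060001000));    /* -> 0x200001000 */
        return 0;
}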
00:03:45.306 00:03:45.306 real 0m0.797s 00:03:45.306 user 0m0.016s 00:03:45.306 sys 0m0.777s 00:03:45.306 ************************************ 00:03:45.306 END TEST env_dpdk_post_init 00:03:45.306 ************************************ 00:03:45.306 19:04:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:45.306 19:04:22 -- common/autotest_common.sh@10 -- # set +x 00:03:45.565 19:04:22 -- env/env.sh@26 -- # uname 00:03:45.565 19:04:22 -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:03:45.565 ************************************ 00:03:45.565 END TEST env 00:03:45.565 ************************************ 00:03:45.565 00:03:45.565 real 0m2.833s 00:03:45.565 user 0m0.829s 00:03:45.565 sys 0m2.080s 00:03:45.565 19:04:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:45.565 19:04:22 -- common/autotest_common.sh@10 -- # set +x 00:03:45.565 19:04:22 -- spdk/autotest.sh@176 -- # run_test rpc /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:45.565 19:04:22 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:45.565 19:04:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:45.565 19:04:22 -- common/autotest_common.sh@10 -- # set +x 00:03:45.565 ************************************ 00:03:45.565 START TEST rpc 00:03:45.565 ************************************ 00:03:45.565 19:04:22 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:45.565 * Looking for test storage... 00:03:45.565 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc 00:03:45.565 19:04:22 -- rpc/rpc.sh@65 -- # spdk_pid=46340 00:03:45.565 19:04:22 -- rpc/rpc.sh@64 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:45.565 19:04:22 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:45.565 19:04:22 -- rpc/rpc.sh@67 -- # waitforlisten 46340 00:03:45.565 19:04:22 -- common/autotest_common.sh@817 -- # '[' -z 46340 ']' 00:03:45.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.565 19:04:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.565 19:04:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:45.565 19:04:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.565 19:04:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:45.565 19:04:22 -- common/autotest_common.sh@10 -- # set +x 00:03:45.565 [2024-02-14 19:04:22.959691] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:03:45.565 [2024-02-14 19:04:22.959962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:46.502 EAL: TSC is not safe to use in SMP mode 00:03:46.502 EAL: TSC is not invariant 00:03:46.502 [2024-02-14 19:04:23.718829] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.502 [2024-02-14 19:04:23.848662] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:46.502 [2024-02-14 19:04:23.848802] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:46.502 [2024-02-14 19:04:23.848817] app.c: 490:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 46340' to capture a snapshot of events at runtime. 
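waitforlisten in the rpc test essentially polls until the spdk_tgt process started above accepts connections on /var/tmp/spdk.sock. A bare-bones C version of that readiness probe is sketched below; it is illustrative only, the real helper is a shell function in the test scripts, and the socket path is taken from the log above.

/* Minimal readiness probe: try to connect() to the target's UNIX domain RPC
 * socket; success means the process is up and listening. */
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int is_listening(const char *path)
{
        struct sockaddr_un addr;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        int rc;

        if (fd < 0) {
                return 0;
        }
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
        close(fd);
        return rc == 0;
}

int main(void)
{
        return is_listening("/var/tmp/spdk.sock") ? 0 : 1;
}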
00:03:46.502 [2024-02-14 19:04:23.848851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.761 19:04:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:46.761 19:04:23 -- common/autotest_common.sh@850 -- # return 0 00:03:46.761 19:04:23 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:03:46.761 19:04:23 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:03:46.761 19:04:23 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:46.761 19:04:23 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:46.761 19:04:23 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:46.761 19:04:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:46.761 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.761 ************************************ 00:03:46.761 START TEST rpc_integrity 00:03:46.761 ************************************ 00:03:46.761 19:04:23 -- common/autotest_common.sh@1102 -- # rpc_integrity 00:03:46.761 19:04:23 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:46.761 19:04:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.761 19:04:23 -- common/autotest_common.sh@10 -- # set +x 00:03:46.761 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.761 19:04:24 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:46.761 19:04:24 -- rpc/rpc.sh@13 -- # jq length 00:03:46.761 19:04:24 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:46.761 19:04:24 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:46.761 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.761 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:46.761 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.761 19:04:24 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:46.761 19:04:24 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:46.761 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.761 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:46.761 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.761 19:04:24 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:46.762 { 00:03:46.762 "name": "Malloc0", 00:03:46.762 "aliases": [ 00:03:46.762 "ddec550e-cb6b-11ee-af6b-4feeebbbadda" 00:03:46.762 ], 00:03:46.762 "product_name": "Malloc disk", 00:03:46.762 "block_size": 512, 00:03:46.762 "num_blocks": 16384, 00:03:46.762 "uuid": "ddec550e-cb6b-11ee-af6b-4feeebbbadda", 00:03:46.762 "assigned_rate_limits": { 00:03:46.762 "rw_ios_per_sec": 0, 00:03:46.762 "rw_mbytes_per_sec": 0, 00:03:46.762 "r_mbytes_per_sec": 0, 00:03:46.762 "w_mbytes_per_sec": 0 00:03:46.762 }, 00:03:46.762 "claimed": false, 00:03:46.762 "zoned": false, 00:03:46.762 "supported_io_types": { 00:03:46.762 "read": true, 00:03:46.762 "write": true, 00:03:46.762 "unmap": true, 00:03:46.762 "write_zeroes": true, 00:03:46.762 "flush": true, 00:03:46.762 "reset": true, 00:03:46.762 "compare": false, 00:03:46.762 "compare_and_write": false, 00:03:46.762 "abort": true, 00:03:46.762 "nvme_admin": false, 00:03:46.762 "nvme_io": false 00:03:46.762 }, 00:03:46.762 "memory_domains": [ 00:03:46.762 { 00:03:46.762 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:03:46.762 "dma_device_type": 2 00:03:46.762 } 00:03:46.762 ], 00:03:46.762 "driver_specific": {} 00:03:46.762 } 00:03:46.762 ]' 00:03:46.762 19:04:24 -- rpc/rpc.sh@17 -- # jq length 00:03:46.762 19:04:24 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:46.762 19:04:24 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:46.762 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.762 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:46.762 [2024-02-14 19:04:24.061605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:46.762 [2024-02-14 19:04:24.061651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:46.762 [2024-02-14 19:04:24.062216] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b70e780 00:03:46.762 [2024-02-14 19:04:24.062234] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:46.762 [2024-02-14 19:04:24.063213] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:46.762 [2024-02-14 19:04:24.063239] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:46.762 Passthru0 00:03:46.762 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.762 19:04:24 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:46.762 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.762 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:46.762 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.762 19:04:24 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:46.762 { 00:03:46.762 "name": "Malloc0", 00:03:46.762 "aliases": [ 00:03:46.762 "ddec550e-cb6b-11ee-af6b-4feeebbbadda" 00:03:46.762 ], 00:03:46.762 "product_name": "Malloc disk", 00:03:46.762 "block_size": 512, 00:03:46.762 "num_blocks": 16384, 00:03:46.762 "uuid": "ddec550e-cb6b-11ee-af6b-4feeebbbadda", 00:03:46.762 "assigned_rate_limits": { 00:03:46.762 "rw_ios_per_sec": 0, 00:03:46.762 "rw_mbytes_per_sec": 0, 00:03:46.762 "r_mbytes_per_sec": 0, 00:03:46.762 "w_mbytes_per_sec": 0 00:03:46.762 }, 00:03:46.762 "claimed": true, 00:03:46.762 "claim_type": "exclusive_write", 00:03:46.762 "zoned": false, 00:03:46.762 "supported_io_types": { 00:03:46.762 "read": true, 00:03:46.762 "write": true, 00:03:46.762 "unmap": true, 00:03:46.762 "write_zeroes": true, 00:03:46.762 "flush": true, 00:03:46.762 "reset": true, 00:03:46.762 "compare": false, 00:03:46.762 "compare_and_write": false, 00:03:46.762 "abort": true, 00:03:46.762 "nvme_admin": false, 00:03:46.762 "nvme_io": false 00:03:46.762 }, 00:03:46.762 "memory_domains": [ 00:03:46.762 { 00:03:46.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.762 "dma_device_type": 2 00:03:46.762 } 00:03:46.762 ], 00:03:46.762 "driver_specific": {} 00:03:46.762 }, 00:03:46.762 { 00:03:46.762 "name": "Passthru0", 00:03:46.762 "aliases": [ 00:03:46.762 "d8e257f8-f465-dd54-8d0e-5398c4b1c322" 00:03:46.762 ], 00:03:46.762 "product_name": "passthru", 00:03:46.762 "block_size": 512, 00:03:46.762 "num_blocks": 16384, 00:03:46.762 "uuid": "d8e257f8-f465-dd54-8d0e-5398c4b1c322", 00:03:46.762 "assigned_rate_limits": { 00:03:46.762 "rw_ios_per_sec": 0, 00:03:46.762 "rw_mbytes_per_sec": 0, 00:03:46.762 "r_mbytes_per_sec": 0, 00:03:46.762 "w_mbytes_per_sec": 0 00:03:46.762 }, 00:03:46.762 "claimed": false, 00:03:46.762 "zoned": false, 00:03:46.762 "supported_io_types": { 00:03:46.762 "read": true, 00:03:46.762 "write": true, 
00:03:46.762 "unmap": true, 00:03:46.762 "write_zeroes": true, 00:03:46.762 "flush": true, 00:03:46.762 "reset": true, 00:03:46.762 "compare": false, 00:03:46.762 "compare_and_write": false, 00:03:46.762 "abort": true, 00:03:46.762 "nvme_admin": false, 00:03:46.762 "nvme_io": false 00:03:46.762 }, 00:03:46.762 "memory_domains": [ 00:03:46.762 { 00:03:46.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:46.762 "dma_device_type": 2 00:03:46.762 } 00:03:46.762 ], 00:03:46.762 "driver_specific": { 00:03:46.762 "passthru": { 00:03:46.762 "name": "Passthru0", 00:03:46.762 "base_bdev_name": "Malloc0" 00:03:46.762 } 00:03:46.762 } 00:03:46.762 } 00:03:46.762 ]' 00:03:46.762 19:04:24 -- rpc/rpc.sh@21 -- # jq length 00:03:46.762 19:04:24 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:46.762 19:04:24 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:46.762 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.762 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:46.762 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.762 19:04:24 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:46.762 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.762 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:46.762 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.762 19:04:24 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:46.762 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:46.762 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:46.762 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:46.762 19:04:24 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:46.762 19:04:24 -- rpc/rpc.sh@26 -- # jq length 00:03:46.762 19:04:24 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:46.762 00:03:46.762 real 0m0.146s 00:03:46.762 user 0m0.029s 00:03:46.762 sys 0m0.050s 00:03:46.762 19:04:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:46.762 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:46.762 ************************************ 00:03:46.762 END TEST rpc_integrity 00:03:46.762 ************************************ 00:03:47.021 19:04:24 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:47.021 19:04:24 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:47.021 19:04:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:47.021 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.021 ************************************ 00:03:47.021 START TEST rpc_plugins 00:03:47.021 ************************************ 00:03:47.021 19:04:24 -- common/autotest_common.sh@1102 -- # rpc_plugins 00:03:47.021 19:04:24 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:47.021 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.021 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.021 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.021 19:04:24 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:47.021 19:04:24 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:47.021 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.021 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.021 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.021 19:04:24 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:47.021 { 00:03:47.021 "name": "Malloc1", 00:03:47.021 "aliases": [ 00:03:47.021 "de072f3a-cb6b-11ee-af6b-4feeebbbadda" 00:03:47.021 ], 00:03:47.021 "product_name": 
"Malloc disk", 00:03:47.021 "block_size": 4096, 00:03:47.021 "num_blocks": 256, 00:03:47.021 "uuid": "de072f3a-cb6b-11ee-af6b-4feeebbbadda", 00:03:47.021 "assigned_rate_limits": { 00:03:47.021 "rw_ios_per_sec": 0, 00:03:47.021 "rw_mbytes_per_sec": 0, 00:03:47.021 "r_mbytes_per_sec": 0, 00:03:47.021 "w_mbytes_per_sec": 0 00:03:47.021 }, 00:03:47.021 "claimed": false, 00:03:47.021 "zoned": false, 00:03:47.021 "supported_io_types": { 00:03:47.021 "read": true, 00:03:47.021 "write": true, 00:03:47.021 "unmap": true, 00:03:47.021 "write_zeroes": true, 00:03:47.021 "flush": true, 00:03:47.021 "reset": true, 00:03:47.021 "compare": false, 00:03:47.021 "compare_and_write": false, 00:03:47.021 "abort": true, 00:03:47.021 "nvme_admin": false, 00:03:47.021 "nvme_io": false 00:03:47.021 }, 00:03:47.021 "memory_domains": [ 00:03:47.021 { 00:03:47.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.021 "dma_device_type": 2 00:03:47.021 } 00:03:47.021 ], 00:03:47.021 "driver_specific": {} 00:03:47.021 } 00:03:47.021 ]' 00:03:47.021 19:04:24 -- rpc/rpc.sh@32 -- # jq length 00:03:47.021 19:04:24 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:47.021 19:04:24 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:47.021 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.021 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.021 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.021 19:04:24 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:47.021 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.021 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.021 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.021 19:04:24 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:47.021 19:04:24 -- rpc/rpc.sh@36 -- # jq length 00:03:47.021 19:04:24 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:47.021 00:03:47.021 real 0m0.067s 00:03:47.021 user 0m0.032s 00:03:47.021 sys 0m0.003s 00:03:47.021 ************************************ 00:03:47.021 END TEST rpc_plugins 00:03:47.021 ************************************ 00:03:47.021 19:04:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:47.021 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.021 19:04:24 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:47.021 19:04:24 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:47.021 19:04:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:47.021 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.021 ************************************ 00:03:47.021 START TEST rpc_trace_cmd_test 00:03:47.021 ************************************ 00:03:47.021 19:04:24 -- common/autotest_common.sh@1102 -- # rpc_trace_cmd_test 00:03:47.021 19:04:24 -- rpc/rpc.sh@40 -- # local info 00:03:47.021 19:04:24 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:47.021 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.021 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.021 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.021 19:04:24 -- rpc/rpc.sh@42 -- # info='{ 00:03:47.021 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid46340", 00:03:47.021 "tpoint_group_mask": "0x8", 00:03:47.021 "iscsi_conn": { 00:03:47.021 "mask": "0x2", 00:03:47.021 "tpoint_mask": "0x0" 00:03:47.021 }, 00:03:47.021 "scsi": { 00:03:47.021 "mask": "0x4", 00:03:47.021 "tpoint_mask": "0x0" 00:03:47.021 }, 00:03:47.021 "bdev": { 00:03:47.021 "mask": "0x8", 
00:03:47.021 "tpoint_mask": "0xffffffffffffffff" 00:03:47.021 }, 00:03:47.021 "nvmf_rdma": { 00:03:47.021 "mask": "0x10", 00:03:47.021 "tpoint_mask": "0x0" 00:03:47.021 }, 00:03:47.021 "nvmf_tcp": { 00:03:47.021 "mask": "0x20", 00:03:47.021 "tpoint_mask": "0x0" 00:03:47.021 }, 00:03:47.021 "blobfs": { 00:03:47.021 "mask": "0x80", 00:03:47.021 "tpoint_mask": "0x0" 00:03:47.021 }, 00:03:47.021 "dsa": { 00:03:47.021 "mask": "0x200", 00:03:47.021 "tpoint_mask": "0x0" 00:03:47.021 }, 00:03:47.022 "thread": { 00:03:47.022 "mask": "0x400", 00:03:47.022 "tpoint_mask": "0x0" 00:03:47.022 }, 00:03:47.022 "nvme_pcie": { 00:03:47.022 "mask": "0x800", 00:03:47.022 "tpoint_mask": "0x0" 00:03:47.022 }, 00:03:47.022 "iaa": { 00:03:47.022 "mask": "0x1000", 00:03:47.022 "tpoint_mask": "0x0" 00:03:47.022 }, 00:03:47.022 "nvme_tcp": { 00:03:47.022 "mask": "0x2000", 00:03:47.022 "tpoint_mask": "0x0" 00:03:47.022 }, 00:03:47.022 "bdev_nvme": { 00:03:47.022 "mask": "0x4000", 00:03:47.022 "tpoint_mask": "0x0" 00:03:47.022 } 00:03:47.022 }' 00:03:47.022 19:04:24 -- rpc/rpc.sh@43 -- # jq length 00:03:47.022 19:04:24 -- rpc/rpc.sh@43 -- # '[' 14 -gt 2 ']' 00:03:47.022 19:04:24 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:47.022 19:04:24 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:47.022 19:04:24 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:47.022 19:04:24 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:47.022 19:04:24 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:47.022 19:04:24 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:47.022 19:04:24 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:47.022 19:04:24 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:47.022 00:03:47.022 real 0m0.058s 00:03:47.022 user 0m0.033s 00:03:47.022 sys 0m0.018s 00:03:47.022 19:04:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:47.022 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.022 ************************************ 00:03:47.022 END TEST rpc_trace_cmd_test 00:03:47.022 ************************************ 00:03:47.022 19:04:24 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:47.022 19:04:24 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:47.022 19:04:24 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:47.022 19:04:24 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:47.022 19:04:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:47.022 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.022 ************************************ 00:03:47.022 START TEST rpc_daemon_integrity 00:03:47.022 ************************************ 00:03:47.022 19:04:24 -- common/autotest_common.sh@1102 -- # rpc_integrity 00:03:47.022 19:04:24 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.022 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.022 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.022 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.022 19:04:24 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.022 19:04:24 -- rpc/rpc.sh@13 -- # jq length 00:03:47.022 19:04:24 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.022 19:04:24 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.022 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.022 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.280 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.280 19:04:24 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:47.280 19:04:24 -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:03:47.280 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.280 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.280 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.280 19:04:24 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.280 { 00:03:47.281 "name": "Malloc2", 00:03:47.281 "aliases": [ 00:03:47.281 "de2a9616-cb6b-11ee-af6b-4feeebbbadda" 00:03:47.281 ], 00:03:47.281 "product_name": "Malloc disk", 00:03:47.281 "block_size": 512, 00:03:47.281 "num_blocks": 16384, 00:03:47.281 "uuid": "de2a9616-cb6b-11ee-af6b-4feeebbbadda", 00:03:47.281 "assigned_rate_limits": { 00:03:47.281 "rw_ios_per_sec": 0, 00:03:47.281 "rw_mbytes_per_sec": 0, 00:03:47.281 "r_mbytes_per_sec": 0, 00:03:47.281 "w_mbytes_per_sec": 0 00:03:47.281 }, 00:03:47.281 "claimed": false, 00:03:47.281 "zoned": false, 00:03:47.281 "supported_io_types": { 00:03:47.281 "read": true, 00:03:47.281 "write": true, 00:03:47.281 "unmap": true, 00:03:47.281 "write_zeroes": true, 00:03:47.281 "flush": true, 00:03:47.281 "reset": true, 00:03:47.281 "compare": false, 00:03:47.281 "compare_and_write": false, 00:03:47.281 "abort": true, 00:03:47.281 "nvme_admin": false, 00:03:47.281 "nvme_io": false 00:03:47.281 }, 00:03:47.281 "memory_domains": [ 00:03:47.281 { 00:03:47.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.281 "dma_device_type": 2 00:03:47.281 } 00:03:47.281 ], 00:03:47.281 "driver_specific": {} 00:03:47.281 } 00:03:47.281 ]' 00:03:47.281 19:04:24 -- rpc/rpc.sh@17 -- # jq length 00:03:47.281 19:04:24 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:47.281 19:04:24 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:47.281 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.281 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.281 [2024-02-14 19:04:24.473624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:47.281 [2024-02-14 19:04:24.473670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:47.281 [2024-02-14 19:04:24.473701] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b70e780 00:03:47.281 [2024-02-14 19:04:24.473708] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:47.281 [2024-02-14 19:04:24.474387] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:47.281 [2024-02-14 19:04:24.474418] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:47.281 Passthru0 00:03:47.281 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.281 19:04:24 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:47.281 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.281 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.281 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.281 19:04:24 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:47.281 { 00:03:47.281 "name": "Malloc2", 00:03:47.281 "aliases": [ 00:03:47.281 "de2a9616-cb6b-11ee-af6b-4feeebbbadda" 00:03:47.281 ], 00:03:47.281 "product_name": "Malloc disk", 00:03:47.281 "block_size": 512, 00:03:47.281 "num_blocks": 16384, 00:03:47.281 "uuid": "de2a9616-cb6b-11ee-af6b-4feeebbbadda", 00:03:47.281 "assigned_rate_limits": { 00:03:47.281 "rw_ios_per_sec": 0, 00:03:47.281 "rw_mbytes_per_sec": 0, 00:03:47.281 "r_mbytes_per_sec": 0, 00:03:47.281 "w_mbytes_per_sec": 0 00:03:47.281 }, 00:03:47.281 "claimed": true, 00:03:47.281 
"claim_type": "exclusive_write", 00:03:47.281 "zoned": false, 00:03:47.281 "supported_io_types": { 00:03:47.281 "read": true, 00:03:47.281 "write": true, 00:03:47.281 "unmap": true, 00:03:47.281 "write_zeroes": true, 00:03:47.281 "flush": true, 00:03:47.281 "reset": true, 00:03:47.281 "compare": false, 00:03:47.281 "compare_and_write": false, 00:03:47.281 "abort": true, 00:03:47.281 "nvme_admin": false, 00:03:47.281 "nvme_io": false 00:03:47.281 }, 00:03:47.281 "memory_domains": [ 00:03:47.281 { 00:03:47.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.281 "dma_device_type": 2 00:03:47.281 } 00:03:47.281 ], 00:03:47.281 "driver_specific": {} 00:03:47.281 }, 00:03:47.281 { 00:03:47.281 "name": "Passthru0", 00:03:47.281 "aliases": [ 00:03:47.281 "4813d902-f849-d35d-adde-e056753e5d9f" 00:03:47.281 ], 00:03:47.281 "product_name": "passthru", 00:03:47.281 "block_size": 512, 00:03:47.281 "num_blocks": 16384, 00:03:47.281 "uuid": "4813d902-f849-d35d-adde-e056753e5d9f", 00:03:47.281 "assigned_rate_limits": { 00:03:47.281 "rw_ios_per_sec": 0, 00:03:47.281 "rw_mbytes_per_sec": 0, 00:03:47.281 "r_mbytes_per_sec": 0, 00:03:47.281 "w_mbytes_per_sec": 0 00:03:47.281 }, 00:03:47.281 "claimed": false, 00:03:47.281 "zoned": false, 00:03:47.281 "supported_io_types": { 00:03:47.281 "read": true, 00:03:47.281 "write": true, 00:03:47.281 "unmap": true, 00:03:47.281 "write_zeroes": true, 00:03:47.281 "flush": true, 00:03:47.281 "reset": true, 00:03:47.281 "compare": false, 00:03:47.281 "compare_and_write": false, 00:03:47.281 "abort": true, 00:03:47.281 "nvme_admin": false, 00:03:47.281 "nvme_io": false 00:03:47.281 }, 00:03:47.281 "memory_domains": [ 00:03:47.281 { 00:03:47.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.281 "dma_device_type": 2 00:03:47.281 } 00:03:47.281 ], 00:03:47.281 "driver_specific": { 00:03:47.281 "passthru": { 00:03:47.281 "name": "Passthru0", 00:03:47.281 "base_bdev_name": "Malloc2" 00:03:47.281 } 00:03:47.281 } 00:03:47.281 } 00:03:47.281 ]' 00:03:47.281 19:04:24 -- rpc/rpc.sh@21 -- # jq length 00:03:47.281 19:04:24 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.281 19:04:24 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.281 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.281 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.281 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.281 19:04:24 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:47.281 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.281 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.281 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.281 19:04:24 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:47.281 19:04:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:47.281 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.281 19:04:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:47.281 19:04:24 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.281 19:04:24 -- rpc/rpc.sh@26 -- # jq length 00:03:47.281 19:04:24 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.281 00:03:47.281 real 0m0.142s 00:03:47.281 user 0m0.037s 00:03:47.281 sys 0m0.040s 00:03:47.281 19:04:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:47.281 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.281 ************************************ 00:03:47.281 END TEST rpc_daemon_integrity 00:03:47.281 ************************************ 00:03:47.281 19:04:24 -- 
rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:47.281 19:04:24 -- rpc/rpc.sh@84 -- # killprocess 46340 00:03:47.281 19:04:24 -- common/autotest_common.sh@924 -- # '[' -z 46340 ']' 00:03:47.281 19:04:24 -- common/autotest_common.sh@928 -- # kill -0 46340 00:03:47.281 19:04:24 -- common/autotest_common.sh@929 -- # uname 00:03:47.281 19:04:24 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:03:47.281 19:04:24 -- common/autotest_common.sh@932 -- # ps -c -o command 46340 00:03:47.281 19:04:24 -- common/autotest_common.sh@932 -- # tail -1 00:03:47.281 19:04:24 -- common/autotest_common.sh@932 -- # process_name=spdk_tgt 00:03:47.281 19:04:24 -- common/autotest_common.sh@934 -- # '[' spdk_tgt = sudo ']' 00:03:47.281 killing process with pid 46340 00:03:47.281 19:04:24 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 46340' 00:03:47.281 19:04:24 -- common/autotest_common.sh@943 -- # kill 46340 00:03:47.281 19:04:24 -- common/autotest_common.sh@948 -- # wait 46340 00:03:47.542 00:03:47.542 real 0m2.173s 00:03:47.542 user 0m1.919s 00:03:47.542 sys 0m1.251s 00:03:47.542 19:04:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:47.542 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.542 ************************************ 00:03:47.542 END TEST rpc 00:03:47.542 ************************************ 00:03:47.800 19:04:24 -- spdk/autotest.sh@177 -- # run_test rpc_client /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:47.800 19:04:24 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:47.800 19:04:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:47.800 19:04:24 -- common/autotest_common.sh@10 -- # set +x 00:03:47.800 ************************************ 00:03:47.800 START TEST rpc_client 00:03:47.800 ************************************ 00:03:47.800 19:04:24 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:47.800 * Looking for test storage... 
00:03:47.800 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc_client 00:03:47.800 19:04:25 -- rpc_client/rpc_client.sh@10 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:03:47.800 OK 00:03:47.800 19:04:25 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:47.800 00:03:47.800 real 0m0.167s 00:03:47.800 user 0m0.142s 00:03:47.800 sys 0m0.103s 00:03:47.800 19:04:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:47.800 19:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:47.800 ************************************ 00:03:47.800 END TEST rpc_client 00:03:47.800 ************************************ 00:03:47.800 19:04:25 -- spdk/autotest.sh@178 -- # run_test json_config /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:47.800 19:04:25 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:47.800 19:04:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:47.800 19:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:47.800 ************************************ 00:03:47.800 START TEST json_config 00:03:47.800 ************************************ 00:03:47.800 19:04:25 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:48.059 19:04:25 -- json_config/json_config.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:48.059 19:04:25 -- nvmf/common.sh@7 -- # uname -s 00:03:48.059 19:04:25 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:48.059 19:04:25 -- nvmf/common.sh@7 -- # return 0 00:03:48.059 19:04:25 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:03:48.059 19:04:25 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:03:48.059 19:04:25 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:03:48.059 19:04:25 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:48.059 19:04:25 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:03:48.059 19:04:25 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:03:48.059 19:04:25 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:48.059 19:04:25 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:03:48.059 19:04:25 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:48.059 19:04:25 -- json_config/json_config.sh@32 -- # declare -A app_params 00:03:48.059 19:04:25 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:03:48.059 19:04:25 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:03:48.059 19:04:25 -- json_config/json_config.sh@43 -- # last_event_id=0 00:03:48.059 19:04:25 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:48.059 INFO: JSON configuration test init 00:03:48.059 19:04:25 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:03:48.059 19:04:25 -- json_config/json_config.sh@420 -- # json_config_test_init 00:03:48.059 19:04:25 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:03:48.059 19:04:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:48.059 
19:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:48.059 19:04:25 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:03:48.059 19:04:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:48.059 19:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:48.059 19:04:25 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:03:48.059 19:04:25 -- json_config/json_config.sh@98 -- # local app=target 00:03:48.059 19:04:25 -- json_config/json_config.sh@99 -- # shift 00:03:48.059 19:04:25 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:48.059 19:04:25 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:48.059 19:04:25 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:48.059 19:04:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:48.059 19:04:25 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:48.059 19:04:25 -- json_config/json_config.sh@111 -- # app_pid[$app]=46547 00:03:48.059 19:04:25 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:48.059 Waiting for target to run... 00:03:48.059 19:04:25 -- json_config/json_config.sh@114 -- # waitforlisten 46547 /var/tmp/spdk_tgt.sock 00:03:48.059 19:04:25 -- common/autotest_common.sh@817 -- # '[' -z 46547 ']' 00:03:48.059 19:04:25 -- json_config/json_config.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:48.059 19:04:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:48.059 19:04:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:48.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:48.059 19:04:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:48.059 19:04:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:48.059 19:04:25 -- common/autotest_common.sh@10 -- # set +x 00:03:48.059 [2024-02-14 19:04:25.386682] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:03:48.059 [2024-02-14 19:04:25.386959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:48.627 EAL: TSC is not safe to use in SMP mode 00:03:48.627 EAL: TSC is not invariant 00:03:48.627 [2024-02-14 19:04:25.771886] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.627 [2024-02-14 19:04:25.882959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:48.627 [2024-02-14 19:04:25.883088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.196 19:04:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:49.196 19:04:26 -- common/autotest_common.sh@850 -- # return 0 00:03:49.196 00:03:49.196 19:04:26 -- json_config/json_config.sh@115 -- # echo '' 00:03:49.196 19:04:26 -- json_config/json_config.sh@322 -- # create_accel_config 00:03:49.196 19:04:26 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:03:49.196 19:04:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:49.196 19:04:26 -- common/autotest_common.sh@10 -- # set +x 00:03:49.196 19:04:26 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:03:49.196 19:04:26 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:03:49.196 19:04:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:49.196 19:04:26 -- common/autotest_common.sh@10 -- # set +x 00:03:49.196 19:04:26 -- json_config/json_config.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:49.196 19:04:26 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:03:49.196 19:04:26 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:49.454 [2024-02-14 19:04:26.683235] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:49.454 19:04:26 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:03:49.454 19:04:26 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:03:49.454 19:04:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:49.454 19:04:26 -- common/autotest_common.sh@10 -- # set +x 00:03:49.454 19:04:26 -- json_config/json_config.sh@48 -- # local ret=0 00:03:49.454 19:04:26 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:49.454 19:04:26 -- json_config/json_config.sh@49 -- # local enabled_types 00:03:49.454 19:04:26 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:49.454 19:04:26 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:49.454 19:04:26 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:49.713 19:04:27 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:49.713 19:04:27 -- json_config/json_config.sh@51 -- # local get_types 00:03:49.713 19:04:27 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:49.713 19:04:27 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:03:49.713 19:04:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:49.713 19:04:27 -- common/autotest_common.sh@10 -- # set +x 00:03:49.713 19:04:27 -- json_config/json_config.sh@58 -- # return 0 
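At this point json_config_test_init has confirmed that the target only advertises the bdev_register and bdev_unregister notification types; the create_bdev_subsystem_config step below then creates the bdevs and compares the notifications the target recorded against the expected list. A hand-run sketch of the two queries involved — offered as an illustration, assuming the test target is still listening on /var/tmp/spdk_tgt.sock as configured above:

# notification types enabled on the target (expected: bdev_register, bdev_unregister)
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'
# every notification recorded since event id 0, rendered as type:ctx:id, matching the get_notifications helper below
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 | jq -r '.[] | "\(.type):\(.ctx):\(.id)"'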
00:03:49.713 19:04:27 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:03:49.713 19:04:27 -- json_config/json_config.sh@332 -- # create_bdev_subsystem_config 00:03:49.713 19:04:27 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:03:49.713 19:04:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:49.713 19:04:27 -- common/autotest_common.sh@10 -- # set +x 00:03:49.713 19:04:27 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:03:49.713 19:04:27 -- json_config/json_config.sh@160 -- # local expected_notifications 00:03:49.713 19:04:27 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:03:49.713 19:04:27 -- json_config/json_config.sh@164 -- # get_notifications 00:03:49.713 19:04:27 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:03:49.713 19:04:27 -- json_config/json_config.sh@64 -- # IFS=: 00:03:49.713 19:04:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:49.713 19:04:27 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:03:49.713 19:04:27 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:03:49.713 19:04:27 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:03:49.971 19:04:27 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:03:49.971 19:04:27 -- json_config/json_config.sh@64 -- # IFS=: 00:03:49.971 19:04:27 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:49.971 19:04:27 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:03:49.971 19:04:27 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:03:49.971 19:04:27 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:03:49.971 19:04:27 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:03:50.230 Nvme0n1p0 Nvme0n1p1 00:03:50.230 19:04:27 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:03:50.230 19:04:27 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:03:50.488 [2024-02-14 19:04:27.659285] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:03:50.489 [2024-02-14 19:04:27.659358] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:03:50.489 00:03:50.489 19:04:27 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:03:50.489 19:04:27 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:03:50.489 Malloc3 00:03:50.489 19:04:27 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:03:50.489 19:04:27 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:03:50.748 [2024-02-14 19:04:28.087310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:03:50.748 [2024-02-14 19:04:28.087373] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:50.748 [2024-02-14 19:04:28.087406] vbdev_passthru.c: 676:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x82a55af00 00:03:50.748 [2024-02-14 19:04:28.087413] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:50.748 [2024-02-14 19:04:28.088161] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:50.748 [2024-02-14 19:04:28.088189] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:03:50.748 PTBdevFromMalloc3 00:03:50.748 19:04:28 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:03:50.748 19:04:28 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:03:51.007 Null0 00:03:51.007 19:04:28 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:03:51.007 19:04:28 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:03:51.266 Malloc0 00:03:51.266 19:04:28 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:03:51.266 19:04:28 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:03:51.525 Malloc1 00:03:51.525 19:04:28 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:03:51.525 19:04:28 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:03:51.783 102400+0 records in 00:03:51.783 102400+0 records out 00:03:51.783 104857600 bytes transferred in 0.285652 secs (367081391 bytes/sec) 00:03:51.783 19:04:29 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:03:51.783 19:04:29 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:03:52.044 aio_disk 00:03:52.044 19:04:29 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:03:52.044 19:04:29 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:03:52.044 19:04:29 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:03:52.304 e13164cd-cb6b-11ee-af6b-4feeebbbadda 00:03:52.304 19:04:29 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:03:52.304 19:04:29 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:03:52.304 19:04:29 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:03:52.563 19:04:29 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:03:52.563 19:04:29 -- json_config/json_config.sh@36 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:03:52.563 19:04:29 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:03:52.563 19:04:29 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:03:52.822 19:04:30 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:03:52.822 19:04:30 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:03:53.082 19:04:30 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:03:53.082 19:04:30 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:03:53.082 19:04:30 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:e15567ed-cb6b-11ee-af6b-4feeebbbadda bdev_register:e1717b96-cb6b-11ee-af6b-4feeebbbadda bdev_register:e18cf329-cb6b-11ee-af6b-4feeebbbadda bdev_register:e1a906fc-cb6b-11ee-af6b-4feeebbbadda 00:03:53.082 19:04:30 -- json_config/json_config.sh@70 -- # local events_to_check 00:03:53.082 19:04:30 -- json_config/json_config.sh@71 -- # local recorded_events 00:03:53.082 19:04:30 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:03:53.082 19:04:30 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:e15567ed-cb6b-11ee-af6b-4feeebbbadda bdev_register:e1717b96-cb6b-11ee-af6b-4feeebbbadda bdev_register:e18cf329-cb6b-11ee-af6b-4feeebbbadda bdev_register:e1a906fc-cb6b-11ee-af6b-4feeebbbadda 00:03:53.082 19:04:30 -- json_config/json_config.sh@74 -- # sort 00:03:53.082 19:04:30 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:03:53.082 19:04:30 -- json_config/json_config.sh@75 -- # get_notifications 00:03:53.082 19:04:30 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:03:53.082 19:04:30 -- json_config/json_config.sh@75 -- # sort 00:03:53.082 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.082 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.082 19:04:30 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:03:53.082 19:04:30 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:03:53.082 19:04:30 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:03:53.341 19:04:30 -- 
json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:e15567ed-cb6b-11ee-af6b-4feeebbbadda 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:e1717b96-cb6b-11ee-af6b-4feeebbbadda 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:e18cf329-cb6b-11ee-af6b-4feeebbbadda 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@65 -- # echo bdev_register:e1a906fc-cb6b-11ee-af6b-4feeebbbadda 00:03:53.341 
19:04:30 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.341 19:04:30 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.341 19:04:30 -- json_config/json_config.sh@77 -- # [[ bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e15567ed-cb6b-11ee-af6b-4feeebbbadda bdev_register:e1717b96-cb6b-11ee-af6b-4feeebbbadda bdev_register:e18cf329-cb6b-11ee-af6b-4feeebbbadda bdev_register:e1a906fc-cb6b-11ee-af6b-4feeebbbadda != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\1\5\5\6\7\e\d\-\c\b\6\b\-\1\1\e\e\-\a\f\6\b\-\4\f\e\e\e\b\b\b\a\d\d\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\1\7\1\7\b\9\6\-\c\b\6\b\-\1\1\e\e\-\a\f\6\b\-\4\f\e\e\e\b\b\b\a\d\d\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\1\8\c\f\3\2\9\-\c\b\6\b\-\1\1\e\e\-\a\f\6\b\-\4\f\e\e\e\b\b\b\a\d\d\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\1\a\9\0\6\f\c\-\c\b\6\b\-\1\1\e\e\-\a\f\6\b\-\4\f\e\e\e\b\b\b\a\d\d\a ]] 00:03:53.341 19:04:30 -- json_config/json_config.sh@89 -- # cat 00:03:53.341 19:04:30 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e15567ed-cb6b-11ee-af6b-4feeebbbadda bdev_register:e1717b96-cb6b-11ee-af6b-4feeebbbadda bdev_register:e18cf329-cb6b-11ee-af6b-4feeebbbadda bdev_register:e1a906fc-cb6b-11ee-af6b-4feeebbbadda 00:03:53.341 Expected events matched: 00:03:53.341 bdev_register:Malloc0 00:03:53.341 bdev_register:Malloc0p0 00:03:53.341 bdev_register:Malloc0p1 00:03:53.341 bdev_register:Malloc0p2 00:03:53.341 bdev_register:Malloc1 00:03:53.341 bdev_register:Malloc3 00:03:53.341 bdev_register:Null0 00:03:53.341 bdev_register:Nvme0n1 00:03:53.341 bdev_register:Nvme0n1p0 00:03:53.341 bdev_register:Nvme0n1p1 00:03:53.341 bdev_register:PTBdevFromMalloc3 00:03:53.341 bdev_register:aio_disk 00:03:53.341 bdev_register:e15567ed-cb6b-11ee-af6b-4feeebbbadda 00:03:53.341 bdev_register:e1717b96-cb6b-11ee-af6b-4feeebbbadda 00:03:53.341 bdev_register:e18cf329-cb6b-11ee-af6b-4feeebbbadda 00:03:53.341 bdev_register:e1a906fc-cb6b-11ee-af6b-4feeebbbadda 00:03:53.341 19:04:30 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:03:53.341 19:04:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:53.341 19:04:30 -- common/autotest_common.sh@10 -- # set +x 00:03:53.341 19:04:30 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:03:53.341 19:04:30 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:03:53.341 19:04:30 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:03:53.341 19:04:30 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:03:53.341 19:04:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:53.341 19:04:30 -- common/autotest_common.sh@10 -- # set +x 00:03:53.341 19:04:30 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:03:53.342 19:04:30 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:53.342 19:04:30 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:53.600 MallocBdevForConfigChangeCheck 00:03:53.600 19:04:30 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:03:53.600 19:04:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:53.600 19:04:30 -- common/autotest_common.sh@10 -- # set +x 00:03:53.600 19:04:30 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:03:53.600 19:04:30 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.860 INFO: shutting down applications... 00:03:53.860 19:04:31 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:03:53.860 19:04:31 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:03:53.860 19:04:31 -- json_config/json_config.sh@431 -- # json_config_clear target 00:03:53.860 19:04:31 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:03:53.860 19:04:31 -- json_config/json_config.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:53.860 [2024-02-14 19:04:31.251459] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:03:54.119 Calling clear_iscsi_subsystem 00:03:54.119 Calling clear_nvmf_subsystem 00:03:54.119 Calling clear_bdev_subsystem 00:03:54.119 Calling clear_accel_subsystem 00:03:54.119 Calling clear_sock_subsystem 00:03:54.119 Calling clear_scheduler_subsystem 00:03:54.119 Calling clear_iobuf_subsystem 00:03:54.119 Calling clear_vmd_subsystem 00:03:54.119 19:04:31 -- json_config/json_config.sh@390 -- # local config_filter=/usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:03:54.119 19:04:31 -- json_config/json_config.sh@396 -- # count=100 00:03:54.119 19:04:31 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:03:54.119 19:04:31 -- json_config/json_config.sh@398 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:54.119 19:04:31 -- json_config/json_config.sh@398 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:54.119 19:04:31 -- json_config/json_config.sh@398 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:03:54.378 19:04:31 -- json_config/json_config.sh@398 -- # break 00:03:54.378 19:04:31 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:03:54.378 19:04:31 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:03:54.378 19:04:31 -- json_config/json_config.sh@120 -- # local app=target 00:03:54.378 19:04:31 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:03:54.378 19:04:31 -- json_config/json_config.sh@124 -- # [[ -n 46547 ]] 00:03:54.378 19:04:31 -- json_config/json_config.sh@127 -- # kill -SIGINT 46547 00:03:54.378 19:04:31 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
00:03:54.379 19:04:31 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:54.379 19:04:31 -- json_config/json_config.sh@130 -- # kill -0 46547 00:03:54.379 19:04:31 -- json_config/json_config.sh@134 -- # sleep 0.5 00:03:54.947 19:04:32 -- json_config/json_config.sh@129 -- # (( i++ )) 00:03:54.947 19:04:32 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:54.947 19:04:32 -- json_config/json_config.sh@130 -- # kill -0 46547 00:03:54.947 19:04:32 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:03:54.947 19:04:32 -- json_config/json_config.sh@132 -- # break 00:03:54.947 19:04:32 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:03:54.947 SPDK target shutdown done 00:03:54.947 19:04:32 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:03:54.947 INFO: relaunching applications... 00:03:54.947 19:04:32 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:03:54.947 19:04:32 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:54.947 19:04:32 -- json_config/json_config.sh@98 -- # local app=target 00:03:54.947 19:04:32 -- json_config/json_config.sh@99 -- # shift 00:03:54.947 19:04:32 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:54.947 19:04:32 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:54.947 19:04:32 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:54.947 19:04:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:54.947 19:04:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:54.947 19:04:32 -- json_config/json_config.sh@111 -- # app_pid[$app]=46705 00:03:54.947 19:04:32 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:54.947 Waiting for target to run... 00:03:54.947 19:04:32 -- json_config/json_config.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:54.947 19:04:32 -- json_config/json_config.sh@114 -- # waitforlisten 46705 /var/tmp/spdk_tgt.sock 00:03:54.947 19:04:32 -- common/autotest_common.sh@817 -- # '[' -z 46705 ']' 00:03:54.947 19:04:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:54.947 19:04:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:54.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:54.947 19:04:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:54.947 19:04:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:54.947 19:04:32 -- common/autotest_common.sh@10 -- # set +x 00:03:54.947 [2024-02-14 19:04:32.231883] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:03:54.947 [2024-02-14 19:04:32.232179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:55.207 EAL: TSC is not safe to use in SMP mode 00:03:55.207 EAL: TSC is not invariant 00:03:55.207 [2024-02-14 19:04:32.600566] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.466 [2024-02-14 19:04:32.713602] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:55.466 [2024-02-14 19:04:32.713727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.466 [2024-02-14 19:04:32.713749] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:03:55.466 [2024-02-14 19:04:32.853890] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:03:55.466 [2024-02-14 19:04:32.853974] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:03:55.466 [2024-02-14 19:04:32.861873] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:03:55.466 [2024-02-14 19:04:32.861894] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:03:55.466 [2024-02-14 19:04:32.869899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:03:55.466 [2024-02-14 19:04:32.869922] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:03:55.466 [2024-02-14 19:04:32.869929] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:03:55.466 [2024-02-14 19:04:32.877926] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:55.724 [2024-02-14 19:04:32.945907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:03:55.724 [2024-02-14 19:04:32.945980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:55.724 [2024-02-14 19:04:32.946004] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d9da500 00:03:55.724 [2024-02-14 19:04:32.946012] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:55.724 [2024-02-14 19:04:32.946082] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:55.724 [2024-02-14 19:04:32.946090] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:03:55.982 19:04:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:55.983 00:03:55.983 19:04:33 -- common/autotest_common.sh@850 -- # return 0 00:03:55.983 19:04:33 -- json_config/json_config.sh@115 -- # echo '' 00:03:55.983 19:04:33 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:03:55.983 INFO: Checking if target configuration is the same... 00:03:55.983 19:04:33 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:55.983 19:04:33 -- json_config/json_config.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.zpcyNr /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:55.983 + '[' 2 -ne 2 ']' 00:03:55.983 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:55.983 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:03:55.983 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:03:55.983 +++ basename /tmp//sh-np.zpcyNr 00:03:55.983 ++ mktemp /tmp/sh-np.zpcyNr.XXX 00:03:55.983 + tmp_file_1=/tmp/sh-np.zpcyNr.gy7 00:03:55.983 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:55.983 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:55.983 + tmp_file_2=/tmp/spdk_tgt_config.json.djp 00:03:55.983 + ret=0 00:03:55.983 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:55.983 19:04:33 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:03:55.983 19:04:33 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.241 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:56.500 + diff -u /tmp/sh-np.zpcyNr.gy7 /tmp/spdk_tgt_config.json.djp 00:03:56.500 + echo 'INFO: JSON config files are the same' 00:03:56.500 INFO: JSON config files are the same 00:03:56.500 + rm /tmp/sh-np.zpcyNr.gy7 /tmp/spdk_tgt_config.json.djp 00:03:56.500 + exit 0 00:03:56.500 19:04:33 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:03:56.500 INFO: changing configuration and checking if this can be detected... 00:03:56.500 19:04:33 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:56.500 19:04:33 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.500 19:04:33 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.500 19:04:33 -- json_config/json_config.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.d9Jckj /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:56.500 + '[' 2 -ne 2 ']' 00:03:56.500 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:56.500 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:03:56.500 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:03:56.500 +++ basename /tmp//sh-np.d9Jckj 00:03:56.501 ++ mktemp /tmp/sh-np.d9Jckj.XXX 00:03:56.759 + tmp_file_1=/tmp/sh-np.d9Jckj.Qu2 00:03:56.759 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:56.759 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.759 + tmp_file_2=/tmp/spdk_tgt_config.json.S7w 00:03:56.759 + ret=0 00:03:56.759 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:56.759 19:04:33 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:03:56.759 19:04:33 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:57.019 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:57.019 + diff -u /tmp/sh-np.d9Jckj.Qu2 /tmp/spdk_tgt_config.json.S7w 00:03:57.019 + ret=1 00:03:57.019 + echo '=== Start of file: /tmp/sh-np.d9Jckj.Qu2 ===' 00:03:57.019 + cat /tmp/sh-np.d9Jckj.Qu2 00:03:57.019 + echo '=== End of file: /tmp/sh-np.d9Jckj.Qu2 ===' 00:03:57.019 + echo '' 00:03:57.019 + echo '=== Start of file: /tmp/spdk_tgt_config.json.S7w ===' 00:03:57.019 + cat /tmp/spdk_tgt_config.json.S7w 00:03:57.019 + echo '=== End of file: /tmp/spdk_tgt_config.json.S7w ===' 00:03:57.019 + echo '' 00:03:57.019 + rm /tmp/sh-np.d9Jckj.Qu2 /tmp/spdk_tgt_config.json.S7w 00:03:57.019 + exit 1 00:03:57.019 INFO: configuration change detected. 00:03:57.019 19:04:34 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:03:57.019 19:04:34 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:03:57.019 19:04:34 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:03:57.019 19:04:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:57.019 19:04:34 -- common/autotest_common.sh@10 -- # set +x 00:03:57.019 19:04:34 -- json_config/json_config.sh@360 -- # local ret=0 00:03:57.019 19:04:34 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:03:57.019 19:04:34 -- json_config/json_config.sh@370 -- # [[ -n 46705 ]] 00:03:57.019 19:04:34 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:03:57.019 19:04:34 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:03:57.019 19:04:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:57.019 19:04:34 -- common/autotest_common.sh@10 -- # set +x 00:03:57.019 19:04:34 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:03:57.019 19:04:34 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:03:57.019 19:04:34 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:03:57.277 19:04:34 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:03:57.277 19:04:34 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:03:57.536 19:04:34 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:03:57.536 19:04:34 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:03:57.795 19:04:35 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:03:57.795 19:04:35 -- json_config/json_config.sh@36 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:03:57.795 19:04:35 -- json_config/json_config.sh@246 -- # uname -s 00:03:58.055 19:04:35 -- json_config/json_config.sh@246 -- # [[ FreeBSD = Linux ]] 00:03:58.055 19:04:35 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:03:58.055 19:04:35 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:03:58.055 19:04:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:58.055 19:04:35 -- common/autotest_common.sh@10 -- # set +x 00:03:58.055 19:04:35 -- json_config/json_config.sh@376 -- # killprocess 46705 00:03:58.055 19:04:35 -- common/autotest_common.sh@924 -- # '[' -z 46705 ']' 00:03:58.055 19:04:35 -- common/autotest_common.sh@928 -- # kill -0 46705 00:03:58.055 19:04:35 -- common/autotest_common.sh@929 -- # uname 00:03:58.055 19:04:35 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:03:58.055 19:04:35 -- common/autotest_common.sh@932 -- # ps -c -o command 46705 00:03:58.055 19:04:35 -- common/autotest_common.sh@932 -- # tail -1 00:03:58.055 19:04:35 -- common/autotest_common.sh@932 -- # process_name=spdk_tgt 00:03:58.055 killing process with pid 46705 00:03:58.055 19:04:35 -- common/autotest_common.sh@934 -- # '[' spdk_tgt = sudo ']' 00:03:58.055 19:04:35 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 46705' 00:03:58.055 19:04:35 -- common/autotest_common.sh@943 -- # kill 46705 00:03:58.055 [2024-02-14 19:04:35.256216] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:03:58.055 19:04:35 -- common/autotest_common.sh@948 -- # wait 46705 00:03:58.314 19:04:35 -- json_config/json_config.sh@379 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:58.314 19:04:35 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:03:58.314 19:04:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:58.314 19:04:35 -- common/autotest_common.sh@10 -- # set +x 00:03:58.314 19:04:35 -- json_config/json_config.sh@381 -- # return 0 00:03:58.314 INFO: Success 00:03:58.314 19:04:35 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:03:58.314 00:03:58.314 real 0m10.452s 00:03:58.314 user 0m15.769s 00:03:58.314 sys 0m2.450s 00:03:58.314 19:04:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:58.314 19:04:35 -- common/autotest_common.sh@10 -- # set +x 00:03:58.314 ************************************ 00:03:58.314 END TEST json_config 00:03:58.314 ************************************ 00:03:58.314 19:04:35 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:58.314 19:04:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:03:58.314 19:04:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:03:58.314 19:04:35 -- common/autotest_common.sh@10 -- # set +x 00:03:58.314 ************************************ 00:03:58.314 START TEST json_config_extra_key 00:03:58.314 ************************************ 00:03:58.314 19:04:35 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:03:58.574 19:04:35 -- nvmf/common.sh@7 -- # uname -s 00:03:58.574 19:04:35 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:58.574 19:04:35 -- nvmf/common.sh@7 -- # return 0 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:58.574 INFO: launching applications... 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@25 -- # shift 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=46821 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:03:58.574 Waiting for target to run... 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 46821 /var/tmp/spdk_tgt.sock 00:03:58.574 19:04:35 -- json_config/json_config_extra_key.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:58.574 19:04:35 -- common/autotest_common.sh@817 -- # '[' -z 46821 ']' 00:03:58.574 19:04:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.574 19:04:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:58.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:58.574 19:04:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.574 19:04:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:58.574 19:04:35 -- common/autotest_common.sh@10 -- # set +x 00:03:58.574 [2024-02-14 19:04:35.865206] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:03:58.574 [2024-02-14 19:04:35.865377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:58.833 EAL: TSC is not safe to use in SMP mode 00:03:58.833 EAL: TSC is not invariant 00:03:58.833 [2024-02-14 19:04:36.232200] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.092 [2024-02-14 19:04:36.346188] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:59.092 [2024-02-14 19:04:36.346309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.092 [2024-02-14 19:04:36.346329] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:03:59.660 19:04:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:59.660 19:04:36 -- common/autotest_common.sh@850 -- # return 0 00:03:59.660 00:03:59.660 19:04:36 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:03:59.660 INFO: shutting down applications... 00:03:59.660 19:04:36 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:03:59.660 19:04:36 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:03:59.660 19:04:36 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:03:59.660 19:04:36 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:03:59.660 19:04:36 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 46821 ]] 00:03:59.660 19:04:36 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 46821 00:03:59.660 [2024-02-14 19:04:36.936241] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:03:59.660 19:04:36 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:03:59.660 19:04:36 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:03:59.660 19:04:36 -- json_config/json_config_extra_key.sh@50 -- # kill -0 46821 00:03:59.660 19:04:36 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:00.228 19:04:37 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:00.228 19:04:37 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:00.228 19:04:37 -- json_config/json_config_extra_key.sh@50 -- # kill -0 46821 00:04:00.228 19:04:37 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:00.228 19:04:37 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:00.228 19:04:37 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:00.228 SPDK target shutdown done 00:04:00.228 19:04:37 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:00.228 Success 00:04:00.228 19:04:37 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:00.228 00:04:00.228 real 0m1.787s 00:04:00.228 user 0m1.568s 00:04:00.228 sys 0m0.626s 00:04:00.228 19:04:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:00.228 19:04:37 -- common/autotest_common.sh@10 -- # set +x 00:04:00.228 ************************************ 00:04:00.228 END TEST json_config_extra_key 00:04:00.228 ************************************ 00:04:00.228 19:04:37 -- spdk/autotest.sh@180 -- # run_test alias_rpc /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 
00:04:00.228 19:04:37 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:00.228 19:04:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:00.228 19:04:37 -- common/autotest_common.sh@10 -- # set +x 00:04:00.228 ************************************ 00:04:00.228 START TEST alias_rpc 00:04:00.228 ************************************ 00:04:00.228 19:04:37 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:00.487 * Looking for test storage... 00:04:00.487 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:00.487 19:04:37 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:00.487 19:04:37 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=46870 00:04:00.487 19:04:37 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 46870 00:04:00.487 19:04:37 -- common/autotest_common.sh@817 -- # '[' -z 46870 ']' 00:04:00.487 19:04:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.487 19:04:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:00.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.487 19:04:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.487 19:04:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:00.487 19:04:37 -- common/autotest_common.sh@10 -- # set +x 00:04:00.487 19:04:37 -- alias_rpc/alias_rpc.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:00.487 [2024-02-14 19:04:37.729883] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:00.487 [2024-02-14 19:04:37.730083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:01.060 EAL: TSC is not safe to use in SMP mode 00:04:01.060 EAL: TSC is not invariant 00:04:01.060 [2024-02-14 19:04:38.469425] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.319 [2024-02-14 19:04:38.582269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:01.319 [2024-02-14 19:04:38.582394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.578 19:04:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:01.578 19:04:38 -- common/autotest_common.sh@850 -- # return 0 00:04:01.578 19:04:38 -- alias_rpc/alias_rpc.sh@17 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:01.837 19:04:39 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 46870 00:04:01.837 19:04:39 -- common/autotest_common.sh@924 -- # '[' -z 46870 ']' 00:04:01.837 19:04:39 -- common/autotest_common.sh@928 -- # kill -0 46870 00:04:01.837 19:04:39 -- common/autotest_common.sh@929 -- # uname 00:04:01.837 19:04:39 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:04:01.838 19:04:39 -- common/autotest_common.sh@932 -- # ps -c -o command 46870 00:04:01.838 19:04:39 -- common/autotest_common.sh@932 -- # tail -1 00:04:01.838 19:04:39 -- common/autotest_common.sh@932 -- # process_name=spdk_tgt 00:04:01.838 killing process with pid 46870 00:04:01.838 19:04:39 -- common/autotest_common.sh@934 -- # '[' spdk_tgt = sudo ']' 00:04:01.838 19:04:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 46870' 00:04:01.838 19:04:39 -- common/autotest_common.sh@943 -- # kill 
46870 00:04:01.838 19:04:39 -- common/autotest_common.sh@948 -- # wait 46870 00:04:02.098 00:04:02.098 real 0m1.951s 00:04:02.098 user 0m1.733s 00:04:02.098 sys 0m1.103s 00:04:02.098 19:04:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:02.098 19:04:39 -- common/autotest_common.sh@10 -- # set +x 00:04:02.098 ************************************ 00:04:02.098 END TEST alias_rpc 00:04:02.098 ************************************ 00:04:02.098 19:04:39 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:02.098 19:04:39 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:02.098 19:04:39 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:02.098 19:04:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:02.098 19:04:39 -- common/autotest_common.sh@10 -- # set +x 00:04:02.357 ************************************ 00:04:02.357 START TEST spdkcli_tcp 00:04:02.357 ************************************ 00:04:02.357 19:04:39 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:02.357 * Looking for test storage... 00:04:02.357 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:02.357 19:04:39 -- spdkcli/tcp.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:02.357 19:04:39 -- spdkcli/common.sh@6 -- # spdkcli_job=/usr/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:02.357 19:04:39 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:02.357 19:04:39 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:02.357 19:04:39 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:02.357 19:04:39 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:02.357 19:04:39 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:02.357 19:04:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:02.357 19:04:39 -- common/autotest_common.sh@10 -- # set +x 00:04:02.357 19:04:39 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=46926 00:04:02.357 19:04:39 -- spdkcli/tcp.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:02.357 19:04:39 -- spdkcli/tcp.sh@27 -- # waitforlisten 46926 00:04:02.357 19:04:39 -- common/autotest_common.sh@817 -- # '[' -z 46926 ']' 00:04:02.357 19:04:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.357 19:04:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:02.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.357 19:04:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.357 19:04:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:02.357 19:04:39 -- common/autotest_common.sh@10 -- # set +x 00:04:02.357 [2024-02-14 19:04:39.695076] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:04:02.357 [2024-02-14 19:04:39.695322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:03.293 EAL: TSC is not safe to use in SMP mode 00:04:03.293 EAL: TSC is not invariant 00:04:03.293 [2024-02-14 19:04:40.478141] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:03.293 [2024-02-14 19:04:40.596086] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:03.293 [2024-02-14 19:04:40.597586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.293 [2024-02-14 19:04:40.597437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.293 19:04:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:03.293 19:04:40 -- common/autotest_common.sh@850 -- # return 0 00:04:03.293 19:04:40 -- spdkcli/tcp.sh@31 -- # socat_pid=46930 00:04:03.293 19:04:40 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:03.293 19:04:40 -- spdkcli/tcp.sh@33 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:03.859 [ 00:04:03.859 "spdk_get_version", 00:04:03.859 "rpc_get_methods", 00:04:03.859 "env_dpdk_get_mem_stats", 00:04:03.859 "trace_get_info", 00:04:03.859 "trace_get_tpoint_group_mask", 00:04:03.859 "trace_disable_tpoint_group", 00:04:03.859 "trace_enable_tpoint_group", 00:04:03.859 "trace_clear_tpoint_mask", 00:04:03.859 "trace_set_tpoint_mask", 00:04:03.859 "notify_get_notifications", 00:04:03.859 "notify_get_types", 00:04:03.859 "accel_get_stats", 00:04:03.859 "accel_set_options", 00:04:03.859 "accel_set_driver", 00:04:03.859 "accel_crypto_key_destroy", 00:04:03.859 "accel_crypto_keys_get", 00:04:03.859 "accel_crypto_key_create", 00:04:03.859 "accel_assign_opc", 00:04:03.859 "accel_get_module_info", 00:04:03.859 "accel_get_opc_assignments", 00:04:03.859 "bdev_get_histogram", 00:04:03.859 "bdev_enable_histogram", 00:04:03.859 "bdev_set_qos_limit", 00:04:03.859 "bdev_set_qd_sampling_period", 00:04:03.859 "bdev_get_bdevs", 00:04:03.859 "bdev_reset_iostat", 00:04:03.859 "bdev_get_iostat", 00:04:03.859 "bdev_examine", 00:04:03.859 "bdev_wait_for_examine", 00:04:03.859 "bdev_set_options", 00:04:03.859 "sock_set_default_impl", 00:04:03.859 "sock_impl_set_options", 00:04:03.859 "sock_impl_get_options", 00:04:03.859 "framework_get_pci_devices", 00:04:03.859 "framework_get_config", 00:04:03.859 "framework_get_subsystems", 00:04:03.859 "thread_set_cpumask", 00:04:03.859 "framework_get_scheduler", 00:04:03.859 "framework_set_scheduler", 00:04:03.859 "framework_get_reactors", 00:04:03.859 "thread_get_io_channels", 00:04:03.859 "thread_get_pollers", 00:04:03.859 "thread_get_stats", 00:04:03.859 "framework_monitor_context_switch", 00:04:03.859 "spdk_kill_instance", 00:04:03.859 "log_enable_timestamps", 00:04:03.859 "log_get_flags", 00:04:03.859 "log_clear_flag", 00:04:03.859 "log_set_flag", 00:04:03.859 "log_get_level", 00:04:03.859 "log_set_level", 00:04:03.859 "log_get_print_level", 00:04:03.859 "log_set_print_level", 00:04:03.859 "framework_enable_cpumask_locks", 00:04:03.859 "framework_disable_cpumask_locks", 00:04:03.859 "framework_wait_init", 00:04:03.859 "framework_start_init", 00:04:03.859 "iobuf_get_stats", 00:04:03.859 "iobuf_set_options", 00:04:03.859 "vmd_rescan", 00:04:03.859 "vmd_remove_device", 00:04:03.859 "vmd_enable", 00:04:03.859 "nvmf_subsystem_get_listeners", 00:04:03.859 "nvmf_subsystem_get_qpairs", 
00:04:03.859 "nvmf_subsystem_get_controllers", 00:04:03.859 "nvmf_get_stats", 00:04:03.859 "nvmf_get_transports", 00:04:03.859 "nvmf_create_transport", 00:04:03.859 "nvmf_get_targets", 00:04:03.859 "nvmf_delete_target", 00:04:03.859 "nvmf_create_target", 00:04:03.859 "nvmf_subsystem_allow_any_host", 00:04:03.859 "nvmf_subsystem_remove_host", 00:04:03.859 "nvmf_subsystem_add_host", 00:04:03.859 "nvmf_subsystem_remove_ns", 00:04:03.859 "nvmf_subsystem_add_ns", 00:04:03.859 "nvmf_subsystem_listener_set_ana_state", 00:04:03.860 "nvmf_discovery_get_referrals", 00:04:03.860 "nvmf_discovery_remove_referral", 00:04:03.860 "nvmf_discovery_add_referral", 00:04:03.860 "nvmf_subsystem_remove_listener", 00:04:03.860 "nvmf_subsystem_add_listener", 00:04:03.860 "nvmf_delete_subsystem", 00:04:03.860 "nvmf_create_subsystem", 00:04:03.860 "nvmf_get_subsystems", 00:04:03.860 "nvmf_set_crdt", 00:04:03.860 "nvmf_set_config", 00:04:03.860 "nvmf_set_max_subsystems", 00:04:03.860 "scsi_get_devices", 00:04:03.860 "iscsi_set_options", 00:04:03.860 "iscsi_get_auth_groups", 00:04:03.860 "iscsi_auth_group_remove_secret", 00:04:03.860 "iscsi_auth_group_add_secret", 00:04:03.860 "iscsi_delete_auth_group", 00:04:03.860 "iscsi_create_auth_group", 00:04:03.860 "iscsi_set_discovery_auth", 00:04:03.860 "iscsi_get_options", 00:04:03.860 "iscsi_target_node_request_logout", 00:04:03.860 "iscsi_target_node_set_redirect", 00:04:03.860 "iscsi_target_node_set_auth", 00:04:03.860 "iscsi_target_node_add_lun", 00:04:03.860 "iscsi_get_connections", 00:04:03.860 "iscsi_portal_group_set_auth", 00:04:03.860 "iscsi_start_portal_group", 00:04:03.860 "iscsi_delete_portal_group", 00:04:03.860 "iscsi_create_portal_group", 00:04:03.860 "iscsi_get_portal_groups", 00:04:03.860 "iscsi_delete_target_node", 00:04:03.860 "iscsi_target_node_remove_pg_ig_maps", 00:04:03.860 "iscsi_target_node_add_pg_ig_maps", 00:04:03.860 "iscsi_create_target_node", 00:04:03.860 "iscsi_get_target_nodes", 00:04:03.860 "iscsi_delete_initiator_group", 00:04:03.860 "iscsi_initiator_group_remove_initiators", 00:04:03.860 "iscsi_initiator_group_add_initiators", 00:04:03.860 "iscsi_create_initiator_group", 00:04:03.860 "iscsi_get_initiator_groups", 00:04:03.860 "iaa_scan_accel_module", 00:04:03.860 "dsa_scan_accel_module", 00:04:03.860 "ioat_scan_accel_module", 00:04:03.860 "accel_error_inject_error", 00:04:03.860 "bdev_aio_delete", 00:04:03.860 "bdev_aio_rescan", 00:04:03.860 "bdev_aio_create", 00:04:03.860 "blobfs_create", 00:04:03.860 "blobfs_detect", 00:04:03.860 "blobfs_set_cache_size", 00:04:03.860 "bdev_zone_block_delete", 00:04:03.860 "bdev_zone_block_create", 00:04:03.860 "bdev_delay_delete", 00:04:03.860 "bdev_delay_create", 00:04:03.860 "bdev_delay_update_latency", 00:04:03.860 "bdev_split_delete", 00:04:03.860 "bdev_split_create", 00:04:03.860 "bdev_error_inject_error", 00:04:03.860 "bdev_error_delete", 00:04:03.860 "bdev_error_create", 00:04:03.860 "bdev_raid_set_options", 00:04:03.860 "bdev_raid_remove_base_bdev", 00:04:03.860 "bdev_raid_add_base_bdev", 00:04:03.860 "bdev_raid_delete", 00:04:03.860 "bdev_raid_create", 00:04:03.860 "bdev_raid_get_bdevs", 00:04:03.860 "bdev_lvol_grow_lvstore", 00:04:03.860 "bdev_lvol_get_lvols", 00:04:03.860 "bdev_lvol_get_lvstores", 00:04:03.860 "bdev_lvol_delete", 00:04:03.860 "bdev_lvol_set_read_only", 00:04:03.860 "bdev_lvol_resize", 00:04:03.860 "bdev_lvol_decouple_parent", 00:04:03.860 "bdev_lvol_inflate", 00:04:03.860 "bdev_lvol_rename", 00:04:03.860 "bdev_lvol_clone_bdev", 00:04:03.860 "bdev_lvol_clone", 00:04:03.860 
"bdev_lvol_snapshot", 00:04:03.860 "bdev_lvol_create", 00:04:03.860 "bdev_lvol_delete_lvstore", 00:04:03.860 "bdev_lvol_rename_lvstore", 00:04:03.860 "bdev_lvol_create_lvstore", 00:04:03.860 "bdev_passthru_delete", 00:04:03.860 "bdev_passthru_create", 00:04:03.860 "bdev_nvme_send_cmd", 00:04:03.860 "bdev_nvme_get_path_iostat", 00:04:03.860 "bdev_nvme_get_mdns_discovery_info", 00:04:03.860 "bdev_nvme_stop_mdns_discovery", 00:04:03.860 "bdev_nvme_start_mdns_discovery", 00:04:03.860 "bdev_nvme_set_multipath_policy", 00:04:03.860 "bdev_nvme_set_preferred_path", 00:04:03.860 "bdev_nvme_get_io_paths", 00:04:03.860 "bdev_nvme_remove_error_injection", 00:04:03.860 "bdev_nvme_add_error_injection", 00:04:03.860 "bdev_nvme_get_discovery_info", 00:04:03.860 "bdev_nvme_stop_discovery", 00:04:03.860 "bdev_nvme_start_discovery", 00:04:03.860 "bdev_nvme_get_controller_health_info", 00:04:03.860 "bdev_nvme_disable_controller", 00:04:03.860 "bdev_nvme_enable_controller", 00:04:03.860 "bdev_nvme_reset_controller", 00:04:03.860 "bdev_nvme_get_transport_statistics", 00:04:03.860 "bdev_nvme_apply_firmware", 00:04:03.860 "bdev_nvme_detach_controller", 00:04:03.860 "bdev_nvme_get_controllers", 00:04:03.860 "bdev_nvme_attach_controller", 00:04:03.860 "bdev_nvme_set_hotplug", 00:04:03.860 "bdev_nvme_set_options", 00:04:03.860 "bdev_null_resize", 00:04:03.860 "bdev_null_delete", 00:04:03.860 "bdev_null_create", 00:04:03.860 "bdev_malloc_delete", 00:04:03.860 "bdev_malloc_create" 00:04:03.860 ] 00:04:03.860 19:04:41 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:03.860 19:04:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:03.860 19:04:41 -- common/autotest_common.sh@10 -- # set +x 00:04:03.860 19:04:41 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:03.860 19:04:41 -- spdkcli/tcp.sh@38 -- # killprocess 46926 00:04:03.860 19:04:41 -- common/autotest_common.sh@924 -- # '[' -z 46926 ']' 00:04:03.860 19:04:41 -- common/autotest_common.sh@928 -- # kill -0 46926 00:04:03.860 19:04:41 -- common/autotest_common.sh@929 -- # uname 00:04:03.860 19:04:41 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:04:03.860 19:04:41 -- common/autotest_common.sh@932 -- # ps -c -o command 46926 00:04:03.860 19:04:41 -- common/autotest_common.sh@932 -- # tail -1 00:04:03.860 19:04:41 -- common/autotest_common.sh@932 -- # process_name=spdk_tgt 00:04:03.860 19:04:41 -- common/autotest_common.sh@934 -- # '[' spdk_tgt = sudo ']' 00:04:03.860 killing process with pid 46926 00:04:03.860 19:04:41 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 46926' 00:04:03.860 19:04:41 -- common/autotest_common.sh@943 -- # kill 46926 00:04:03.860 19:04:41 -- common/autotest_common.sh@948 -- # wait 46926 00:04:04.118 00:04:04.118 real 0m1.927s 00:04:04.118 user 0m2.498s 00:04:04.118 sys 0m1.088s 00:04:04.118 19:04:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:04.118 19:04:41 -- common/autotest_common.sh@10 -- # set +x 00:04:04.118 ************************************ 00:04:04.118 END TEST spdkcli_tcp 00:04:04.118 ************************************ 00:04:04.118 19:04:41 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:04.118 19:04:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:04.118 19:04:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:04.118 19:04:41 -- common/autotest_common.sh@10 -- # set +x 00:04:04.118 ************************************ 
00:04:04.118 START TEST dpdk_mem_utility 00:04:04.118 ************************************ 00:04:04.118 19:04:41 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:04.377 * Looking for test storage... 00:04:04.377 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:04.377 19:04:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:04.377 19:04:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=46996 00:04:04.377 19:04:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 46996 00:04:04.377 19:04:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:04.377 19:04:41 -- common/autotest_common.sh@817 -- # '[' -z 46996 ']' 00:04:04.377 19:04:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.377 19:04:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:04.377 19:04:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.377 19:04:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:04.377 19:04:41 -- common/autotest_common.sh@10 -- # set +x 00:04:04.377 [2024-02-14 19:04:41.650933] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:04.377 [2024-02-14 19:04:41.651261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:05.312 EAL: TSC is not safe to use in SMP mode 00:04:05.312 EAL: TSC is not invariant 00:04:05.312 [2024-02-14 19:04:42.396840] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.312 [2024-02-14 19:04:42.533728] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:05.312 [2024-02-14 19:04:42.533876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.312 19:04:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:05.312 19:04:42 -- common/autotest_common.sh@850 -- # return 0 00:04:05.312 19:04:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:05.312 19:04:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:05.312 19:04:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:05.312 19:04:42 -- common/autotest_common.sh@10 -- # set +x 00:04:05.312 { 00:04:05.312 "filename": "/tmp/spdk_mem_dump.txt" 00:04:05.312 } 00:04:05.312 19:04:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:05.312 19:04:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:05.570 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:04:05.570 1 heaps totaling size 2048.000000 MiB 00:04:05.570 size: 2048.000000 MiB heap id: 0 00:04:05.570 end heaps---------- 00:04:05.570 8 mempools totaling size 592.563660 MiB 00:04:05.570 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:04:05.570 size: 153.489014 MiB name: PDU_data_out_Pool 00:04:05.570 size: 84.500549 MiB name: bdev_io_46996 00:04:05.570 size: 51.008362 MiB name: evtpool_46996 00:04:05.570 size: 50.000549 MiB name: msgpool_46996 
00:04:05.570 size: 21.758911 MiB name: PDU_Pool 00:04:05.570 size: 19.508911 MiB name: SCSI_TASK_Pool 00:04:05.570 size: 0.026123 MiB name: Session_Pool 00:04:05.570 end mempools------- 00:04:05.570 6 memzones totaling size 4.142822 MiB 00:04:05.570 size: 1.000366 MiB name: RG_ring_0_46996 00:04:05.570 size: 1.000366 MiB name: RG_ring_1_46996 00:04:05.570 size: 1.000366 MiB name: RG_ring_4_46996 00:04:05.570 size: 1.000366 MiB name: RG_ring_5_46996 00:04:05.570 size: 0.125366 MiB name: RG_ring_2_46996 00:04:05.570 size: 0.015991 MiB name: RG_ring_3_46996 00:04:05.570 end memzones------- 00:04:05.570 19:04:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:05.570 heap id: 0 total size: 2048.000000 MiB number of busy elements: 39 number of free elements: 3 00:04:05.570 list of free elements. size: 1254.071899 MiB 00:04:05.570 element at address: 0x1060000000 with size: 1254.001099 MiB 00:04:05.570 element at address: 0x10c8000000 with size: 0.070129 MiB 00:04:05.570 element at address: 0x10d98b6000 with size: 0.000671 MiB 00:04:05.570 list of standard malloc elements. size: 197.217957 MiB 00:04:05.570 element at address: 0x10cd4b0f80 with size: 132.000122 MiB 00:04:05.570 element at address: 0x10d58b5f80 with size: 64.000122 MiB 00:04:05.570 element at address: 0x10c7efff80 with size: 1.000122 MiB 00:04:05.570 element at address: 0x10dffd9f00 with size: 0.140747 MiB 00:04:05.571 element at address: 0x10c8020c80 with size: 0.062622 MiB 00:04:05.571 element at address: 0x10dfffdf80 with size: 0.007935 MiB 00:04:05.571 element at address: 0x10d58b1000 with size: 0.000305 MiB 00:04:05.571 element at address: 0x10d58b18c0 with size: 0.000305 MiB 00:04:05.571 element at address: 0x10d58b1140 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d58b1200 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d58b12c0 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d58b1380 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d58b1440 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d58b1500 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d58b15c0 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d58b1680 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d58b1740 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d58b1800 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d58b1a00 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d58b1ac0 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d58b1cc0 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d98b62c0 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d98b6380 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d98b6440 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d98b6500 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d98b65c0 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d98b6680 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d98b6880 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d98b6940 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d98d6c00 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d98d6cc0 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d99d6f80 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d9ad7240 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10d9ad7300 with size: 
0.000183 MiB 00:04:05.571 element at address: 0x10dccd7640 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10dccd7840 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10dccd7900 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10dfed7c40 with size: 0.000183 MiB 00:04:05.571 element at address: 0x10dffd9e40 with size: 0.000183 MiB 00:04:05.571 list of memzone associated elements. size: 596.710144 MiB 00:04:05.571 element at address: 0x10b93f7f00 with size: 211.013000 MiB 00:04:05.571 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:04:05.571 element at address: 0x10afa82c80 with size: 152.449524 MiB 00:04:05.571 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:04:05.571 element at address: 0x10c8030d00 with size: 84.000122 MiB 00:04:05.571 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_46996_0 00:04:05.571 element at address: 0x10dccd79c0 with size: 48.000122 MiB 00:04:05.571 associated memzone info: size: 48.000000 MiB name: MP_evtpool_46996_0 00:04:05.571 element at address: 0x10d9ad73c0 with size: 48.000122 MiB 00:04:05.571 associated memzone info: size: 48.000000 MiB name: MP_msgpool_46996_0 00:04:05.571 element at address: 0x10c683d780 with size: 20.250671 MiB 00:04:05.571 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:04:05.571 element at address: 0x10ae700680 with size: 18.000671 MiB 00:04:05.571 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:04:05.571 element at address: 0x10dfcd7a40 with size: 2.000488 MiB 00:04:05.571 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_46996 00:04:05.571 element at address: 0x10dcad7440 with size: 2.000488 MiB 00:04:05.571 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_46996 00:04:05.571 element at address: 0x10dfed7d00 with size: 1.008118 MiB 00:04:05.571 associated memzone info: size: 1.007996 MiB name: MP_evtpool_46996 00:04:05.571 element at address: 0x10c7cfdc40 with size: 1.008118 MiB 00:04:05.571 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:05.571 element at address: 0x10c673b640 with size: 1.008118 MiB 00:04:05.571 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:05.571 element at address: 0x10b92f5dc0 with size: 1.008118 MiB 00:04:05.571 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:05.571 element at address: 0x10af980b40 with size: 1.008118 MiB 00:04:05.571 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:05.571 element at address: 0x10d99d7040 with size: 1.000488 MiB 00:04:05.571 associated memzone info: size: 1.000366 MiB name: RG_ring_0_46996 00:04:05.571 element at address: 0x10d98d6d80 with size: 1.000488 MiB 00:04:05.571 associated memzone info: size: 1.000366 MiB name: RG_ring_1_46996 00:04:05.571 element at address: 0x10c7dffd80 with size: 1.000488 MiB 00:04:05.571 associated memzone info: size: 1.000366 MiB name: RG_ring_4_46996 00:04:05.571 element at address: 0x10ae600480 with size: 1.000488 MiB 00:04:05.571 associated memzone info: size: 1.000366 MiB name: RG_ring_5_46996 00:04:05.571 element at address: 0x10cd430d80 with size: 0.500488 MiB 00:04:05.571 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_46996 00:04:05.571 element at address: 0x10c7c7da40 with size: 0.500488 MiB 00:04:05.571 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:05.571 element at address: 0x10af900940 with size: 0.500488 
MiB 00:04:05.571 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:05.571 element at address: 0x10c66fb440 with size: 0.250488 MiB 00:04:05.571 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:05.571 element at address: 0x10d98b6a00 with size: 0.125488 MiB 00:04:05.571 associated memzone info: size: 0.125366 MiB name: RG_ring_2_46996 00:04:05.571 element at address: 0x10c8018a80 with size: 0.031738 MiB 00:04:05.571 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:05.571 element at address: 0x10c8011f40 with size: 0.023743 MiB 00:04:05.571 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:05.571 element at address: 0x10d58b1d80 with size: 0.016113 MiB 00:04:05.571 associated memzone info: size: 0.015991 MiB name: RG_ring_3_46996 00:04:05.571 element at address: 0x10c8018080 with size: 0.002441 MiB 00:04:05.571 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:05.571 element at address: 0x10dccd7700 with size: 0.000305 MiB 00:04:05.571 associated memzone info: size: 0.000183 MiB name: MP_msgpool_46996 00:04:05.571 element at address: 0x10d58b1b80 with size: 0.000305 MiB 00:04:05.571 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_46996 00:04:05.571 element at address: 0x10d98b6740 with size: 0.000305 MiB 00:04:05.571 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:05.571 19:04:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:05.571 19:04:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 46996 00:04:05.571 19:04:42 -- common/autotest_common.sh@924 -- # '[' -z 46996 ']' 00:04:05.571 19:04:42 -- common/autotest_common.sh@928 -- # kill -0 46996 00:04:05.571 19:04:42 -- common/autotest_common.sh@929 -- # uname 00:04:05.571 19:04:42 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:04:05.571 19:04:42 -- common/autotest_common.sh@932 -- # ps -c -o command 46996 00:04:05.571 19:04:42 -- common/autotest_common.sh@932 -- # tail -1 00:04:05.571 19:04:42 -- common/autotest_common.sh@932 -- # process_name=spdk_tgt 00:04:05.571 19:04:42 -- common/autotest_common.sh@934 -- # '[' spdk_tgt = sudo ']' 00:04:05.571 killing process with pid 46996 00:04:05.571 19:04:42 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 46996' 00:04:05.571 19:04:42 -- common/autotest_common.sh@943 -- # kill 46996 00:04:05.571 19:04:42 -- common/autotest_common.sh@948 -- # wait 46996 00:04:05.829 00:04:05.829 real 0m1.719s 00:04:05.829 user 0m1.470s 00:04:05.829 sys 0m0.995s 00:04:05.829 19:04:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:05.829 ************************************ 00:04:05.829 END TEST dpdk_mem_utility 00:04:05.830 ************************************ 00:04:05.830 19:04:43 -- common/autotest_common.sh@10 -- # set +x 00:04:05.830 19:04:43 -- spdk/autotest.sh@187 -- # run_test event /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:05.830 19:04:43 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:05.830 19:04:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:05.830 19:04:43 -- common/autotest_common.sh@10 -- # set +x 00:04:06.087 ************************************ 00:04:06.088 START TEST event 00:04:06.088 ************************************ 00:04:06.088 19:04:43 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:06.088 * Looking for test storage... 
00:04:06.088 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/event 00:04:06.088 19:04:43 -- event/event.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:06.088 19:04:43 -- bdev/nbd_common.sh@6 -- # set -e 00:04:06.088 19:04:43 -- event/event.sh@45 -- # run_test event_perf /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:06.088 19:04:43 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:04:06.088 19:04:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:06.088 19:04:43 -- common/autotest_common.sh@10 -- # set +x 00:04:06.088 ************************************ 00:04:06.088 START TEST event_perf 00:04:06.088 ************************************ 00:04:06.088 19:04:43 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:06.088 Running I/O for 1 seconds...[2024-02-14 19:04:43.441460] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:06.088 [2024-02-14 19:04:43.441788] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:07.023 EAL: TSC is not safe to use in SMP mode 00:04:07.023 EAL: TSC is not invariant 00:04:07.023 [2024-02-14 19:04:44.204933] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:07.023 [2024-02-14 19:04:44.326096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:07.023 [2024-02-14 19:04:44.326323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.023 [2024-02-14 19:04:44.326162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:07.023 [2024-02-14 19:04:44.326319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:08.397 Running I/O for 1 seconds... 00:04:08.397 lcore 0: 2332133 00:04:08.397 lcore 1: 2332133 00:04:08.397 lcore 2: 2332132 00:04:08.397 lcore 3: 2332133 00:04:08.397 done. 00:04:08.397 00:04:08.397 real 0m2.041s 00:04:08.397 user 0m4.215s 00:04:08.397 sys 0m0.823s 00:04:08.397 19:04:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:08.397 ************************************ 00:04:08.397 END TEST event_perf 00:04:08.397 ************************************ 00:04:08.397 19:04:45 -- common/autotest_common.sh@10 -- # set +x 00:04:08.397 19:04:45 -- event/event.sh@46 -- # run_test event_reactor /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:08.397 19:04:45 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:04:08.397 19:04:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:08.397 19:04:45 -- common/autotest_common.sh@10 -- # set +x 00:04:08.397 ************************************ 00:04:08.397 START TEST event_reactor 00:04:08.397 ************************************ 00:04:08.397 19:04:45 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:08.397 [2024-02-14 19:04:45.527738] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:04:08.397 [2024-02-14 19:04:45.528089] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:08.963 EAL: TSC is not safe to use in SMP mode 00:04:08.963 EAL: TSC is not invariant 00:04:08.963 [2024-02-14 19:04:46.285580] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.222 [2024-02-14 19:04:46.406391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.156 test_start 00:04:10.156 oneshot 00:04:10.156 tick 100 00:04:10.156 tick 100 00:04:10.157 tick 250 00:04:10.157 tick 100 00:04:10.157 tick 100 00:04:10.157 tick 100 00:04:10.157 tick 250 00:04:10.157 tick 500 00:04:10.157 tick 100 00:04:10.157 tick 100 00:04:10.157 tick 250 00:04:10.157 tick 100 00:04:10.157 tick 100 00:04:10.157 test_end 00:04:10.157 00:04:10.157 real 0m2.032s 00:04:10.157 user 0m1.218s 00:04:10.157 sys 0m0.812s 00:04:10.157 19:04:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:10.157 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.157 ************************************ 00:04:10.157 END TEST event_reactor 00:04:10.157 ************************************ 00:04:10.414 19:04:47 -- event/event.sh@47 -- # run_test event_reactor_perf /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:10.414 19:04:47 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:04:10.414 19:04:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:10.414 19:04:47 -- common/autotest_common.sh@10 -- # set +x 00:04:10.414 ************************************ 00:04:10.414 START TEST event_reactor_perf 00:04:10.414 ************************************ 00:04:10.414 19:04:47 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:10.414 [2024-02-14 19:04:47.604770] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:04:10.414 [2024-02-14 19:04:47.605007] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:10.980 EAL: TSC is not safe to use in SMP mode 00:04:10.980 EAL: TSC is not invariant 00:04:10.980 [2024-02-14 19:04:48.368484] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.239 [2024-02-14 19:04:48.484797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.614 test_start 00:04:12.615 test_end 00:04:12.615 Performance: 4169761 events per second 00:04:12.615 00:04:12.615 real 0m2.031s 00:04:12.615 user 0m1.219s 00:04:12.615 sys 0m0.810s 00:04:12.615 19:04:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:12.615 19:04:49 -- common/autotest_common.sh@10 -- # set +x 00:04:12.615 ************************************ 00:04:12.615 END TEST event_reactor_perf 00:04:12.615 ************************************ 00:04:12.615 19:04:49 -- event/event.sh@49 -- # uname -s 00:04:12.615 19:04:49 -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:04:12.615 00:04:12.615 real 0m6.428s 00:04:12.615 user 0m6.817s 00:04:12.615 sys 0m2.679s 00:04:12.615 19:04:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:12.615 19:04:49 -- common/autotest_common.sh@10 -- # set +x 00:04:12.615 ************************************ 00:04:12.615 END TEST event 00:04:12.615 ************************************ 00:04:12.615 19:04:49 -- spdk/autotest.sh@188 -- # run_test thread /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:12.615 19:04:49 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:12.615 19:04:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:12.615 19:04:49 -- common/autotest_common.sh@10 -- # set +x 00:04:12.615 ************************************ 00:04:12.615 START TEST thread 00:04:12.615 ************************************ 00:04:12.615 19:04:49 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:12.615 * Looking for test storage... 00:04:12.615 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/thread 00:04:12.615 19:04:49 -- thread/thread.sh@11 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:12.615 19:04:49 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:04:12.615 19:04:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:12.615 19:04:49 -- common/autotest_common.sh@10 -- # set +x 00:04:12.615 ************************************ 00:04:12.615 START TEST thread_poller_perf 00:04:12.615 ************************************ 00:04:12.615 19:04:49 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:12.615 [2024-02-14 19:04:49.906975] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:12.615 [2024-02-14 19:04:49.907261] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:13.550 EAL: TSC is not safe to use in SMP mode 00:04:13.550 EAL: TSC is not invariant 00:04:13.550 [2024-02-14 19:04:50.688153] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.550 [2024-02-14 19:04:50.804991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.550 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:14.930 ====================================== 00:04:14.930 busy:2102462706 (cyc) 00:04:14.930 total_run_count: 6410000 00:04:14.930 tsc_hz: 2100001353 (cyc) 00:04:14.930 ====================================== 00:04:14.930 poller_cost: 327 (cyc), 155 (nsec) 00:04:14.930 00:04:14.930 real 0m2.051s 00:04:14.930 user 0m1.202s 00:04:14.930 sys 0m0.846s 00:04:14.930 19:04:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:14.930 19:04:51 -- common/autotest_common.sh@10 -- # set +x 00:04:14.930 ************************************ 00:04:14.930 END TEST thread_poller_perf 00:04:14.930 ************************************ 00:04:14.930 19:04:51 -- thread/thread.sh@12 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:14.930 19:04:51 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:04:14.930 19:04:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:14.930 19:04:51 -- common/autotest_common.sh@10 -- # set +x 00:04:14.930 ************************************ 00:04:14.930 START TEST thread_poller_perf 00:04:14.930 ************************************ 00:04:14.930 19:04:51 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:14.930 [2024-02-14 19:04:52.004039] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:14.930 [2024-02-14 19:04:52.004276] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:15.499 EAL: TSC is not safe to use in SMP mode 00:04:15.499 EAL: TSC is not invariant 00:04:15.499 [2024-02-14 19:04:52.769620] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.499 [2024-02-14 19:04:52.884364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.499 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:16.885 ====================================== 00:04:16.885 busy:2101816904 (cyc) 00:04:16.885 total_run_count: 87124000 00:04:16.885 tsc_hz: 2100001353 (cyc) 00:04:16.885 ====================================== 00:04:16.885 poller_cost: 24 (cyc), 11 (nsec) 00:04:16.885 00:04:16.885 real 0m2.032s 00:04:16.885 user 0m1.205s 00:04:16.885 sys 0m0.819s 00:04:16.885 19:04:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:16.885 19:04:54 -- common/autotest_common.sh@10 -- # set +x 00:04:16.885 ************************************ 00:04:16.885 END TEST thread_poller_perf 00:04:16.885 ************************************ 00:04:16.885 19:04:54 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:04:16.885 19:04:54 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:16.885 19:04:54 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:16.885 19:04:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:16.885 19:04:54 -- common/autotest_common.sh@10 -- # set +x 00:04:16.885 ************************************ 00:04:16.885 START TEST thread_spdk_lock 00:04:16.885 ************************************ 00:04:16.885 19:04:54 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:16.885 [2024-02-14 19:04:54.081337] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:04:16.885 [2024-02-14 19:04:54.081580] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:17.453 EAL: TSC is not safe to use in SMP mode 00:04:17.453 EAL: TSC is not invariant 00:04:17.453 [2024-02-14 19:04:54.852551] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.712 [2024-02-14 19:04:54.966478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.712 [2024-02-14 19:04:54.966475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.281 [2024-02-14 19:04:55.407217] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:18.281 [2024-02-14 19:04:55.407284] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:04:18.281 [2024-02-14 19:04:55.407293] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x3105a0 00:04:18.281 [2024-02-14 19:04:55.407796] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:18.281 [2024-02-14 19:04:55.407896] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:18.281 [2024-02-14 19:04:55.407904] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:18.281 Starting test contend 00:04:18.281 Worker Delay Wait us Hold us Total us 00:04:18.281 0 3 263845 163142 426988 00:04:18.281 1 5 163488 265009 428498 00:04:18.281 PASS test contend 00:04:18.281 Starting test hold_by_poller 00:04:18.281 PASS test hold_by_poller 00:04:18.281 Starting test hold_by_message 00:04:18.281 PASS test hold_by_message 00:04:18.281 /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:04:18.281 100014 assertions passed 00:04:18.281 0 assertions failed 00:04:18.281 00:04:18.281 real 0m1.473s 00:04:18.281 user 0m1.057s 00:04:18.281 sys 0m0.834s 00:04:18.281 19:04:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.281 19:04:55 -- common/autotest_common.sh@10 -- # set +x 00:04:18.281 ************************************ 00:04:18.281 END TEST thread_spdk_lock 00:04:18.281 ************************************ 00:04:18.281 00:04:18.281 real 0m5.866s 00:04:18.281 user 0m3.572s 00:04:18.281 sys 0m2.799s 00:04:18.281 19:04:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.281 19:04:55 -- common/autotest_common.sh@10 -- # set +x 00:04:18.281 ************************************ 00:04:18.281 END TEST thread 00:04:18.281 ************************************ 00:04:18.281 19:04:55 -- spdk/autotest.sh@189 -- # run_test accel /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:18.281 19:04:55 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:18.281 19:04:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:18.281 19:04:55 -- common/autotest_common.sh@10 -- # set +x 00:04:18.281 ************************************ 00:04:18.281 START 
TEST accel 00:04:18.281 ************************************ 00:04:18.281 19:04:55 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:18.540 * Looking for test storage... 00:04:18.540 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:04:18.540 19:04:55 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:04:18.540 19:04:55 -- accel/accel.sh@74 -- # get_expected_opcs 00:04:18.540 19:04:55 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:18.540 19:04:55 -- accel/accel.sh@59 -- # spdk_tgt_pid=47249 00:04:18.540 19:04:55 -- accel/accel.sh@60 -- # waitforlisten 47249 00:04:18.540 19:04:55 -- common/autotest_common.sh@817 -- # '[' -z 47249 ']' 00:04:18.540 19:04:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.540 19:04:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:18.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.540 19:04:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.540 19:04:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:18.540 19:04:55 -- common/autotest_common.sh@10 -- # set +x 00:04:18.540 19:04:55 -- accel/accel.sh@58 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.7uK4rn 00:04:18.540 [2024-02-14 19:04:55.822318] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:18.540 [2024-02-14 19:04:55.822601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:19.478 EAL: TSC is not safe to use in SMP mode 00:04:19.478 EAL: TSC is not invariant 00:04:19.478 [2024-02-14 19:04:56.628922] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.478 [2024-02-14 19:04:56.766601] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:19.478 [2024-02-14 19:04:56.766751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.478 [2024-02-14 19:04:56.766773] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:19.478 19:04:56 -- accel/accel.sh@58 -- # build_accel_config 00:04:19.478 19:04:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:19.478 19:04:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:19.478 19:04:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:19.478 19:04:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:19.478 19:04:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:19.478 19:04:56 -- accel/accel.sh@41 -- # local IFS=, 00:04:19.478 19:04:56 -- accel/accel.sh@42 -- # jq -r . 00:04:20.412 19:04:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:20.412 19:04:57 -- common/autotest_common.sh@850 -- # return 0 00:04:20.412 19:04:57 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:20.412 19:04:57 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:04:20.412 19:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:20.412 19:04:57 -- common/autotest_common.sh@10 -- # set +x 00:04:20.412 19:04:57 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:20.412 19:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # IFS== 00:04:20.412 19:04:57 -- accel/accel.sh@64 -- # read -r opc module 00:04:20.412 19:04:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:20.412 19:04:57 -- accel/accel.sh@67 -- # killprocess 47249 00:04:20.412 19:04:57 -- common/autotest_common.sh@924 -- # '[' -z 47249 ']' 00:04:20.412 19:04:57 -- common/autotest_common.sh@928 -- # kill -0 47249 00:04:20.412 19:04:57 -- common/autotest_common.sh@929 -- # uname 00:04:20.412 19:04:57 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:04:20.412 19:04:57 -- common/autotest_common.sh@932 -- # ps -c -o command 47249 00:04:20.412 19:04:57 -- common/autotest_common.sh@932 -- # tail -1 00:04:20.412 19:04:57 -- common/autotest_common.sh@932 -- # process_name=spdk_tgt 00:04:20.412 killing process with pid 47249 00:04:20.412 19:04:57 -- common/autotest_common.sh@934 -- # '[' spdk_tgt = sudo ']' 00:04:20.412 19:04:57 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 47249' 00:04:20.412 19:04:57 -- common/autotest_common.sh@943 -- # kill 47249 00:04:20.412 [2024-02-14 19:04:57.500047] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:20.412 19:04:57 -- common/autotest_common.sh@948 -- # wait 47249 00:04:20.694 19:04:57 -- accel/accel.sh@68 -- # trap - ERR 00:04:20.694 19:04:57 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:04:20.694 19:04:57 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:04:20.694 19:04:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:20.694 19:04:57 -- common/autotest_common.sh@10 -- # set +x 00:04:20.694 19:04:57 -- common/autotest_common.sh@1102 -- # accel_perf -h 00:04:20.694 19:04:57 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.pTqOPh -h 00:04:20.694 19:04:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:20.694 19:04:57 -- common/autotest_common.sh@10 -- # set +x 00:04:20.694 19:04:57 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:20.694 19:04:57 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:04:20.695 19:04:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:20.695 19:04:57 -- common/autotest_common.sh@10 -- # set +x 00:04:20.695 ************************************ 00:04:20.695 START TEST accel_missing_filename 00:04:20.695 ************************************ 00:04:20.695 19:04:57 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w compress 00:04:20.695 19:04:57 -- common/autotest_common.sh@638 -- # local es=0 00:04:20.695 19:04:57 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:20.695 19:04:57 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:20.695 19:04:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:20.695 19:04:57 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:20.695 19:04:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:20.695 19:04:57 -- common/autotest_common.sh@641 -- # 
accel_perf -t 1 -w compress 00:04:20.695 19:04:57 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.D2OAsW -t 1 -w compress 00:04:20.695 [2024-02-14 19:04:57.930687] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:20.695 [2024-02-14 19:04:57.930927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:21.289 EAL: TSC is not safe to use in SMP mode 00:04:21.289 EAL: TSC is not invariant 00:04:21.289 [2024-02-14 19:04:58.678046] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.548 [2024-02-14 19:04:58.791683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.548 [2024-02-14 19:04:58.791783] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:21.548 19:04:58 -- accel/accel.sh@12 -- # build_accel_config 00:04:21.548 19:04:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:21.548 19:04:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:21.548 19:04:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:21.548 19:04:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:21.548 19:04:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:21.548 19:04:58 -- accel/accel.sh@41 -- # local IFS=, 00:04:21.548 19:04:58 -- accel/accel.sh@42 -- # jq -r . 00:04:21.548 [2024-02-14 19:04:58.800908] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:21.548 [2024-02-14 19:04:58.800961] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:21.548 [2024-02-14 19:04:58.857000] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:04:21.808 A filename is required. 
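This failure is the expected outcome of TEST accel_missing_filename: the compress workload reads its input from a file passed with -l, so launching it without one has to abort with the error above. For contrast, a passing invocation supplies the file, as the compress_verify test below does (an illustrative sketch, not a command run at this point in the log):

# compress with the input file supplied via -l
/usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
    -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib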
00:04:21.808 19:04:59 -- common/autotest_common.sh@641 -- # es=234 00:04:21.808 19:04:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:21.808 19:04:59 -- common/autotest_common.sh@650 -- # es=106 00:04:21.808 19:04:59 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:21.808 19:04:59 -- common/autotest_common.sh@658 -- # es=1 00:04:21.808 19:04:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:21.808 00:04:21.808 real 0m1.081s 00:04:21.808 user 0m0.277s 00:04:21.808 sys 0m0.805s 00:04:21.808 19:04:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:21.808 19:04:59 -- common/autotest_common.sh@10 -- # set +x 00:04:21.808 ************************************ 00:04:21.808 END TEST accel_missing_filename 00:04:21.808 ************************************ 00:04:21.808 19:04:59 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:21.808 19:04:59 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:04:21.808 19:04:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:21.808 19:04:59 -- common/autotest_common.sh@10 -- # set +x 00:04:21.808 ************************************ 00:04:21.808 START TEST accel_compress_verify 00:04:21.808 ************************************ 00:04:21.808 19:04:59 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:21.808 19:04:59 -- common/autotest_common.sh@638 -- # local es=0 00:04:21.808 19:04:59 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:21.808 19:04:59 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:21.808 19:04:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:21.808 19:04:59 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:21.808 19:04:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:21.808 19:04:59 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:21.808 19:04:59 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.yOsSmO -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:21.808 [2024-02-14 19:04:59.049539] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:04:21.808 [2024-02-14 19:04:59.049737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:22.745 EAL: TSC is not safe to use in SMP mode 00:04:22.745 EAL: TSC is not invariant 00:04:22.745 [2024-02-14 19:04:59.837312] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.745 [2024-02-14 19:04:59.950954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.745 [2024-02-14 19:04:59.951045] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:22.745 19:04:59 -- accel/accel.sh@12 -- # build_accel_config 00:04:22.745 19:04:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:22.745 19:04:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:22.745 19:04:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:22.745 19:04:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:22.745 19:04:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:22.745 19:04:59 -- accel/accel.sh@41 -- # local IFS=, 00:04:22.745 19:04:59 -- accel/accel.sh@42 -- # jq -r . 00:04:22.745 [2024-02-14 19:04:59.966189] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:22.745 [2024-02-14 19:04:59.966251] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:22.745 [2024-02-14 19:05:00.022618] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:04:23.004 00:04:23.004 Compression does not support the verify option, aborting. 00:04:23.004 19:05:00 -- common/autotest_common.sh@641 -- # es=211 00:04:23.004 19:05:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:23.004 19:05:00 -- common/autotest_common.sh@650 -- # es=83 00:04:23.004 19:05:00 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:23.004 19:05:00 -- common/autotest_common.sh@658 -- # es=1 00:04:23.004 19:05:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:23.004 00:04:23.004 real 0m1.127s 00:04:23.004 user 0m0.275s 00:04:23.004 sys 0m0.855s 00:04:23.004 19:05:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.004 19:05:00 -- common/autotest_common.sh@10 -- # set +x 00:04:23.004 ************************************ 00:04:23.004 END TEST accel_compress_verify 00:04:23.004 ************************************ 00:04:23.004 19:05:00 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:23.004 19:05:00 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:04:23.004 19:05:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:23.004 19:05:00 -- common/autotest_common.sh@10 -- # set +x 00:04:23.004 ************************************ 00:04:23.004 START TEST accel_wrong_workload 00:04:23.004 ************************************ 00:04:23.004 19:05:00 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w foobar 00:04:23.004 19:05:00 -- common/autotest_common.sh@638 -- # local es=0 00:04:23.004 19:05:00 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:23.004 19:05:00 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:23.004 19:05:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:23.004 19:05:00 -- common/autotest_common.sh@630 -- # type -t accel_perf 
00:04:23.004 19:05:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:23.004 19:05:00 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:04:23.004 19:05:00 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.w4Zk2c -t 1 -w foobar 00:04:23.004 Unsupported workload type: foobar 00:04:23.004 [2024-02-14 19:05:00.216724] app.c:1290:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:23.004 accel_perf options: 00:04:23.004 [-h help message] 00:04:23.004 [-q queue depth per core] 00:04:23.004 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:23.004 [-T number of threads per core 00:04:23.004 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:23.004 [-t time in seconds] 00:04:23.004 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:23.004 [ dif_verify, , dif_generate, dif_generate_copy 00:04:23.004 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:23.004 [-l for compress/decompress workloads, name of uncompressed input file 00:04:23.004 [-S for crc32c workload, use this seed value (default 0) 00:04:23.004 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:23.004 [-f for fill workload, use this BYTE value (default 255) 00:04:23.004 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:23.004 [-y verify result if this switch is on] 00:04:23.004 [-a tasks to allocate per core (default: same value as -q)] 00:04:23.004 Can be used to spread operations across a wider range of memory. 
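The usage dump above lists every -w value accel_perf accepts; TEST accel_wrong_workload deliberately passes the unknown type foobar and expects exactly this rejection. Any of the listed workloads would be accepted instead, for example the crc32c run that follows later in this log (an illustrative sketch, not run here):

# crc32c with a seed of 32 and result verification, one of the accepted -w values
/usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y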
00:04:23.004 19:05:00 -- common/autotest_common.sh@641 -- # es=1 00:04:23.004 19:05:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:23.004 19:05:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:23.004 19:05:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:23.004 00:04:23.004 real 0m0.011s 00:04:23.004 user 0m0.002s 00:04:23.004 sys 0m0.008s 00:04:23.004 ************************************ 00:04:23.004 END TEST accel_wrong_workload 00:04:23.004 ************************************ 00:04:23.004 19:05:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.004 19:05:00 -- common/autotest_common.sh@10 -- # set +x 00:04:23.004 19:05:00 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:23.004 19:05:00 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:04:23.004 19:05:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:23.004 19:05:00 -- common/autotest_common.sh@10 -- # set +x 00:04:23.004 ************************************ 00:04:23.004 START TEST accel_negative_buffers 00:04:23.004 ************************************ 00:04:23.004 19:05:00 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:23.004 19:05:00 -- common/autotest_common.sh@638 -- # local es=0 00:04:23.004 19:05:00 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:23.004 19:05:00 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:04:23.004 19:05:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:23.004 19:05:00 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:04:23.005 19:05:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:23.005 19:05:00 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:04:23.005 19:05:00 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.hszSCd -t 1 -w xor -y -x -1 00:04:23.005 -x option must be non-negative. 00:04:23.005 [2024-02-14 19:05:00.272329] app.c:1290:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:23.005 accel_perf options: 00:04:23.005 [-h help message] 00:04:23.005 [-q queue depth per core] 00:04:23.005 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:23.005 [-T number of threads per core 00:04:23.005 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:23.005 [-t time in seconds] 00:04:23.005 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:23.005 [ dif_verify, , dif_generate, dif_generate_copy 00:04:23.005 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:23.005 [-l for compress/decompress workloads, name of uncompressed input file 00:04:23.005 [-S for crc32c workload, use this seed value (default 0) 00:04:23.005 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:23.005 [-f for fill workload, use this BYTE value (default 255) 00:04:23.005 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:23.005 [-y verify result if this switch is on] 00:04:23.005 [-a tasks to allocate per core (default: same value as -q)] 00:04:23.005 Can be used to spread operations across a wider range of memory. 
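Here the same usage text is printed because TEST accel_negative_buffers passes -x -1; per the help above, -x selects the number of xor source buffers and must be at least 2. A valid xor invocation along those lines (an illustrative sketch, not run here):

# xor with the minimum of two source buffers and verification enabled
/usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2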
00:04:23.005 19:05:00 -- common/autotest_common.sh@641 -- # es=1 00:04:23.005 19:05:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:23.005 19:05:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:23.005 19:05:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:23.005 ************************************ 00:04:23.005 END TEST accel_negative_buffers 00:04:23.005 ************************************ 00:04:23.005 00:04:23.005 real 0m0.012s 00:04:23.005 user 0m0.002s 00:04:23.005 sys 0m0.009s 00:04:23.005 19:05:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.005 19:05:00 -- common/autotest_common.sh@10 -- # set +x 00:04:23.005 19:05:00 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:23.005 19:05:00 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:04:23.005 19:05:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:23.005 19:05:00 -- common/autotest_common.sh@10 -- # set +x 00:04:23.005 ************************************ 00:04:23.005 START TEST accel_crc32c 00:04:23.005 ************************************ 00:04:23.005 19:05:00 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:23.005 19:05:00 -- accel/accel.sh@16 -- # local accel_opc 00:04:23.005 19:05:00 -- accel/accel.sh@17 -- # local accel_module 00:04:23.005 19:05:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:23.005 19:05:00 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.xkGLde -t 1 -w crc32c -S 32 -y 00:04:23.005 [2024-02-14 19:05:00.324884] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:23.005 [2024-02-14 19:05:00.325107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:23.941 EAL: TSC is not safe to use in SMP mode 00:04:23.941 EAL: TSC is not invariant 00:04:23.941 [2024-02-14 19:05:01.092547] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.941 [2024-02-14 19:05:01.208332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.942 [2024-02-14 19:05:01.208420] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:23.942 19:05:01 -- accel/accel.sh@12 -- # build_accel_config 00:04:23.942 19:05:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:23.942 19:05:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:23.942 19:05:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:23.942 19:05:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:23.942 19:05:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:23.942 19:05:01 -- accel/accel.sh@41 -- # local IFS=, 00:04:23.942 19:05:01 -- accel/accel.sh@42 -- # jq -r . 
00:04:24.878 [2024-02-14 19:05:02.222722] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:25.136 19:05:02 -- accel/accel.sh@18 -- # out=' 00:04:25.136 SPDK Configuration: 00:04:25.136 Core mask: 0x1 00:04:25.136 00:04:25.136 Accel Perf Configuration: 00:04:25.136 Workload Type: crc32c 00:04:25.136 CRC-32C seed: 32 00:04:25.136 Transfer size: 4096 bytes 00:04:25.136 Vector count 1 00:04:25.136 Module: software 00:04:25.136 Queue depth: 32 00:04:25.136 Allocate depth: 32 00:04:25.136 # threads/core: 1 00:04:25.136 Run time: 1 seconds 00:04:25.136 Verify: Yes 00:04:25.136 00:04:25.136 Running for 1 seconds... 00:04:25.136 00:04:25.136 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:25.136 ------------------------------------------------------------------------------------ 00:04:25.136 0,0 2301376/s 8989 MiB/s 0 0 00:04:25.136 ==================================================================================== 00:04:25.136 Total 2301376/s 8989 MiB/s 0 0' 00:04:25.136 19:05:02 -- accel/accel.sh@20 -- # IFS=: 00:04:25.136 19:05:02 -- accel/accel.sh@20 -- # read -r var val 00:04:25.136 19:05:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:25.137 19:05:02 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.pXPubi -t 1 -w crc32c -S 32 -y 00:04:25.137 [2024-02-14 19:05:02.436849] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:25.137 [2024-02-14 19:05:02.437205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:26.072 EAL: TSC is not safe to use in SMP mode 00:04:26.072 EAL: TSC is not invariant 00:04:26.072 [2024-02-14 19:05:03.195520] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.072 [2024-02-14 19:05:03.315536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.072 [2024-02-14 19:05:03.315650] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:26.072 19:05:03 -- accel/accel.sh@12 -- # build_accel_config 00:04:26.072 19:05:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:26.072 19:05:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:26.072 19:05:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:26.072 19:05:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:26.072 19:05:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:26.072 19:05:03 -- accel/accel.sh@41 -- # local IFS=, 00:04:26.072 19:05:03 -- accel/accel.sh@42 -- # jq -r . 
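The Bandwidth column in the crc32c summary above is the transfer rate multiplied by the 4096-byte transfer size. A quick check with shell arithmetic (illustrative, not part of the job output):

# 2301376 transfers/s x 4096 bytes, expressed in MiB/s
echo $((2301376 * 4096 / 1024 / 1024))    # 8989 MiB/s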
00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val= 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val= 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val=0x1 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val= 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val= 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val=crc32c 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val=32 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val= 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val=software 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@23 -- # accel_module=software 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val=32 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val=32 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val=1 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val=Yes 00:04:26.072 19:05:03 -- 
accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val= 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:26.072 19:05:03 -- accel/accel.sh@21 -- # val= 00:04:26.072 19:05:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # IFS=: 00:04:26.072 19:05:03 -- accel/accel.sh@20 -- # read -r var val 00:04:27.046 [2024-02-14 19:05:04.328260] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:27.305 19:05:04 -- accel/accel.sh@21 -- # val= 00:04:27.305 19:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:27.305 19:05:04 -- accel/accel.sh@20 -- # IFS=: 00:04:27.305 19:05:04 -- accel/accel.sh@20 -- # read -r var val 00:04:27.305 19:05:04 -- accel/accel.sh@21 -- # val= 00:04:27.305 19:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:27.305 19:05:04 -- accel/accel.sh@20 -- # IFS=: 00:04:27.305 19:05:04 -- accel/accel.sh@20 -- # read -r var val 00:04:27.305 19:05:04 -- accel/accel.sh@21 -- # val= 00:04:27.305 19:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:27.305 19:05:04 -- accel/accel.sh@20 -- # IFS=: 00:04:27.305 19:05:04 -- accel/accel.sh@20 -- # read -r var val 00:04:27.305 19:05:04 -- accel/accel.sh@21 -- # val= 00:04:27.305 19:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:27.305 19:05:04 -- accel/accel.sh@20 -- # IFS=: 00:04:27.305 19:05:04 -- accel/accel.sh@20 -- # read -r var val 00:04:27.305 19:05:04 -- accel/accel.sh@21 -- # val= 00:04:27.305 19:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:27.305 19:05:04 -- accel/accel.sh@20 -- # IFS=: 00:04:27.305 19:05:04 -- accel/accel.sh@20 -- # read -r var val 00:04:27.305 19:05:04 -- accel/accel.sh@21 -- # val= 00:04:27.305 19:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:04:27.305 19:05:04 -- accel/accel.sh@20 -- # IFS=: 00:04:27.305 19:05:04 -- accel/accel.sh@20 -- # read -r var val 00:04:27.305 19:05:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:27.305 19:05:04 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:04:27.305 19:05:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:27.305 00:04:27.305 real 0m4.214s 00:04:27.305 user 0m2.551s 00:04:27.305 sys 0m1.677s 00:04:27.305 19:05:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:27.305 19:05:04 -- common/autotest_common.sh@10 -- # set +x 00:04:27.305 ************************************ 00:04:27.305 END TEST accel_crc32c 00:04:27.305 ************************************ 00:04:27.305 19:05:04 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:27.306 19:05:04 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:04:27.306 19:05:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:27.306 19:05:04 -- common/autotest_common.sh@10 -- # set +x 00:04:27.306 ************************************ 00:04:27.306 START TEST accel_crc32c_C2 00:04:27.306 ************************************ 00:04:27.306 19:05:04 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:27.306 19:05:04 -- accel/accel.sh@16 -- # local accel_opc 00:04:27.306 19:05:04 -- accel/accel.sh@17 -- # local accel_module 00:04:27.306 
19:05:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:27.306 19:05:04 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.BYGz85 -t 1 -w crc32c -y -C 2 00:04:27.306 [2024-02-14 19:05:04.585165] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:27.306 [2024-02-14 19:05:04.585498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:28.242 EAL: TSC is not safe to use in SMP mode 00:04:28.242 EAL: TSC is not invariant 00:04:28.242 [2024-02-14 19:05:05.317299] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.242 [2024-02-14 19:05:05.448299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.242 [2024-02-14 19:05:05.448403] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:28.242 19:05:05 -- accel/accel.sh@12 -- # build_accel_config 00:04:28.242 19:05:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:28.242 19:05:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:28.242 19:05:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:28.242 19:05:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:28.242 19:05:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:28.242 19:05:05 -- accel/accel.sh@41 -- # local IFS=, 00:04:28.242 19:05:05 -- accel/accel.sh@42 -- # jq -r . 00:04:29.176 [2024-02-14 19:05:06.461210] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:29.435 19:05:06 -- accel/accel.sh@18 -- # out=' 00:04:29.435 SPDK Configuration: 00:04:29.435 Core mask: 0x1 00:04:29.435 00:04:29.435 Accel Perf Configuration: 00:04:29.435 Workload Type: crc32c 00:04:29.435 CRC-32C seed: 0 00:04:29.435 Transfer size: 4096 bytes 00:04:29.435 Vector count 2 00:04:29.435 Module: software 00:04:29.435 Queue depth: 32 00:04:29.435 Allocate depth: 32 00:04:29.435 # threads/core: 1 00:04:29.435 Run time: 1 seconds 00:04:29.435 Verify: Yes 00:04:29.435 00:04:29.435 Running for 1 seconds... 00:04:29.435 00:04:29.435 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:29.435 ------------------------------------------------------------------------------------ 00:04:29.435 0,0 1328256/s 10377 MiB/s 0 0 00:04:29.435 ==================================================================================== 00:04:29.435 Total 1328256/s 5188 MiB/s 0 0' 00:04:29.435 19:05:06 -- accel/accel.sh@20 -- # IFS=: 00:04:29.435 19:05:06 -- accel/accel.sh@20 -- # read -r var val 00:04:29.435 19:05:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:29.435 19:05:06 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.WyYl5G -t 1 -w crc32c -y -C 2 00:04:29.435 [2024-02-14 19:05:06.673013] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:04:29.435 [2024-02-14 19:05:06.673319] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:30.372 EAL: TSC is not safe to use in SMP mode 00:04:30.372 EAL: TSC is not invariant 00:04:30.372 [2024-02-14 19:05:07.435912] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.373 [2024-02-14 19:05:07.552585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.373 [2024-02-14 19:05:07.552681] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:30.373 19:05:07 -- accel/accel.sh@12 -- # build_accel_config 00:04:30.373 19:05:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:30.373 19:05:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:30.373 19:05:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:30.373 19:05:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:30.373 19:05:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:30.373 19:05:07 -- accel/accel.sh@41 -- # local IFS=, 00:04:30.373 19:05:07 -- accel/accel.sh@42 -- # jq -r . 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val= 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val= 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val=0x1 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val= 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val= 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val=crc32c 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val=0 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val= 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val=software 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 
00:04:30.373 19:05:07 -- accel/accel.sh@23 -- # accel_module=software 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val=32 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val=32 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val=1 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val=Yes 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val= 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:30.373 19:05:07 -- accel/accel.sh@21 -- # val= 00:04:30.373 19:05:07 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # IFS=: 00:04:30.373 19:05:07 -- accel/accel.sh@20 -- # read -r var val 00:04:31.309 [2024-02-14 19:05:08.566471] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:31.567 19:05:08 -- accel/accel.sh@21 -- # val= 00:04:31.567 19:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:31.567 19:05:08 -- accel/accel.sh@20 -- # IFS=: 00:04:31.567 19:05:08 -- accel/accel.sh@20 -- # read -r var val 00:04:31.567 19:05:08 -- accel/accel.sh@21 -- # val= 00:04:31.567 19:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:31.567 19:05:08 -- accel/accel.sh@20 -- # IFS=: 00:04:31.568 19:05:08 -- accel/accel.sh@20 -- # read -r var val 00:04:31.568 19:05:08 -- accel/accel.sh@21 -- # val= 00:04:31.568 19:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:31.568 19:05:08 -- accel/accel.sh@20 -- # IFS=: 00:04:31.568 19:05:08 -- accel/accel.sh@20 -- # read -r var val 00:04:31.568 19:05:08 -- accel/accel.sh@21 -- # val= 00:04:31.568 19:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:31.568 19:05:08 -- accel/accel.sh@20 -- # IFS=: 00:04:31.568 19:05:08 -- accel/accel.sh@20 -- # read -r var val 00:04:31.568 19:05:08 -- accel/accel.sh@21 -- # val= 00:04:31.568 19:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:31.568 19:05:08 -- accel/accel.sh@20 -- # IFS=: 00:04:31.568 19:05:08 -- accel/accel.sh@20 -- # read -r var val 00:04:31.568 19:05:08 -- accel/accel.sh@21 -- # val= 00:04:31.568 19:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:04:31.568 19:05:08 -- accel/accel.sh@20 -- # IFS=: 00:04:31.568 19:05:08 -- accel/accel.sh@20 -- # read -r var val 00:04:31.568 19:05:08 -- accel/accel.sh@28 -- # [[ -n software ]] 
00:04:31.568 19:05:08 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:04:31.568 19:05:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:31.568 00:04:31.568 real 0m4.198s 00:04:31.568 user 0m2.571s 00:04:31.568 sys 0m1.636s 00:04:31.568 19:05:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.568 19:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:31.568 ************************************ 00:04:31.568 END TEST accel_crc32c_C2 00:04:31.568 ************************************ 00:04:31.568 19:05:08 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:31.568 19:05:08 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:04:31.568 19:05:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:31.568 19:05:08 -- common/autotest_common.sh@10 -- # set +x 00:04:31.568 ************************************ 00:04:31.568 START TEST accel_copy 00:04:31.568 ************************************ 00:04:31.568 19:05:08 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy -y 00:04:31.568 19:05:08 -- accel/accel.sh@16 -- # local accel_opc 00:04:31.568 19:05:08 -- accel/accel.sh@17 -- # local accel_module 00:04:31.568 19:05:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:04:31.568 19:05:08 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ZXOGvt -t 1 -w copy -y 00:04:31.568 [2024-02-14 19:05:08.825153] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:31.568 [2024-02-14 19:05:08.825392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:32.503 EAL: TSC is not safe to use in SMP mode 00:04:32.503 EAL: TSC is not invariant 00:04:32.503 [2024-02-14 19:05:09.588329] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.503 [2024-02-14 19:05:09.704972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.503 [2024-02-14 19:05:09.705068] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:32.503 19:05:09 -- accel/accel.sh@12 -- # build_accel_config 00:04:32.503 19:05:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:32.503 19:05:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:32.503 19:05:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:32.503 19:05:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:32.503 19:05:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:32.503 19:05:09 -- accel/accel.sh@41 -- # local IFS=, 00:04:32.503 19:05:09 -- accel/accel.sh@42 -- # jq -r . 00:04:33.438 [2024-02-14 19:05:10.719792] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:33.696 19:05:10 -- accel/accel.sh@18 -- # out=' 00:04:33.696 SPDK Configuration: 00:04:33.696 Core mask: 0x1 00:04:33.696 00:04:33.696 Accel Perf Configuration: 00:04:33.696 Workload Type: copy 00:04:33.696 Transfer size: 4096 bytes 00:04:33.696 Vector count 1 00:04:33.696 Module: software 00:04:33.696 Queue depth: 32 00:04:33.697 Allocate depth: 32 00:04:33.697 # threads/core: 1 00:04:33.697 Run time: 1 seconds 00:04:33.697 Verify: Yes 00:04:33.697 00:04:33.697 Running for 1 seconds... 
00:04:33.697 00:04:33.697 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:33.697 ------------------------------------------------------------------------------------ 00:04:33.697 0,0 2161120/s 8441 MiB/s 0 0 00:04:33.697 ==================================================================================== 00:04:33.697 Total 2161120/s 8441 MiB/s 0 0' 00:04:33.697 19:05:10 -- accel/accel.sh@20 -- # IFS=: 00:04:33.697 19:05:10 -- accel/accel.sh@20 -- # read -r var val 00:04:33.697 19:05:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:33.697 19:05:10 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ALFjqT -t 1 -w copy -y 00:04:33.697 [2024-02-14 19:05:10.930310] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:33.697 [2024-02-14 19:05:10.930643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:34.632 EAL: TSC is not safe to use in SMP mode 00:04:34.632 EAL: TSC is not invariant 00:04:34.632 [2024-02-14 19:05:11.708430] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.632 [2024-02-14 19:05:11.824481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.632 [2024-02-14 19:05:11.824570] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:34.632 19:05:11 -- accel/accel.sh@12 -- # build_accel_config 00:04:34.632 19:05:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:34.632 19:05:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:34.632 19:05:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:34.632 19:05:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:34.632 19:05:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:34.632 19:05:11 -- accel/accel.sh@41 -- # local IFS=, 00:04:34.632 19:05:11 -- accel/accel.sh@42 -- # jq -r . 
00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val= 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val= 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val=0x1 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val= 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val= 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val=copy 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.632 19:05:11 -- accel/accel.sh@24 -- # accel_opc=copy 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val= 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val=software 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.632 19:05:11 -- accel/accel.sh@23 -- # accel_module=software 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val=32 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val=32 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val=1 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.632 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.632 19:05:11 -- accel/accel.sh@21 -- # val=Yes 00:04:34.632 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.633 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.633 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.633 19:05:11 -- accel/accel.sh@21 -- # val= 00:04:34.633 19:05:11 -- accel/accel.sh@22 -- 
# case "$var" in 00:04:34.633 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.633 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:34.633 19:05:11 -- accel/accel.sh@21 -- # val= 00:04:34.633 19:05:11 -- accel/accel.sh@22 -- # case "$var" in 00:04:34.633 19:05:11 -- accel/accel.sh@20 -- # IFS=: 00:04:34.633 19:05:11 -- accel/accel.sh@20 -- # read -r var val 00:04:35.569 [2024-02-14 19:05:12.840034] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:35.828 19:05:13 -- accel/accel.sh@21 -- # val= 00:04:35.828 19:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.828 19:05:13 -- accel/accel.sh@20 -- # IFS=: 00:04:35.828 19:05:13 -- accel/accel.sh@20 -- # read -r var val 00:04:35.828 19:05:13 -- accel/accel.sh@21 -- # val= 00:04:35.828 19:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.828 19:05:13 -- accel/accel.sh@20 -- # IFS=: 00:04:35.828 19:05:13 -- accel/accel.sh@20 -- # read -r var val 00:04:35.828 19:05:13 -- accel/accel.sh@21 -- # val= 00:04:35.828 19:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.828 19:05:13 -- accel/accel.sh@20 -- # IFS=: 00:04:35.828 19:05:13 -- accel/accel.sh@20 -- # read -r var val 00:04:35.828 19:05:13 -- accel/accel.sh@21 -- # val= 00:04:35.828 19:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.828 19:05:13 -- accel/accel.sh@20 -- # IFS=: 00:04:35.828 19:05:13 -- accel/accel.sh@20 -- # read -r var val 00:04:35.828 19:05:13 -- accel/accel.sh@21 -- # val= 00:04:35.828 19:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.828 19:05:13 -- accel/accel.sh@20 -- # IFS=: 00:04:35.828 19:05:13 -- accel/accel.sh@20 -- # read -r var val 00:04:35.828 19:05:13 -- accel/accel.sh@21 -- # val= 00:04:35.828 19:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.828 19:05:13 -- accel/accel.sh@20 -- # IFS=: 00:04:35.828 19:05:13 -- accel/accel.sh@20 -- # read -r var val 00:04:35.828 19:05:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:35.828 19:05:13 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:04:35.828 19:05:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:35.828 00:04:35.828 real 0m4.227s 00:04:35.828 user 0m2.546s 00:04:35.828 sys 0m1.692s 00:04:35.828 19:05:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.828 19:05:13 -- common/autotest_common.sh@10 -- # set +x 00:04:35.828 ************************************ 00:04:35.828 END TEST accel_copy 00:04:35.828 ************************************ 00:04:35.828 19:05:13 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:35.828 19:05:13 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:04:35.828 19:05:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:35.828 19:05:13 -- common/autotest_common.sh@10 -- # set +x 00:04:35.828 ************************************ 00:04:35.828 START TEST accel_fill 00:04:35.828 ************************************ 00:04:35.828 19:05:13 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:35.828 19:05:13 -- accel/accel.sh@16 -- # local accel_opc 00:04:35.828 19:05:13 -- accel/accel.sh@17 -- # local accel_module 00:04:35.828 19:05:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:35.828 19:05:13 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.yQk6aI -t 1 -w fill -f 128 -q 64 -a 64 -y 
00:04:35.828 [2024-02-14 19:05:13.100181] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:35.828 [2024-02-14 19:05:13.100648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:36.762 EAL: TSC is not safe to use in SMP mode 00:04:36.762 EAL: TSC is not invariant 00:04:36.762 [2024-02-14 19:05:13.858351] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.762 [2024-02-14 19:05:13.973604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.762 [2024-02-14 19:05:13.973688] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:36.762 19:05:13 -- accel/accel.sh@12 -- # build_accel_config 00:04:36.762 19:05:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:36.762 19:05:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:36.762 19:05:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:36.762 19:05:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:36.762 19:05:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:36.762 19:05:13 -- accel/accel.sh@41 -- # local IFS=, 00:04:36.762 19:05:13 -- accel/accel.sh@42 -- # jq -r . 00:04:37.698 [2024-02-14 19:05:14.985610] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:37.956 19:05:15 -- accel/accel.sh@18 -- # out=' 00:04:37.956 SPDK Configuration: 00:04:37.956 Core mask: 0x1 00:04:37.956 00:04:37.956 Accel Perf Configuration: 00:04:37.956 Workload Type: fill 00:04:37.956 Fill pattern: 0x80 00:04:37.956 Transfer size: 4096 bytes 00:04:37.956 Vector count 1 00:04:37.956 Module: software 00:04:37.956 Queue depth: 64 00:04:37.957 Allocate depth: 64 00:04:37.957 # threads/core: 1 00:04:37.957 Run time: 1 seconds 00:04:37.957 Verify: Yes 00:04:37.957 00:04:37.957 Running for 1 seconds... 00:04:37.957 00:04:37.957 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:37.957 ------------------------------------------------------------------------------------ 00:04:37.957 0,0 2481536/s 9693 MiB/s 0 0 00:04:37.957 ==================================================================================== 00:04:37.957 Total 2481536/s 9693 MiB/s 0 0' 00:04:37.957 19:05:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:37.957 19:05:15 -- accel/accel.sh@20 -- # IFS=: 00:04:37.957 19:05:15 -- accel/accel.sh@20 -- # read -r var val 00:04:37.957 19:05:15 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.kxvfOa -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:37.957 [2024-02-14 19:05:15.190849] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
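The throughput table printed a few lines up is internally consistent with transfers/s multiplied by the 4096-byte transfer size: 2481536 transfers/s x 4096 bytes is roughly 9693 MiB/s, matching the reported bandwidth. A quick shell check (illustrative only):

    # Integer MiB/s from the reported fill transfer rate and transfer size.
    echo $(( 2481536 * 4096 / 1024 / 1024 ))   # prints 9693

The same arithmetic reproduces the per-core MiB/s column for the other workloads in this section (for the -C 2 run further down, use its 8192-byte transfer size).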
00:04:37.957 [2024-02-14 19:05:15.191180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:38.525 EAL: TSC is not safe to use in SMP mode 00:04:38.525 EAL: TSC is not invariant 00:04:38.525 [2024-02-14 19:05:15.927951] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.784 [2024-02-14 19:05:16.038468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.784 [2024-02-14 19:05:16.038542] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:38.784 19:05:16 -- accel/accel.sh@12 -- # build_accel_config 00:04:38.784 19:05:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:38.784 19:05:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:38.784 19:05:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:38.784 19:05:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:38.784 19:05:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:38.784 19:05:16 -- accel/accel.sh@41 -- # local IFS=, 00:04:38.784 19:05:16 -- accel/accel.sh@42 -- # jq -r . 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val= 00:04:38.784 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val= 00:04:38.784 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val=0x1 00:04:38.784 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val= 00:04:38.784 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val= 00:04:38.784 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val=fill 00:04:38.784 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.784 19:05:16 -- accel/accel.sh@24 -- # accel_opc=fill 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val=0x80 00:04:38.784 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:38.784 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val= 00:04:38.784 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val=software 00:04:38.784 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 
00:04:38.784 19:05:16 -- accel/accel.sh@23 -- # accel_module=software 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val=64 00:04:38.784 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val=64 00:04:38.784 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val=1 00:04:38.784 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.784 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.784 19:05:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:38.785 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.785 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.785 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.785 19:05:16 -- accel/accel.sh@21 -- # val=Yes 00:04:38.785 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.785 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.785 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.785 19:05:16 -- accel/accel.sh@21 -- # val= 00:04:38.785 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.785 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.785 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:38.785 19:05:16 -- accel/accel.sh@21 -- # val= 00:04:38.785 19:05:16 -- accel/accel.sh@22 -- # case "$var" in 00:04:38.785 19:05:16 -- accel/accel.sh@20 -- # IFS=: 00:04:38.785 19:05:16 -- accel/accel.sh@20 -- # read -r var val 00:04:39.721 [2024-02-14 19:05:17.057270] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:39.980 19:05:17 -- accel/accel.sh@21 -- # val= 00:04:39.980 19:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.980 19:05:17 -- accel/accel.sh@20 -- # IFS=: 00:04:39.980 19:05:17 -- accel/accel.sh@20 -- # read -r var val 00:04:39.980 19:05:17 -- accel/accel.sh@21 -- # val= 00:04:39.980 19:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.980 19:05:17 -- accel/accel.sh@20 -- # IFS=: 00:04:39.980 19:05:17 -- accel/accel.sh@20 -- # read -r var val 00:04:39.980 19:05:17 -- accel/accel.sh@21 -- # val= 00:04:39.980 19:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.980 19:05:17 -- accel/accel.sh@20 -- # IFS=: 00:04:39.980 19:05:17 -- accel/accel.sh@20 -- # read -r var val 00:04:39.980 19:05:17 -- accel/accel.sh@21 -- # val= 00:04:39.980 19:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.980 19:05:17 -- accel/accel.sh@20 -- # IFS=: 00:04:39.980 19:05:17 -- accel/accel.sh@20 -- # read -r var val 00:04:39.980 19:05:17 -- accel/accel.sh@21 -- # val= 00:04:39.980 19:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.980 19:05:17 -- accel/accel.sh@20 -- # IFS=: 00:04:39.980 19:05:17 -- accel/accel.sh@20 -- # read -r var val 00:04:39.980 19:05:17 -- accel/accel.sh@21 -- # val= 00:04:39.980 19:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.980 19:05:17 -- accel/accel.sh@20 -- # IFS=: 00:04:39.980 19:05:17 -- accel/accel.sh@20 -- # read -r var val 00:04:39.980 19:05:17 -- accel/accel.sh@28 -- # [[ -n software ]] 
00:04:39.980 19:05:17 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:04:39.980 19:05:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:39.980 00:04:39.980 real 0m4.169s 00:04:39.980 user 0m2.559s 00:04:39.980 sys 0m1.619s 00:04:39.980 19:05:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.980 ************************************ 00:04:39.980 19:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:39.980 END TEST accel_fill 00:04:39.980 ************************************ 00:04:39.980 19:05:17 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:04:39.980 19:05:17 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:04:39.980 19:05:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:39.980 19:05:17 -- common/autotest_common.sh@10 -- # set +x 00:04:39.980 ************************************ 00:04:39.980 START TEST accel_copy_crc32c 00:04:39.980 ************************************ 00:04:39.980 19:05:17 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy_crc32c -y 00:04:39.980 19:05:17 -- accel/accel.sh@16 -- # local accel_opc 00:04:39.980 19:05:17 -- accel/accel.sh@17 -- # local accel_module 00:04:39.980 19:05:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:39.980 19:05:17 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.81eeN5 -t 1 -w copy_crc32c -y 00:04:39.980 [2024-02-14 19:05:17.311417] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:39.980 [2024-02-14 19:05:17.311708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:40.917 EAL: TSC is not safe to use in SMP mode 00:04:40.917 EAL: TSC is not invariant 00:04:40.917 [2024-02-14 19:05:18.056480] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.917 [2024-02-14 19:05:18.168572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.917 [2024-02-14 19:05:18.168654] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:40.917 19:05:18 -- accel/accel.sh@12 -- # build_accel_config 00:04:40.917 19:05:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:40.917 19:05:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:40.917 19:05:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:40.917 19:05:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:40.917 19:05:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:40.917 19:05:18 -- accel/accel.sh@41 -- # local IFS=, 00:04:40.917 19:05:18 -- accel/accel.sh@42 -- # jq -r . 
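The copy_crc32c invocation above passes only -t 1 -w copy_crc32c -y; the configuration block that follows reports "CRC-32C seed: 0" and "Vector count 1", which suggests those are the defaults when no seed or -C option is given. The accel_copy_crc32c_C2 test further down adds -C 2, and its configuration shows the vector count raised to 2 with the transfer size doubled to 8192 bytes while the vector size stays at 4096. A sketch of that variant (assumptions noted in the comments), using the binary path from the log:

    # Illustrative only: same workload with an explicit vector count, mirroring
    # the accel_copy_crc32c_C2 test below (-C appears to set "Vector count").
    /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2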
00:04:41.853 [2024-02-14 19:05:19.184711] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:42.112 19:05:19 -- accel/accel.sh@18 -- # out=' 00:04:42.112 SPDK Configuration: 00:04:42.112 Core mask: 0x1 00:04:42.112 00:04:42.112 Accel Perf Configuration: 00:04:42.112 Workload Type: copy_crc32c 00:04:42.112 CRC-32C seed: 0 00:04:42.112 Vector size: 4096 bytes 00:04:42.112 Transfer size: 4096 bytes 00:04:42.112 Vector count 1 00:04:42.112 Module: software 00:04:42.112 Queue depth: 32 00:04:42.112 Allocate depth: 32 00:04:42.112 # threads/core: 1 00:04:42.112 Run time: 1 seconds 00:04:42.112 Verify: Yes 00:04:42.112 00:04:42.112 Running for 1 seconds... 00:04:42.112 00:04:42.112 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:42.112 ------------------------------------------------------------------------------------ 00:04:42.112 0,0 1302912/s 5089 MiB/s 0 0 00:04:42.112 ==================================================================================== 00:04:42.112 Total 1302912/s 5089 MiB/s 0 0' 00:04:42.112 19:05:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:42.112 19:05:19 -- accel/accel.sh@20 -- # IFS=: 00:04:42.112 19:05:19 -- accel/accel.sh@20 -- # read -r var val 00:04:42.112 19:05:19 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.4MSogT -t 1 -w copy_crc32c -y 00:04:42.112 [2024-02-14 19:05:19.389335] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:42.112 [2024-02-14 19:05:19.389659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:43.048 EAL: TSC is not safe to use in SMP mode 00:04:43.048 EAL: TSC is not invariant 00:04:43.048 [2024-02-14 19:05:20.119885] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.048 [2024-02-14 19:05:20.231408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.048 [2024-02-14 19:05:20.231491] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:43.048 19:05:20 -- accel/accel.sh@12 -- # build_accel_config 00:04:43.048 19:05:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:43.048 19:05:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:43.048 19:05:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:43.048 19:05:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:43.048 19:05:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:43.048 19:05:20 -- accel/accel.sh@41 -- # local IFS=, 00:04:43.048 19:05:20 -- accel/accel.sh@42 -- # jq -r . 
00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val= 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val= 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val=0x1 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val= 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val= 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val=copy_crc32c 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val=0 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val= 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val=software 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@23 -- # accel_module=software 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val=32 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val=32 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val=1 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:43.048 19:05:20 
-- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val=Yes 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val= 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:43.048 19:05:20 -- accel/accel.sh@21 -- # val= 00:04:43.048 19:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # IFS=: 00:04:43.048 19:05:20 -- accel/accel.sh@20 -- # read -r var val 00:04:44.055 [2024-02-14 19:05:21.241915] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:44.055 19:05:21 -- accel/accel.sh@21 -- # val= 00:04:44.055 19:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:44.055 19:05:21 -- accel/accel.sh@20 -- # IFS=: 00:04:44.055 19:05:21 -- accel/accel.sh@20 -- # read -r var val 00:04:44.055 19:05:21 -- accel/accel.sh@21 -- # val= 00:04:44.055 19:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:44.055 19:05:21 -- accel/accel.sh@20 -- # IFS=: 00:04:44.055 19:05:21 -- accel/accel.sh@20 -- # read -r var val 00:04:44.055 19:05:21 -- accel/accel.sh@21 -- # val= 00:04:44.055 19:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:44.055 19:05:21 -- accel/accel.sh@20 -- # IFS=: 00:04:44.055 19:05:21 -- accel/accel.sh@20 -- # read -r var val 00:04:44.055 19:05:21 -- accel/accel.sh@21 -- # val= 00:04:44.055 19:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:44.055 19:05:21 -- accel/accel.sh@20 -- # IFS=: 00:04:44.055 19:05:21 -- accel/accel.sh@20 -- # read -r var val 00:04:44.055 19:05:21 -- accel/accel.sh@21 -- # val= 00:04:44.055 19:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:44.055 19:05:21 -- accel/accel.sh@20 -- # IFS=: 00:04:44.055 19:05:21 -- accel/accel.sh@20 -- # read -r var val 00:04:44.055 19:05:21 -- accel/accel.sh@21 -- # val= 00:04:44.055 19:05:21 -- accel/accel.sh@22 -- # case "$var" in 00:04:44.055 19:05:21 -- accel/accel.sh@20 -- # IFS=: 00:04:44.055 19:05:21 -- accel/accel.sh@20 -- # read -r var val 00:04:44.055 19:05:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:44.055 19:05:21 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:04:44.055 19:05:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:44.055 00:04:44.055 real 0m4.141s 00:04:44.055 user 0m2.522s 00:04:44.055 sys 0m1.627s 00:04:44.055 19:05:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.055 ************************************ 00:04:44.055 END TEST accel_copy_crc32c 00:04:44.055 ************************************ 00:04:44.055 19:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:44.315 19:05:21 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:04:44.315 19:05:21 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:04:44.315 19:05:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:44.315 19:05:21 -- common/autotest_common.sh@10 -- # set +x 00:04:44.315 ************************************ 00:04:44.315 START TEST accel_copy_crc32c_C2 00:04:44.315 
************************************ 00:04:44.315 19:05:21 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:04:44.315 19:05:21 -- accel/accel.sh@16 -- # local accel_opc 00:04:44.315 19:05:21 -- accel/accel.sh@17 -- # local accel_module 00:04:44.315 19:05:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:44.315 19:05:21 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ibT9hc -t 1 -w copy_crc32c -y -C 2 00:04:44.315 [2024-02-14 19:05:21.493889] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:44.315 [2024-02-14 19:05:21.494201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:44.883 EAL: TSC is not safe to use in SMP mode 00:04:44.883 EAL: TSC is not invariant 00:04:44.883 [2024-02-14 19:05:22.248661] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.141 [2024-02-14 19:05:22.379162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.141 [2024-02-14 19:05:22.379254] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:45.141 19:05:22 -- accel/accel.sh@12 -- # build_accel_config 00:04:45.141 19:05:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:45.141 19:05:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:45.141 19:05:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:45.141 19:05:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:45.141 19:05:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:45.141 19:05:22 -- accel/accel.sh@41 -- # local IFS=, 00:04:45.141 19:05:22 -- accel/accel.sh@42 -- # jq -r . 00:04:46.075 [2024-02-14 19:05:23.392893] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:46.333 19:05:23 -- accel/accel.sh@18 -- # out=' 00:04:46.333 SPDK Configuration: 00:04:46.333 Core mask: 0x1 00:04:46.333 00:04:46.333 Accel Perf Configuration: 00:04:46.333 Workload Type: copy_crc32c 00:04:46.333 CRC-32C seed: 0 00:04:46.333 Vector size: 4096 bytes 00:04:46.333 Transfer size: 8192 bytes 00:04:46.333 Vector count 2 00:04:46.333 Module: software 00:04:46.333 Queue depth: 32 00:04:46.333 Allocate depth: 32 00:04:46.333 # threads/core: 1 00:04:46.333 Run time: 1 seconds 00:04:46.333 Verify: Yes 00:04:46.333 00:04:46.333 Running for 1 seconds... 00:04:46.333 00:04:46.333 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:46.333 ------------------------------------------------------------------------------------ 00:04:46.333 0,0 687552/s 5371 MiB/s 0 0 00:04:46.333 ==================================================================================== 00:04:46.333 Total 687552/s 2685 MiB/s 0 0' 00:04:46.333 19:05:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:46.333 19:05:23 -- accel/accel.sh@20 -- # IFS=: 00:04:46.333 19:05:23 -- accel/accel.sh@20 -- # read -r var val 00:04:46.333 19:05:23 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.prk1uX -t 1 -w copy_crc32c -y -C 2 00:04:46.333 [2024-02-14 19:05:23.601926] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:04:46.333 [2024-02-14 19:05:23.602278] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:47.268 EAL: TSC is not safe to use in SMP mode 00:04:47.268 EAL: TSC is not invariant 00:04:47.268 [2024-02-14 19:05:24.357727] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.268 [2024-02-14 19:05:24.488022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.268 [2024-02-14 19:05:24.488115] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:47.268 19:05:24 -- accel/accel.sh@12 -- # build_accel_config 00:04:47.268 19:05:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:47.268 19:05:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:47.268 19:05:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:47.268 19:05:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:47.268 19:05:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:47.268 19:05:24 -- accel/accel.sh@41 -- # local IFS=, 00:04:47.268 19:05:24 -- accel/accel.sh@42 -- # jq -r . 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val= 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val= 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val=0x1 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val= 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val= 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val=copy_crc32c 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val=0 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val='8192 bytes' 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val= 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # 
case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val=software 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@23 -- # accel_module=software 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val=32 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val=32 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val=1 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val=Yes 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val= 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:47.268 19:05:24 -- accel/accel.sh@21 -- # val= 00:04:47.268 19:05:24 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # IFS=: 00:04:47.268 19:05:24 -- accel/accel.sh@20 -- # read -r var val 00:04:48.316 [2024-02-14 19:05:25.504498] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:48.316 19:05:25 -- accel/accel.sh@21 -- # val= 00:04:48.316 19:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:04:48.316 19:05:25 -- accel/accel.sh@20 -- # IFS=: 00:04:48.316 19:05:25 -- accel/accel.sh@20 -- # read -r var val 00:04:48.316 19:05:25 -- accel/accel.sh@21 -- # val= 00:04:48.316 19:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:04:48.316 19:05:25 -- accel/accel.sh@20 -- # IFS=: 00:04:48.316 19:05:25 -- accel/accel.sh@20 -- # read -r var val 00:04:48.316 19:05:25 -- accel/accel.sh@21 -- # val= 00:04:48.316 19:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:04:48.316 19:05:25 -- accel/accel.sh@20 -- # IFS=: 00:04:48.316 19:05:25 -- accel/accel.sh@20 -- # read -r var val 00:04:48.316 19:05:25 -- accel/accel.sh@21 -- # val= 00:04:48.316 19:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:04:48.316 19:05:25 -- accel/accel.sh@20 -- # IFS=: 00:04:48.316 19:05:25 -- accel/accel.sh@20 -- # read -r var val 00:04:48.316 19:05:25 -- accel/accel.sh@21 -- # val= 00:04:48.316 19:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:04:48.316 19:05:25 -- accel/accel.sh@20 -- # IFS=: 00:04:48.316 19:05:25 -- accel/accel.sh@20 -- # read -r var val 00:04:48.316 19:05:25 -- accel/accel.sh@21 -- # 
val= 00:04:48.316 19:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:04:48.316 19:05:25 -- accel/accel.sh@20 -- # IFS=: 00:04:48.316 19:05:25 -- accel/accel.sh@20 -- # read -r var val 00:04:48.316 19:05:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:48.316 19:05:25 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:04:48.316 19:05:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:48.316 00:04:48.316 real 0m4.228s 00:04:48.316 user 0m2.594s 00:04:48.316 sys 0m1.641s 00:04:48.316 19:05:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.316 19:05:25 -- common/autotest_common.sh@10 -- # set +x 00:04:48.316 ************************************ 00:04:48.316 END TEST accel_copy_crc32c_C2 00:04:48.316 ************************************ 00:04:48.574 19:05:25 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:04:48.574 19:05:25 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:04:48.575 19:05:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:48.575 19:05:25 -- common/autotest_common.sh@10 -- # set +x 00:04:48.575 ************************************ 00:04:48.575 START TEST accel_dualcast 00:04:48.575 ************************************ 00:04:48.575 19:05:25 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dualcast -y 00:04:48.575 19:05:25 -- accel/accel.sh@16 -- # local accel_opc 00:04:48.575 19:05:25 -- accel/accel.sh@17 -- # local accel_module 00:04:48.575 19:05:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:04:48.575 19:05:25 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.nwrteL -t 1 -w dualcast -y 00:04:48.575 [2024-02-14 19:05:25.764915] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:48.575 [2024-02-14 19:05:25.765180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:49.142 EAL: TSC is not safe to use in SMP mode 00:04:49.142 EAL: TSC is not invariant 00:04:49.142 [2024-02-14 19:05:26.529694] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.400 [2024-02-14 19:05:26.643105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.400 [2024-02-14 19:05:26.643187] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:49.400 19:05:26 -- accel/accel.sh@12 -- # build_accel_config 00:04:49.400 19:05:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:49.400 19:05:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:49.400 19:05:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:49.400 19:05:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:49.400 19:05:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:49.400 19:05:26 -- accel/accel.sh@41 -- # local IFS=, 00:04:49.400 19:05:26 -- accel/accel.sh@42 -- # jq -r . 
00:04:50.336 [2024-02-14 19:05:27.659585] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:50.594 19:05:27 -- accel/accel.sh@18 -- # out=' 00:04:50.594 SPDK Configuration: 00:04:50.594 Core mask: 0x1 00:04:50.594 00:04:50.594 Accel Perf Configuration: 00:04:50.594 Workload Type: dualcast 00:04:50.594 Transfer size: 4096 bytes 00:04:50.594 Vector count 1 00:04:50.594 Module: software 00:04:50.594 Queue depth: 32 00:04:50.594 Allocate depth: 32 00:04:50.594 # threads/core: 1 00:04:50.594 Run time: 1 seconds 00:04:50.594 Verify: Yes 00:04:50.594 00:04:50.594 Running for 1 seconds... 00:04:50.594 00:04:50.594 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:50.594 ------------------------------------------------------------------------------------ 00:04:50.594 0,0 1526048/s 5961 MiB/s 0 0 00:04:50.594 ==================================================================================== 00:04:50.594 Total 1526048/s 5961 MiB/s 0 0' 00:04:50.594 19:05:27 -- accel/accel.sh@20 -- # IFS=: 00:04:50.594 19:05:27 -- accel/accel.sh@20 -- # read -r var val 00:04:50.594 19:05:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:04:50.594 19:05:27 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.kUKAdN -t 1 -w dualcast -y 00:04:50.594 [2024-02-14 19:05:27.871060] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:50.594 [2024-02-14 19:05:27.871417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:51.526 EAL: TSC is not safe to use in SMP mode 00:04:51.526 EAL: TSC is not invariant 00:04:51.526 [2024-02-14 19:05:28.636727] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.526 [2024-02-14 19:05:28.750678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.526 [2024-02-14 19:05:28.750765] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:51.526 19:05:28 -- accel/accel.sh@12 -- # build_accel_config 00:04:51.526 19:05:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:51.526 19:05:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.526 19:05:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.526 19:05:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:51.526 19:05:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:51.526 19:05:28 -- accel/accel.sh@41 -- # local IFS=, 00:04:51.526 19:05:28 -- accel/accel.sh@42 -- # jq -r . 
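For dualcast the run above reports 1526048 transfers/s, about 5961 MiB/s counting each 4096-byte transfer once; as the workload name suggests, dualcast duplicates one source buffer into two destination buffers, so each reported transfer involves two 4096-byte writes (an inference, not something the log states). The standalone invocation matching the flags in the log would be:

    # Illustrative only: the dualcast benchmark as invoked by the harness,
    # minus the harness-generated -c config file.
    /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y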
00:04:51.526 19:05:28 -- accel/accel.sh@21 -- # val= 00:04:51.526 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.526 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.526 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.526 19:05:28 -- accel/accel.sh@21 -- # val= 00:04:51.526 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.526 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.526 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.526 19:05:28 -- accel/accel.sh@21 -- # val=0x1 00:04:51.526 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.526 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.526 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.526 19:05:28 -- accel/accel.sh@21 -- # val= 00:04:51.526 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.526 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.526 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.526 19:05:28 -- accel/accel.sh@21 -- # val= 00:04:51.526 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.526 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.526 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.526 19:05:28 -- accel/accel.sh@21 -- # val=dualcast 00:04:51.526 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.526 19:05:28 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:04:51.526 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.526 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.526 19:05:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:51.527 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.527 19:05:28 -- accel/accel.sh@21 -- # val= 00:04:51.527 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.527 19:05:28 -- accel/accel.sh@21 -- # val=software 00:04:51.527 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.527 19:05:28 -- accel/accel.sh@23 -- # accel_module=software 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.527 19:05:28 -- accel/accel.sh@21 -- # val=32 00:04:51.527 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.527 19:05:28 -- accel/accel.sh@21 -- # val=32 00:04:51.527 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.527 19:05:28 -- accel/accel.sh@21 -- # val=1 00:04:51.527 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.527 19:05:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:51.527 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.527 19:05:28 -- accel/accel.sh@21 -- # val=Yes 00:04:51.527 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.527 19:05:28 -- accel/accel.sh@21 -- # val= 00:04:51.527 19:05:28 -- 
accel/accel.sh@22 -- # case "$var" in 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:51.527 19:05:28 -- accel/accel.sh@21 -- # val= 00:04:51.527 19:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # IFS=: 00:04:51.527 19:05:28 -- accel/accel.sh@20 -- # read -r var val 00:04:52.460 [2024-02-14 19:05:29.762907] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:52.718 19:05:29 -- accel/accel.sh@21 -- # val= 00:04:52.718 19:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:04:52.718 19:05:29 -- accel/accel.sh@20 -- # IFS=: 00:04:52.718 19:05:29 -- accel/accel.sh@20 -- # read -r var val 00:04:52.718 19:05:29 -- accel/accel.sh@21 -- # val= 00:04:52.718 19:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:04:52.718 19:05:29 -- accel/accel.sh@20 -- # IFS=: 00:04:52.719 19:05:29 -- accel/accel.sh@20 -- # read -r var val 00:04:52.719 19:05:29 -- accel/accel.sh@21 -- # val= 00:04:52.719 19:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:04:52.719 19:05:29 -- accel/accel.sh@20 -- # IFS=: 00:04:52.719 19:05:29 -- accel/accel.sh@20 -- # read -r var val 00:04:52.719 19:05:29 -- accel/accel.sh@21 -- # val= 00:04:52.719 19:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:04:52.719 19:05:29 -- accel/accel.sh@20 -- # IFS=: 00:04:52.719 19:05:29 -- accel/accel.sh@20 -- # read -r var val 00:04:52.719 19:05:29 -- accel/accel.sh@21 -- # val= 00:04:52.719 19:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:04:52.719 19:05:29 -- accel/accel.sh@20 -- # IFS=: 00:04:52.719 19:05:29 -- accel/accel.sh@20 -- # read -r var val 00:04:52.719 19:05:29 -- accel/accel.sh@21 -- # val= 00:04:52.719 19:05:29 -- accel/accel.sh@22 -- # case "$var" in 00:04:52.719 19:05:29 -- accel/accel.sh@20 -- # IFS=: 00:04:52.719 19:05:29 -- accel/accel.sh@20 -- # read -r var val 00:04:52.719 19:05:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:52.719 19:05:29 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:04:52.719 19:05:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:52.719 00:04:52.719 real 0m4.211s 00:04:52.719 user 0m2.552s 00:04:52.719 sys 0m1.672s 00:04:52.719 19:05:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.719 19:05:29 -- common/autotest_common.sh@10 -- # set +x 00:04:52.719 ************************************ 00:04:52.719 END TEST accel_dualcast 00:04:52.719 ************************************ 00:04:52.719 19:05:30 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:04:52.719 19:05:30 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:04:52.719 19:05:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:52.719 19:05:30 -- common/autotest_common.sh@10 -- # set +x 00:04:52.719 ************************************ 00:04:52.719 START TEST accel_compare 00:04:52.719 ************************************ 00:04:52.719 19:05:30 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w compare -y 00:04:52.719 19:05:30 -- accel/accel.sh@16 -- # local accel_opc 00:04:52.719 19:05:30 -- accel/accel.sh@17 -- # local accel_module 00:04:52.719 19:05:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:04:52.719 19:05:30 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ypDiMF -t 1 -w compare -y 00:04:52.719 [2024-02-14 
19:05:30.026368] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:52.719 [2024-02-14 19:05:30.026638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:53.654 EAL: TSC is not safe to use in SMP mode 00:04:53.655 EAL: TSC is not invariant 00:04:53.655 [2024-02-14 19:05:30.807434] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.655 [2024-02-14 19:05:30.938988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.655 [2024-02-14 19:05:30.939103] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:53.655 19:05:30 -- accel/accel.sh@12 -- # build_accel_config 00:04:53.655 19:05:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:53.655 19:05:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:53.655 19:05:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:53.655 19:05:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:53.655 19:05:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:53.655 19:05:30 -- accel/accel.sh@41 -- # local IFS=, 00:04:53.655 19:05:30 -- accel/accel.sh@42 -- # jq -r . 00:04:54.590 [2024-02-14 19:05:31.956010] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:54.849 19:05:32 -- accel/accel.sh@18 -- # out=' 00:04:54.849 SPDK Configuration: 00:04:54.849 Core mask: 0x1 00:04:54.849 00:04:54.849 Accel Perf Configuration: 00:04:54.849 Workload Type: compare 00:04:54.849 Transfer size: 4096 bytes 00:04:54.849 Vector count 1 00:04:54.849 Module: software 00:04:54.849 Queue depth: 32 00:04:54.849 Allocate depth: 32 00:04:54.849 # threads/core: 1 00:04:54.849 Run time: 1 seconds 00:04:54.849 Verify: Yes 00:04:54.849 00:04:54.849 Running for 1 seconds... 00:04:54.849 00:04:54.849 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:54.849 ------------------------------------------------------------------------------------ 00:04:54.849 0,0 2651232/s 10356 MiB/s 0 0 00:04:54.849 ==================================================================================== 00:04:54.849 Total 2651232/s 10356 MiB/s 0 0' 00:04:54.849 19:05:32 -- accel/accel.sh@20 -- # IFS=: 00:04:54.849 19:05:32 -- accel/accel.sh@20 -- # read -r var val 00:04:54.849 19:05:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:04:54.849 19:05:32 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.fqO8R3 -t 1 -w compare -y 00:04:54.849 [2024-02-14 19:05:32.174903] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:04:54.849 [2024-02-14 19:05:32.175178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:55.843 EAL: TSC is not safe to use in SMP mode 00:04:55.843 EAL: TSC is not invariant 00:04:55.843 [2024-02-14 19:05:32.941570] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.843 [2024-02-14 19:05:33.056467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.843 [2024-02-14 19:05:33.056563] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:55.843 19:05:33 -- accel/accel.sh@12 -- # build_accel_config 00:04:55.843 19:05:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:55.843 19:05:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.843 19:05:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.843 19:05:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:55.843 19:05:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:55.843 19:05:33 -- accel/accel.sh@41 -- # local IFS=, 00:04:55.843 19:05:33 -- accel/accel.sh@42 -- # jq -r . 00:04:55.843 19:05:33 -- accel/accel.sh@21 -- # val= 00:04:55.843 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.843 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.843 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.843 19:05:33 -- accel/accel.sh@21 -- # val= 00:04:55.843 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.843 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.843 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.843 19:05:33 -- accel/accel.sh@21 -- # val=0x1 00:04:55.843 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.843 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.843 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.843 19:05:33 -- accel/accel.sh@21 -- # val= 00:04:55.843 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.843 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.843 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.843 19:05:33 -- accel/accel.sh@21 -- # val= 00:04:55.843 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.843 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.843 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.843 19:05:33 -- accel/accel.sh@21 -- # val=compare 00:04:55.843 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.843 19:05:33 -- accel/accel.sh@24 -- # accel_opc=compare 00:04:55.843 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.844 19:05:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:55.844 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.844 19:05:33 -- accel/accel.sh@21 -- # val= 00:04:55.844 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.844 19:05:33 -- accel/accel.sh@21 -- # val=software 00:04:55.844 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.844 19:05:33 -- accel/accel.sh@23 -- # accel_module=software 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.844 19:05:33 -- accel/accel.sh@21 -- # 
val=32 00:04:55.844 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.844 19:05:33 -- accel/accel.sh@21 -- # val=32 00:04:55.844 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.844 19:05:33 -- accel/accel.sh@21 -- # val=1 00:04:55.844 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.844 19:05:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:55.844 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.844 19:05:33 -- accel/accel.sh@21 -- # val=Yes 00:04:55.844 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.844 19:05:33 -- accel/accel.sh@21 -- # val= 00:04:55.844 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:55.844 19:05:33 -- accel/accel.sh@21 -- # val= 00:04:55.844 19:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # IFS=: 00:04:55.844 19:05:33 -- accel/accel.sh@20 -- # read -r var val 00:04:56.781 [2024-02-14 19:05:34.074396] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:57.040 19:05:34 -- accel/accel.sh@21 -- # val= 00:04:57.040 19:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.040 19:05:34 -- accel/accel.sh@20 -- # IFS=: 00:04:57.040 19:05:34 -- accel/accel.sh@20 -- # read -r var val 00:04:57.040 19:05:34 -- accel/accel.sh@21 -- # val= 00:04:57.040 19:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.040 19:05:34 -- accel/accel.sh@20 -- # IFS=: 00:04:57.040 19:05:34 -- accel/accel.sh@20 -- # read -r var val 00:04:57.040 19:05:34 -- accel/accel.sh@21 -- # val= 00:04:57.040 19:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.040 19:05:34 -- accel/accel.sh@20 -- # IFS=: 00:04:57.040 19:05:34 -- accel/accel.sh@20 -- # read -r var val 00:04:57.040 19:05:34 -- accel/accel.sh@21 -- # val= 00:04:57.040 19:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.040 19:05:34 -- accel/accel.sh@20 -- # IFS=: 00:04:57.040 19:05:34 -- accel/accel.sh@20 -- # read -r var val 00:04:57.040 19:05:34 -- accel/accel.sh@21 -- # val= 00:04:57.040 19:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.040 19:05:34 -- accel/accel.sh@20 -- # IFS=: 00:04:57.040 19:05:34 -- accel/accel.sh@20 -- # read -r var val 00:04:57.040 19:05:34 -- accel/accel.sh@21 -- # val= 00:04:57.040 19:05:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.040 19:05:34 -- accel/accel.sh@20 -- # IFS=: 00:04:57.040 19:05:34 -- accel/accel.sh@20 -- # read -r var val 00:04:57.040 19:05:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:57.040 19:05:34 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:04:57.040 19:05:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:57.040 00:04:57.040 real 0m4.266s 00:04:57.040 user 0m2.592s 00:04:57.040 sys 0m1.686s 
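The real/user/sys summary above covers the whole accel_compare test, not just the measured window: each test in this section launches accel_perf twice and -t 1 limits the actual measurement to one second, so most of the ~4 s of wall-clock time is presumably EAL start-up and teardown on the single available core noted earlier. To time one standalone run for comparison:

    # Illustrative only: a single 1-second compare run outside the harness;
    # the gap between 'real' and the 1 s run time is start-up/teardown overhead.
    time /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y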
00:04:57.040 19:05:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:57.040 ************************************ 00:04:57.040 END TEST accel_compare 00:04:57.040 19:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:57.040 ************************************ 00:04:57.040 19:05:34 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:04:57.040 19:05:34 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:04:57.040 19:05:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:57.040 19:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:57.040 ************************************ 00:04:57.040 START TEST accel_xor 00:04:57.040 ************************************ 00:04:57.040 19:05:34 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w xor -y 00:04:57.040 19:05:34 -- accel/accel.sh@16 -- # local accel_opc 00:04:57.040 19:05:34 -- accel/accel.sh@17 -- # local accel_module 00:04:57.040 19:05:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:04:57.040 19:05:34 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.NNcNcs -t 1 -w xor -y 00:04:57.040 [2024-02-14 19:05:34.336379] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:57.040 [2024-02-14 19:05:34.336639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:57.977 EAL: TSC is not safe to use in SMP mode 00:04:57.977 EAL: TSC is not invariant 00:04:57.977 [2024-02-14 19:05:35.100127] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.977 [2024-02-14 19:05:35.216102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.977 [2024-02-14 19:05:35.216193] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:04:57.977 19:05:35 -- accel/accel.sh@12 -- # build_accel_config 00:04:57.977 19:05:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:57.977 19:05:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.977 19:05:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.977 19:05:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:57.977 19:05:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:57.977 19:05:35 -- accel/accel.sh@41 -- # local IFS=, 00:04:57.977 19:05:35 -- accel/accel.sh@42 -- # jq -r . 00:04:58.915 [2024-02-14 19:05:36.231759] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:04:59.174 19:05:36 -- accel/accel.sh@18 -- # out=' 00:04:59.174 SPDK Configuration: 00:04:59.174 Core mask: 0x1 00:04:59.174 00:04:59.174 Accel Perf Configuration: 00:04:59.174 Workload Type: xor 00:04:59.174 Source buffers: 2 00:04:59.174 Transfer size: 4096 bytes 00:04:59.174 Vector count 1 00:04:59.174 Module: software 00:04:59.174 Queue depth: 32 00:04:59.174 Allocate depth: 32 00:04:59.174 # threads/core: 1 00:04:59.174 Run time: 1 seconds 00:04:59.174 Verify: Yes 00:04:59.174 00:04:59.174 Running for 1 seconds... 
00:04:59.174 00:04:59.174 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:59.174 ------------------------------------------------------------------------------------ 00:04:59.174 0,0 1699392/s 6638 MiB/s 0 0 00:04:59.174 ==================================================================================== 00:04:59.174 Total 1699392/s 6638 MiB/s 0 0' 00:04:59.174 19:05:36 -- accel/accel.sh@20 -- # IFS=: 00:04:59.174 19:05:36 -- accel/accel.sh@20 -- # read -r var val 00:04:59.174 19:05:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:04:59.174 19:05:36 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.UkXXiv -t 1 -w xor -y 00:04:59.174 [2024-02-14 19:05:36.455453] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:04:59.174 [2024-02-14 19:05:36.455837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:00.110 EAL: TSC is not safe to use in SMP mode 00:05:00.110 EAL: TSC is not invariant 00:05:00.110 [2024-02-14 19:05:37.252728] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.110 [2024-02-14 19:05:37.382800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.110 [2024-02-14 19:05:37.382904] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:00.110 19:05:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:00.110 19:05:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:00.110 19:05:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.110 19:05:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.110 19:05:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:00.110 19:05:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:00.110 19:05:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:00.110 19:05:37 -- accel/accel.sh@42 -- # jq -r . 
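Note: the MiB/s column follows from transfers/s multiplied by the 4096-byte transfer size in the configuration above; a quick shell check for the xor row (integer arithmetic, rounds down):
$ echo $(( 1699392 * 4096 / 1048576 ))   # 6638, matching the 6638 MiB/s reported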
00:05:00.110 19:05:37 -- accel/accel.sh@21 -- # val= 00:05:00.110 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.110 19:05:37 -- accel/accel.sh@21 -- # val= 00:05:00.110 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.110 19:05:37 -- accel/accel.sh@21 -- # val=0x1 00:05:00.110 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.110 19:05:37 -- accel/accel.sh@21 -- # val= 00:05:00.110 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.110 19:05:37 -- accel/accel.sh@21 -- # val= 00:05:00.110 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.110 19:05:37 -- accel/accel.sh@21 -- # val=xor 00:05:00.110 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.110 19:05:37 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.110 19:05:37 -- accel/accel.sh@21 -- # val=2 00:05:00.110 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.110 19:05:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:00.110 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.110 19:05:37 -- accel/accel.sh@21 -- # val= 00:05:00.110 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.110 19:05:37 -- accel/accel.sh@21 -- # val=software 00:05:00.110 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.110 19:05:37 -- accel/accel.sh@23 -- # accel_module=software 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.110 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.110 19:05:37 -- accel/accel.sh@21 -- # val=32 00:05:00.110 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.111 19:05:37 -- accel/accel.sh@21 -- # val=32 00:05:00.111 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.111 19:05:37 -- accel/accel.sh@21 -- # val=1 00:05:00.111 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.111 19:05:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:00.111 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.111 19:05:37 -- accel/accel.sh@21 -- # val=Yes 00:05:00.111 19:05:37 -- accel/accel.sh@22 -- # 
case "$var" in 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.111 19:05:37 -- accel/accel.sh@21 -- # val= 00:05:00.111 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:00.111 19:05:37 -- accel/accel.sh@21 -- # val= 00:05:00.111 19:05:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # IFS=: 00:05:00.111 19:05:37 -- accel/accel.sh@20 -- # read -r var val 00:05:01.047 [2024-02-14 19:05:38.397195] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:01.349 19:05:38 -- accel/accel.sh@21 -- # val= 00:05:01.349 19:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.349 19:05:38 -- accel/accel.sh@20 -- # IFS=: 00:05:01.349 19:05:38 -- accel/accel.sh@20 -- # read -r var val 00:05:01.349 19:05:38 -- accel/accel.sh@21 -- # val= 00:05:01.349 19:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.349 19:05:38 -- accel/accel.sh@20 -- # IFS=: 00:05:01.349 19:05:38 -- accel/accel.sh@20 -- # read -r var val 00:05:01.349 19:05:38 -- accel/accel.sh@21 -- # val= 00:05:01.349 19:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.349 19:05:38 -- accel/accel.sh@20 -- # IFS=: 00:05:01.349 19:05:38 -- accel/accel.sh@20 -- # read -r var val 00:05:01.349 19:05:38 -- accel/accel.sh@21 -- # val= 00:05:01.349 19:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.349 19:05:38 -- accel/accel.sh@20 -- # IFS=: 00:05:01.349 19:05:38 -- accel/accel.sh@20 -- # read -r var val 00:05:01.349 19:05:38 -- accel/accel.sh@21 -- # val= 00:05:01.349 19:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.349 19:05:38 -- accel/accel.sh@20 -- # IFS=: 00:05:01.349 19:05:38 -- accel/accel.sh@20 -- # read -r var val 00:05:01.349 19:05:38 -- accel/accel.sh@21 -- # val= 00:05:01.349 19:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.349 19:05:38 -- accel/accel.sh@20 -- # IFS=: 00:05:01.349 19:05:38 -- accel/accel.sh@20 -- # read -r var val 00:05:01.349 19:05:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:01.349 19:05:38 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:01.349 19:05:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:01.349 00:05:01.349 real 0m4.280s 00:05:01.349 user 0m2.578s 00:05:01.349 sys 0m1.706s 00:05:01.349 19:05:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.349 19:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:01.349 ************************************ 00:05:01.349 END TEST accel_xor 00:05:01.349 ************************************ 00:05:01.349 19:05:38 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:01.349 19:05:38 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:05:01.349 19:05:38 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:01.349 19:05:38 -- common/autotest_common.sh@10 -- # set +x 00:05:01.349 ************************************ 00:05:01.349 START TEST accel_xor 00:05:01.349 ************************************ 00:05:01.349 19:05:38 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w xor -y -x 3 00:05:01.350 19:05:38 -- accel/accel.sh@16 -- # local accel_opc 00:05:01.350 19:05:38 -- accel/accel.sh@17 -- # local accel_module 00:05:01.350 19:05:38 -- accel/accel.sh@18 -- # accel_perf -t 1 
-w xor -y -x 3 00:05:01.350 19:05:38 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.fAgUAj -t 1 -w xor -y -x 3 00:05:01.350 [2024-02-14 19:05:38.668928] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:01.350 [2024-02-14 19:05:38.669283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:02.286 EAL: TSC is not safe to use in SMP mode 00:05:02.286 EAL: TSC is not invariant 00:05:02.286 [2024-02-14 19:05:39.480497] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.286 [2024-02-14 19:05:39.600179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.286 [2024-02-14 19:05:39.600278] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:02.286 19:05:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:02.286 19:05:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:02.286 19:05:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.286 19:05:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.286 19:05:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:02.286 19:05:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:02.286 19:05:39 -- accel/accel.sh@41 -- # local IFS=, 00:05:02.286 19:05:39 -- accel/accel.sh@42 -- # jq -r . 00:05:03.230 [2024-02-14 19:05:40.615273] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:03.490 19:05:40 -- accel/accel.sh@18 -- # out=' 00:05:03.490 SPDK Configuration: 00:05:03.490 Core mask: 0x1 00:05:03.490 00:05:03.490 Accel Perf Configuration: 00:05:03.490 Workload Type: xor 00:05:03.490 Source buffers: 3 00:05:03.490 Transfer size: 4096 bytes 00:05:03.490 Vector count 1 00:05:03.490 Module: software 00:05:03.490 Queue depth: 32 00:05:03.490 Allocate depth: 32 00:05:03.490 # threads/core: 1 00:05:03.490 Run time: 1 seconds 00:05:03.490 Verify: Yes 00:05:03.490 00:05:03.490 Running for 1 seconds... 00:05:03.490 00:05:03.490 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:03.491 ------------------------------------------------------------------------------------ 00:05:03.491 0,0 1551392/s 6060 MiB/s 0 0 00:05:03.491 ==================================================================================== 00:05:03.491 Total 1551392/s 6060 MiB/s 0 0' 00:05:03.491 19:05:40 -- accel/accel.sh@20 -- # IFS=: 00:05:03.491 19:05:40 -- accel/accel.sh@20 -- # read -r var val 00:05:03.491 19:05:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:03.491 19:05:40 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.9TkYJC -t 1 -w xor -y -x 3 00:05:03.491 [2024-02-14 19:05:40.836671] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:03.491 [2024-02-14 19:05:40.836969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:04.427 EAL: TSC is not safe to use in SMP mode 00:05:04.427 EAL: TSC is not invariant 00:05:04.427 [2024-02-14 19:05:41.609886] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.427 [2024-02-14 19:05:41.723190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.427 [2024-02-14 19:05:41.723279] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:04.427 19:05:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:04.427 19:05:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:04.427 19:05:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.427 19:05:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.427 19:05:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:04.427 19:05:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:04.427 19:05:41 -- accel/accel.sh@41 -- # local IFS=, 00:05:04.427 19:05:41 -- accel/accel.sh@42 -- # jq -r . 00:05:04.427 19:05:41 -- accel/accel.sh@21 -- # val= 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val= 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val=0x1 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val= 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val= 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val=xor 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val=3 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val= 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val=software 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 
00:05:04.428 19:05:41 -- accel/accel.sh@23 -- # accel_module=software 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val=32 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val=32 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val=1 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val=Yes 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val= 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:04.428 19:05:41 -- accel/accel.sh@21 -- # val= 00:05:04.428 19:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # IFS=: 00:05:04.428 19:05:41 -- accel/accel.sh@20 -- # read -r var val 00:05:05.365 [2024-02-14 19:05:42.737228] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:05.624 19:05:42 -- accel/accel.sh@21 -- # val= 00:05:05.624 19:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.624 19:05:42 -- accel/accel.sh@20 -- # IFS=: 00:05:05.624 19:05:42 -- accel/accel.sh@20 -- # read -r var val 00:05:05.624 19:05:42 -- accel/accel.sh@21 -- # val= 00:05:05.625 19:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.625 19:05:42 -- accel/accel.sh@20 -- # IFS=: 00:05:05.625 19:05:42 -- accel/accel.sh@20 -- # read -r var val 00:05:05.625 19:05:42 -- accel/accel.sh@21 -- # val= 00:05:05.625 19:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.625 19:05:42 -- accel/accel.sh@20 -- # IFS=: 00:05:05.625 19:05:42 -- accel/accel.sh@20 -- # read -r var val 00:05:05.625 19:05:42 -- accel/accel.sh@21 -- # val= 00:05:05.625 19:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.625 19:05:42 -- accel/accel.sh@20 -- # IFS=: 00:05:05.625 19:05:42 -- accel/accel.sh@20 -- # read -r var val 00:05:05.625 19:05:42 -- accel/accel.sh@21 -- # val= 00:05:05.625 19:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.625 19:05:42 -- accel/accel.sh@20 -- # IFS=: 00:05:05.625 19:05:42 -- accel/accel.sh@20 -- # read -r var val 00:05:05.625 19:05:42 -- accel/accel.sh@21 -- # val= 00:05:05.625 19:05:42 -- accel/accel.sh@22 -- # case "$var" in 00:05:05.625 19:05:42 -- accel/accel.sh@20 -- # IFS=: 00:05:05.625 19:05:42 -- accel/accel.sh@20 -- # read -r var val 00:05:05.625 19:05:42 -- accel/accel.sh@28 -- # [[ -n software ]] 
00:05:05.625 19:05:42 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:05.625 19:05:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:05.625 00:05:05.625 real 0m4.283s 00:05:05.625 user 0m2.595s 00:05:05.625 sys 0m1.698s 00:05:05.625 19:05:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.625 ************************************ 00:05:05.625 END TEST accel_xor 00:05:05.625 ************************************ 00:05:05.625 19:05:42 -- common/autotest_common.sh@10 -- # set +x 00:05:05.625 19:05:42 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:05.625 19:05:42 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:05:05.625 19:05:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:05.625 19:05:42 -- common/autotest_common.sh@10 -- # set +x 00:05:05.625 ************************************ 00:05:05.625 START TEST accel_dif_verify 00:05:05.625 ************************************ 00:05:05.625 19:05:42 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_verify 00:05:05.625 19:05:42 -- accel/accel.sh@16 -- # local accel_opc 00:05:05.625 19:05:42 -- accel/accel.sh@17 -- # local accel_module 00:05:05.625 19:05:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:05.625 19:05:42 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.UuQ6qH -t 1 -w dif_verify 00:05:05.625 [2024-02-14 19:05:42.999949] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:05.625 [2024-02-14 19:05:43.000185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:06.563 EAL: TSC is not safe to use in SMP mode 00:05:06.563 EAL: TSC is not invariant 00:05:06.563 [2024-02-14 19:05:43.749041] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.563 [2024-02-14 19:05:43.860918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.563 [2024-02-14 19:05:43.861007] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:06.563 19:05:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:06.563 19:05:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:06.563 19:05:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.563 19:05:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.563 19:05:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:06.563 19:05:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:06.563 19:05:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:06.563 19:05:43 -- accel/accel.sh@42 -- # jq -r . 
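Note: the START TEST/END TEST banners and the real/user/sys timings wrapped around each case come from the run_test helper in autotest_common.sh; a minimal sketch of that pattern (assumed and heavily simplified, not the actual implementation):
run_test() {
  local name=$1; shift
  echo "************************************"; echo "START TEST $name"; echo "************************************"
  time "$@"            # e.g. accel_test -t 1 -w dif_verify
  echo "************************************"; echo "END TEST $name"; echo "************************************"
}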
00:05:07.500 [2024-02-14 19:05:44.876733] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:07.759 19:05:45 -- accel/accel.sh@18 -- # out=' 00:05:07.759 SPDK Configuration: 00:05:07.759 Core mask: 0x1 00:05:07.759 00:05:07.759 Accel Perf Configuration: 00:05:07.759 Workload Type: dif_verify 00:05:07.759 Vector size: 4096 bytes 00:05:07.759 Transfer size: 4096 bytes 00:05:07.759 Block size: 512 bytes 00:05:07.759 Metadata size: 8 bytes 00:05:07.759 Vector count 1 00:05:07.759 Module: software 00:05:07.759 Queue depth: 32 00:05:07.759 Allocate depth: 32 00:05:07.759 # threads/core: 1 00:05:07.759 Run time: 1 seconds 00:05:07.759 Verify: No 00:05:07.759 00:05:07.759 Running for 1 seconds... 00:05:07.759 00:05:07.759 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:07.759 ------------------------------------------------------------------------------------ 00:05:07.759 0,0 1213728/s 4741 MiB/s 0 0 00:05:07.759 ==================================================================================== 00:05:07.759 Total 1213728/s 4741 MiB/s 0 0' 00:05:07.759 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:07.759 19:05:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:07.759 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:07.759 19:05:45 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.x5s9gy -t 1 -w dif_verify 00:05:07.759 [2024-02-14 19:05:45.090711] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:07.759 [2024-02-14 19:05:45.091086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:08.693 EAL: TSC is not safe to use in SMP mode 00:05:08.693 EAL: TSC is not invariant 00:05:08.693 [2024-02-14 19:05:45.859341] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.693 [2024-02-14 19:05:45.971527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.693 [2024-02-14 19:05:45.971630] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:08.693 19:05:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:08.693 19:05:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:08.693 19:05:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.693 19:05:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.693 19:05:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:08.693 19:05:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:08.693 19:05:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:08.693 19:05:45 -- accel/accel.sh@42 -- # jq -r .
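Note: with the 512-byte block size and 8-byte metadata size shown in the dif_verify configuration above, each 4096-byte vector spans 4096/512 = 8 blocks, so about 8 * 8 = 64 bytes of protection information are checked per transfer (assuming the usual one-PI-field-per-block DIF layout):
$ echo $(( (4096 / 512) * 8 ))   # 64 bytes of DIF metadata per 4096-byte vector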
00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val= 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val= 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val=0x1 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val= 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val= 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val=dif_verify 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val= 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val=software 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@23 -- # accel_module=software 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val=32 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val=32 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val=1 00:05:08.693 
19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val=No 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val= 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:08.693 19:05:45 -- accel/accel.sh@21 -- # val= 00:05:08.693 19:05:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # IFS=: 00:05:08.693 19:05:45 -- accel/accel.sh@20 -- # read -r var val 00:05:09.632 [2024-02-14 19:05:46.987581] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:09.891 19:05:47 -- accel/accel.sh@21 -- # val= 00:05:09.891 19:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.891 19:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:09.891 19:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:09.891 19:05:47 -- accel/accel.sh@21 -- # val= 00:05:09.891 19:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.891 19:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:09.891 19:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:09.891 19:05:47 -- accel/accel.sh@21 -- # val= 00:05:09.891 19:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.891 19:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:09.891 19:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:09.891 19:05:47 -- accel/accel.sh@21 -- # val= 00:05:09.891 19:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.891 19:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:09.891 19:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:09.891 19:05:47 -- accel/accel.sh@21 -- # val= 00:05:09.891 19:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.891 19:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:09.891 19:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:09.891 19:05:47 -- accel/accel.sh@21 -- # val= 00:05:09.891 19:05:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.891 19:05:47 -- accel/accel.sh@20 -- # IFS=: 00:05:09.891 19:05:47 -- accel/accel.sh@20 -- # read -r var val 00:05:09.891 19:05:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:09.891 19:05:47 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:09.891 19:05:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.891 00:05:09.891 real 0m4.203s 00:05:09.891 user 0m2.560s 00:05:09.891 sys 0m1.654s 00:05:09.891 19:05:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.891 19:05:47 -- common/autotest_common.sh@10 -- # set +x 00:05:09.891 ************************************ 00:05:09.891 END TEST accel_dif_verify 00:05:09.891 ************************************ 00:05:09.891 19:05:47 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:09.891 19:05:47 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:05:09.891 19:05:47 
-- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:09.891 19:05:47 -- common/autotest_common.sh@10 -- # set +x 00:05:09.891 ************************************ 00:05:09.891 START TEST accel_dif_generate 00:05:09.891 ************************************ 00:05:09.891 19:05:47 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_generate 00:05:09.891 19:05:47 -- accel/accel.sh@16 -- # local accel_opc 00:05:09.891 19:05:47 -- accel/accel.sh@17 -- # local accel_module 00:05:09.891 19:05:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:05:09.891 19:05:47 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.C8uHTM -t 1 -w dif_generate 00:05:09.891 [2024-02-14 19:05:47.244377] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:09.892 [2024-02-14 19:05:47.244642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:10.829 EAL: TSC is not safe to use in SMP mode 00:05:10.829 EAL: TSC is not invariant 00:05:10.829 [2024-02-14 19:05:47.984729] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.829 [2024-02-14 19:05:48.096567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.829 [2024-02-14 19:05:48.096665] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:10.829 19:05:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:10.829 19:05:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:10.829 19:05:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.829 19:05:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.829 19:05:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:10.829 19:05:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:10.829 19:05:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:10.829 19:05:48 -- accel/accel.sh@42 -- # jq -r . 00:05:11.765 [2024-02-14 19:05:49.115140] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:12.023 19:05:49 -- accel/accel.sh@18 -- # out=' 00:05:12.023 SPDK Configuration: 00:05:12.023 Core mask: 0x1 00:05:12.023 00:05:12.023 Accel Perf Configuration: 00:05:12.023 Workload Type: dif_generate 00:05:12.023 Vector size: 4096 bytes 00:05:12.023 Transfer size: 4096 bytes 00:05:12.023 Block size: 512 bytes 00:05:12.023 Metadata size: 8 bytes 00:05:12.023 Vector count 1 00:05:12.023 Module: software 00:05:12.023 Queue depth: 32 00:05:12.023 Allocate depth: 32 00:05:12.023 # threads/core: 1 00:05:12.023 Run time: 1 seconds 00:05:12.023 Verify: No 00:05:12.023 00:05:12.023 Running for 1 seconds... 
00:05:12.023 00:05:12.023 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:12.023 ------------------------------------------------------------------------------------ 00:05:12.023 0,0 1410112/s 5508 MiB/s 0 0 00:05:12.023 ==================================================================================== 00:05:12.023 Total 1410112/s 5508 MiB/s 0 0' 00:05:12.023 19:05:49 -- accel/accel.sh@20 -- # IFS=: 00:05:12.023 19:05:49 -- accel/accel.sh@20 -- # read -r var val 00:05:12.023 19:05:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:12.023 19:05:49 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.pF4X1W -t 1 -w dif_generate 00:05:12.023 [2024-02-14 19:05:49.330838] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:12.023 [2024-02-14 19:05:49.331154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:12.958 EAL: TSC is not safe to use in SMP mode 00:05:12.958 EAL: TSC is not invariant 00:05:12.958 [2024-02-14 19:05:50.104860] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.958 [2024-02-14 19:05:50.218702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.958 [2024-02-14 19:05:50.218807] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:12.958 19:05:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:12.958 19:05:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:12.958 19:05:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.958 19:05:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.958 19:05:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:12.958 19:05:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:12.958 19:05:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:12.958 19:05:50 -- accel/accel.sh@42 -- # jq -r .
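Note: when comparing nightly runs, the per-workload totals can be pulled straight out of a saved console log with a one-liner along these lines (the log file name is assumed; the pattern matches the Total rows printed by accel_perf):
$ grep -oE 'Total [0-9]+/s [0-9]+ MiB/s' freebsd-vg-autotest.log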
00:05:12.958 19:05:50 -- accel/accel.sh@21 -- # val= 00:05:12.958 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.958 19:05:50 -- accel/accel.sh@21 -- # val= 00:05:12.958 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.958 19:05:50 -- accel/accel.sh@21 -- # val=0x1 00:05:12.958 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.958 19:05:50 -- accel/accel.sh@21 -- # val= 00:05:12.958 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.958 19:05:50 -- accel/accel.sh@21 -- # val= 00:05:12.958 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.958 19:05:50 -- accel/accel.sh@21 -- # val=dif_generate 00:05:12.958 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.958 19:05:50 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.958 19:05:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:12.958 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.958 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.958 19:05:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:12.958 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.959 19:05:50 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:12.959 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.959 19:05:50 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:12.959 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.959 19:05:50 -- accel/accel.sh@21 -- # val= 00:05:12.959 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.959 19:05:50 -- accel/accel.sh@21 -- # val=software 00:05:12.959 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.959 19:05:50 -- accel/accel.sh@23 -- # accel_module=software 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.959 19:05:50 -- accel/accel.sh@21 -- # val=32 00:05:12.959 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.959 19:05:50 -- accel/accel.sh@21 -- # val=32 00:05:12.959 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.959 19:05:50 -- accel/accel.sh@21 -- # val=1 00:05:12.959 
19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.959 19:05:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:12.959 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.959 19:05:50 -- accel/accel.sh@21 -- # val=No 00:05:12.959 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.959 19:05:50 -- accel/accel.sh@21 -- # val= 00:05:12.959 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:12.959 19:05:50 -- accel/accel.sh@21 -- # val= 00:05:12.959 19:05:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # IFS=: 00:05:12.959 19:05:50 -- accel/accel.sh@20 -- # read -r var val 00:05:13.927 [2024-02-14 19:05:51.232574] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:14.256 19:05:51 -- accel/accel.sh@21 -- # val= 00:05:14.256 19:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.256 19:05:51 -- accel/accel.sh@20 -- # IFS=: 00:05:14.256 19:05:51 -- accel/accel.sh@20 -- # read -r var val 00:05:14.256 19:05:51 -- accel/accel.sh@21 -- # val= 00:05:14.256 19:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.256 19:05:51 -- accel/accel.sh@20 -- # IFS=: 00:05:14.256 19:05:51 -- accel/accel.sh@20 -- # read -r var val 00:05:14.256 19:05:51 -- accel/accel.sh@21 -- # val= 00:05:14.256 19:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.256 19:05:51 -- accel/accel.sh@20 -- # IFS=: 00:05:14.256 19:05:51 -- accel/accel.sh@20 -- # read -r var val 00:05:14.256 19:05:51 -- accel/accel.sh@21 -- # val= 00:05:14.256 19:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.256 19:05:51 -- accel/accel.sh@20 -- # IFS=: 00:05:14.256 19:05:51 -- accel/accel.sh@20 -- # read -r var val 00:05:14.256 19:05:51 -- accel/accel.sh@21 -- # val= 00:05:14.256 19:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.256 19:05:51 -- accel/accel.sh@20 -- # IFS=: 00:05:14.256 19:05:51 -- accel/accel.sh@20 -- # read -r var val 00:05:14.256 19:05:51 -- accel/accel.sh@21 -- # val= 00:05:14.256 19:05:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.256 19:05:51 -- accel/accel.sh@20 -- # IFS=: 00:05:14.256 19:05:51 -- accel/accel.sh@20 -- # read -r var val 00:05:14.256 19:05:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:14.256 19:05:51 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:05:14.256 19:05:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.256 00:05:14.256 real 0m4.205s 00:05:14.256 user 0m2.546s 00:05:14.256 sys 0m1.677s 00:05:14.256 19:05:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.256 19:05:51 -- common/autotest_common.sh@10 -- # set +x 00:05:14.256 ************************************ 00:05:14.256 END TEST accel_dif_generate 00:05:14.256 ************************************ 00:05:14.256 19:05:51 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:14.256 19:05:51 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 
00:05:14.256 19:05:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:14.256 19:05:51 -- common/autotest_common.sh@10 -- # set +x 00:05:14.256 ************************************ 00:05:14.256 START TEST accel_dif_generate_copy 00:05:14.256 ************************************ 00:05:14.256 19:05:51 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_generate_copy 00:05:14.256 19:05:51 -- accel/accel.sh@16 -- # local accel_opc 00:05:14.256 19:05:51 -- accel/accel.sh@17 -- # local accel_module 00:05:14.256 19:05:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:05:14.256 19:05:51 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Sxpnp2 -t 1 -w dif_generate_copy 00:05:14.256 [2024-02-14 19:05:51.496739] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:14.256 [2024-02-14 19:05:51.497111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:15.193 EAL: TSC is not safe to use in SMP mode 00:05:15.193 EAL: TSC is not invariant 00:05:15.193 [2024-02-14 19:05:52.253580] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.193 [2024-02-14 19:05:52.365803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.193 [2024-02-14 19:05:52.365886] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:15.193 19:05:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.193 19:05:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.193 19:05:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.193 19:05:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.193 19:05:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.193 19:05:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.193 19:05:52 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.193 19:05:52 -- accel/accel.sh@42 -- # jq -r . 00:05:16.131 [2024-02-14 19:05:53.379203] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:16.391 19:05:53 -- accel/accel.sh@18 -- # out=' 00:05:16.391 SPDK Configuration: 00:05:16.391 Core mask: 0x1 00:05:16.391 00:05:16.391 Accel Perf Configuration: 00:05:16.391 Workload Type: dif_generate_copy 00:05:16.391 Vector size: 4096 bytes 00:05:16.391 Transfer size: 4096 bytes 00:05:16.391 Vector count 1 00:05:16.391 Module: software 00:05:16.391 Queue depth: 32 00:05:16.391 Allocate depth: 32 00:05:16.391 # threads/core: 1 00:05:16.391 Run time: 1 seconds 00:05:16.391 Verify: No 00:05:16.391 00:05:16.391 Running for 1 seconds... 
00:05:16.391 00:05:16.391 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:16.391 ------------------------------------------------------------------------------------ 00:05:16.391 0,0 1128992/s 4410 MiB/s 0 0 00:05:16.391 ==================================================================================== 00:05:16.391 Total 1128992/s 4410 MiB/s 0 0' 00:05:16.391 19:05:53 -- accel/accel.sh@20 -- # IFS=: 00:05:16.391 19:05:53 -- accel/accel.sh@20 -- # read -r var val 00:05:16.391 19:05:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:16.391 19:05:53 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.fbG5FK -t 1 -w dif_generate_copy 00:05:16.391 [2024-02-14 19:05:53.585261] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:16.391 [2024-02-14 19:05:53.585453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:16.958 EAL: TSC is not safe to use in SMP mode 00:05:16.958 EAL: TSC is not invariant 00:05:16.958 [2024-02-14 19:05:54.370695] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.218 [2024-02-14 19:05:54.489047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.218 [2024-02-14 19:05:54.489165] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:17.218 19:05:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:17.218 19:05:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:17.218 19:05:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.218 19:05:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.218 19:05:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:17.218 19:05:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:17.218 19:05:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:17.218 19:05:54 -- accel/accel.sh@42 -- # jq -r .
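Note: dif_generate_copy comes in below plain dif_generate above (4410 vs 5508 MiB/s with the software module), which is consistent with the copy variant also writing the generated data out to a separate buffer; a rough ratio for reference:
$ echo "scale=2; 5508 / 4410" | bc   # 1.24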
00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val= 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val= 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val=0x1 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val= 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val= 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val= 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val=software 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@23 -- # accel_module=software 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val=32 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val=32 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val=1 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val=No 
00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val= 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:17.218 19:05:54 -- accel/accel.sh@21 -- # val= 00:05:17.218 19:05:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # IFS=: 00:05:17.218 19:05:54 -- accel/accel.sh@20 -- # read -r var val 00:05:18.155 [2024-02-14 19:05:55.505515] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:18.415 19:05:55 -- accel/accel.sh@21 -- # val= 00:05:18.415 19:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.415 19:05:55 -- accel/accel.sh@20 -- # IFS=: 00:05:18.415 19:05:55 -- accel/accel.sh@20 -- # read -r var val 00:05:18.415 19:05:55 -- accel/accel.sh@21 -- # val= 00:05:18.415 19:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.415 19:05:55 -- accel/accel.sh@20 -- # IFS=: 00:05:18.415 19:05:55 -- accel/accel.sh@20 -- # read -r var val 00:05:18.415 19:05:55 -- accel/accel.sh@21 -- # val= 00:05:18.415 19:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.415 19:05:55 -- accel/accel.sh@20 -- # IFS=: 00:05:18.415 19:05:55 -- accel/accel.sh@20 -- # read -r var val 00:05:18.415 19:05:55 -- accel/accel.sh@21 -- # val= 00:05:18.415 19:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.415 19:05:55 -- accel/accel.sh@20 -- # IFS=: 00:05:18.415 19:05:55 -- accel/accel.sh@20 -- # read -r var val 00:05:18.415 19:05:55 -- accel/accel.sh@21 -- # val= 00:05:18.415 19:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.415 19:05:55 -- accel/accel.sh@20 -- # IFS=: 00:05:18.415 19:05:55 -- accel/accel.sh@20 -- # read -r var val 00:05:18.415 19:05:55 -- accel/accel.sh@21 -- # val= 00:05:18.415 19:05:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.415 19:05:55 -- accel/accel.sh@20 -- # IFS=: 00:05:18.415 19:05:55 -- accel/accel.sh@20 -- # read -r var val 00:05:18.415 19:05:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:18.415 19:05:55 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:05:18.415 19:05:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.415 00:05:18.415 real 0m4.226s 00:05:18.415 user 0m2.528s 00:05:18.415 sys 0m1.711s 00:05:18.415 19:05:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.415 19:05:55 -- common/autotest_common.sh@10 -- # set +x 00:05:18.415 ************************************ 00:05:18.415 END TEST accel_dif_generate_copy 00:05:18.415 ************************************ 00:05:18.415 19:05:55 -- accel/accel.sh@107 -- # [[ y == y ]] 00:05:18.415 19:05:55 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:18.415 19:05:55 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:05:18.415 19:05:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:18.415 19:05:55 -- common/autotest_common.sh@10 -- # set +x 00:05:18.415 ************************************ 00:05:18.415 START TEST accel_comp 00:05:18.415 ************************************ 00:05:18.415 19:05:55 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w compress -l 
/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:18.415 19:05:55 -- accel/accel.sh@16 -- # local accel_opc 00:05:18.415 19:05:55 -- accel/accel.sh@17 -- # local accel_module 00:05:18.415 19:05:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:18.415 19:05:55 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ox9KMP -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:18.415 [2024-02-14 19:05:55.758437] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:18.415 [2024-02-14 19:05:55.758672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:19.353 EAL: TSC is not safe to use in SMP mode 00:05:19.353 EAL: TSC is not invariant 00:05:19.353 [2024-02-14 19:05:56.523055] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.353 [2024-02-14 19:05:56.635156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.354 [2024-02-14 19:05:56.635249] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:19.354 19:05:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.354 19:05:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:19.354 19:05:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.354 19:05:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.354 19:05:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:19.354 19:05:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:19.354 19:05:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:19.354 19:05:56 -- accel/accel.sh@42 -- # jq -r . 00:05:20.291 [2024-02-14 19:05:57.653074] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:20.550 19:05:57 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:20.550 00:05:20.550 SPDK Configuration: 00:05:20.550 Core mask: 0x1 00:05:20.550 00:05:20.550 Accel Perf Configuration: 00:05:20.550 Workload Type: compress 00:05:20.550 Transfer size: 4096 bytes 00:05:20.550 Vector count 1 00:05:20.550 Module: software 00:05:20.550 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:20.550 Queue depth: 32 00:05:20.550 Allocate depth: 32 00:05:20.550 # threads/core: 1 00:05:20.550 Run time: 1 seconds 00:05:20.550 Verify: No 00:05:20.550 00:05:20.550 Running for 1 seconds... 
00:05:20.550 00:05:20.550 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:20.550 ------------------------------------------------------------------------------------ 00:05:20.550 0,0 58592/s 228 MiB/s 0 0 00:05:20.550 ==================================================================================== 00:05:20.550 Total 58592/s 228 MiB/s 0 0' 00:05:20.550 19:05:57 -- accel/accel.sh@20 -- # IFS=: 00:05:20.550 19:05:57 -- accel/accel.sh@20 -- # read -r var val 00:05:20.550 19:05:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:20.550 19:05:57 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.kEwsM4 -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:20.550 [2024-02-14 19:05:57.863597] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:20.550 [2024-02-14 19:05:57.863774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:21.488 EAL: TSC is not safe to use in SMP mode 00:05:21.488 EAL: TSC is not invariant 00:05:21.488 [2024-02-14 19:05:58.594371] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.488 [2024-02-14 19:05:58.705213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.488 [2024-02-14 19:05:58.705274] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:21.488 19:05:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.488 19:05:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:21.488 19:05:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.488 19:05:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.488 19:05:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:21.488 19:05:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:21.488 19:05:58 -- accel/accel.sh@41 -- # local IFS=, 00:05:21.488 19:05:58 -- accel/accel.sh@42 -- # jq -r . 
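The long runs of '# IFS=:', '# read -r var val' and 'case "$var" in' steps above and below are one parser loop in accel.sh: the values being read (0x1, compress, '4096 bytes', software, 32, '1 seconds', ...) mirror the fields of the tool's own "SPDK Configuration" report, so the loop appears to re-read that report, split each line on ':' into var/val, and pick out the fields the test asserts on (the opcode and the module). A rough, hedged reduction of the pattern, not the verbatim SPDK code:

out='Workload Type: compress
Module: software'                                  # stand-in for the captured accel_perf report
while IFS=: read -r var val; do
    case "$var" in
        *"Workload Type"*) accel_opc=${val# } ;;    # -> compress
        *"Module"*)        accel_module=${val# } ;; # -> software
    esac
done <<< "$out"
echo "$accel_opc via $accel_module"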
00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val=0x1 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val=compress 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@24 -- # accel_opc=compress 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val=software 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@23 -- # accel_module=software 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val=32 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val=32 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val=1 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val='1 
seconds' 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val=No 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:21.488 19:05:58 -- accel/accel.sh@21 -- # val= 00:05:21.488 19:05:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # IFS=: 00:05:21.488 19:05:58 -- accel/accel.sh@20 -- # read -r var val 00:05:22.424 [2024-02-14 19:05:59.719246] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:22.683 19:05:59 -- accel/accel.sh@21 -- # val= 00:05:22.683 19:05:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.683 19:05:59 -- accel/accel.sh@20 -- # IFS=: 00:05:22.683 19:05:59 -- accel/accel.sh@20 -- # read -r var val 00:05:22.683 19:05:59 -- accel/accel.sh@21 -- # val= 00:05:22.683 19:05:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.683 19:05:59 -- accel/accel.sh@20 -- # IFS=: 00:05:22.683 19:05:59 -- accel/accel.sh@20 -- # read -r var val 00:05:22.683 19:05:59 -- accel/accel.sh@21 -- # val= 00:05:22.683 19:05:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.683 19:05:59 -- accel/accel.sh@20 -- # IFS=: 00:05:22.683 19:05:59 -- accel/accel.sh@20 -- # read -r var val 00:05:22.683 19:05:59 -- accel/accel.sh@21 -- # val= 00:05:22.683 19:05:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.683 19:05:59 -- accel/accel.sh@20 -- # IFS=: 00:05:22.683 19:05:59 -- accel/accel.sh@20 -- # read -r var val 00:05:22.683 19:05:59 -- accel/accel.sh@21 -- # val= 00:05:22.683 19:05:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.683 19:05:59 -- accel/accel.sh@20 -- # IFS=: 00:05:22.683 19:05:59 -- accel/accel.sh@20 -- # read -r var val 00:05:22.683 19:05:59 -- accel/accel.sh@21 -- # val= 00:05:22.683 19:05:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.683 19:05:59 -- accel/accel.sh@20 -- # IFS=: 00:05:22.683 19:05:59 -- accel/accel.sh@20 -- # read -r var val 00:05:22.683 19:05:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:22.683 19:05:59 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:05:22.683 19:05:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.683 00:05:22.683 real 0m4.166s 00:05:22.683 user 0m2.528s 00:05:22.683 sys 0m1.645s 00:05:22.683 19:05:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.683 ************************************ 00:05:22.683 19:05:59 -- common/autotest_common.sh@10 -- # set +x 00:05:22.683 END TEST accel_comp 00:05:22.683 ************************************ 00:05:22.683 19:05:59 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:22.683 19:05:59 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:05:22.683 19:05:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:22.683 19:05:59 -- common/autotest_common.sh@10 -- # set +x 00:05:22.683 ************************************ 00:05:22.683 
START TEST accel_decomp 00:05:22.683 ************************************ 00:05:22.683 19:05:59 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:22.683 19:05:59 -- accel/accel.sh@16 -- # local accel_opc 00:05:22.683 19:05:59 -- accel/accel.sh@17 -- # local accel_module 00:05:22.683 19:05:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:22.683 19:05:59 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.j3ywDo -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:22.683 [2024-02-14 19:05:59.967423] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:22.683 [2024-02-14 19:05:59.967606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:23.618 EAL: TSC is not safe to use in SMP mode 00:05:23.618 EAL: TSC is not invariant 00:05:23.618 [2024-02-14 19:06:00.707398] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.618 [2024-02-14 19:06:00.817268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.618 [2024-02-14 19:06:00.817327] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:23.618 19:06:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.618 19:06:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:23.618 19:06:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.618 19:06:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.618 19:06:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:23.618 19:06:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:23.618 19:06:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:23.618 19:06:00 -- accel/accel.sh@42 -- # jq -r . 00:05:24.551 [2024-02-14 19:06:01.835445] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:24.810 19:06:02 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:24.810 00:05:24.810 SPDK Configuration: 00:05:24.810 Core mask: 0x1 00:05:24.810 00:05:24.810 Accel Perf Configuration: 00:05:24.810 Workload Type: decompress 00:05:24.810 Transfer size: 4096 bytes 00:05:24.810 Vector count 1 00:05:24.810 Module: software 00:05:24.810 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:24.810 Queue depth: 32 00:05:24.810 Allocate depth: 32 00:05:24.810 # threads/core: 1 00:05:24.810 Run time: 1 seconds 00:05:24.810 Verify: Yes 00:05:24.810 00:05:24.810 Running for 1 seconds... 
00:05:24.810 00:05:24.810 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:24.810 ------------------------------------------------------------------------------------ 00:05:24.810 0,0 91552/s 357 MiB/s 0 0 00:05:24.810 ==================================================================================== 00:05:24.810 Total 91552/s 357 MiB/s 0 0' 00:05:24.810 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:24.810 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:24.810 19:06:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:24.810 19:06:02 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.4b7c93 -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:24.810 [2024-02-14 19:06:02.038021] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:24.810 [2024-02-14 19:06:02.038180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:25.376 EAL: TSC is not safe to use in SMP mode 00:05:25.376 EAL: TSC is not invariant 00:05:25.376 [2024-02-14 19:06:02.754546] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.634 [2024-02-14 19:06:02.861022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.634 [2024-02-14 19:06:02.861073] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:25.634 19:06:02 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.634 19:06:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.634 19:06:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.634 19:06:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.634 19:06:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.634 19:06:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.634 19:06:02 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.634 19:06:02 -- accel/accel.sh@42 -- # jq -r . 
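For reference, the Bandwidth column in these result tables is the transfer rate multiplied by the transfer size from the configuration block. A quick sanity check with the figures from the decompress table above (variable names here are ours):

xfers_per_sec=91552      # Transfers column above
xfer_size=4096           # "Transfer size: 4096 bytes"
echo "$(( xfers_per_sec * xfer_size / 1024 / 1024 )) MiB/s"    # prints "357 MiB/s", matching the table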
00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val=0x1 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val=decompress 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val=software 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@23 -- # accel_module=software 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val=32 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val=32 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val=1 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val='1 
seconds' 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val=Yes 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:25.634 19:06:02 -- accel/accel.sh@21 -- # val= 00:05:25.634 19:06:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # IFS=: 00:05:25.634 19:06:02 -- accel/accel.sh@20 -- # read -r var val 00:05:26.569 [2024-02-14 19:06:03.880265] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:26.827 19:06:04 -- accel/accel.sh@21 -- # val= 00:05:26.827 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.827 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:26.827 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:26.827 19:06:04 -- accel/accel.sh@21 -- # val= 00:05:26.827 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.827 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:26.827 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:26.827 19:06:04 -- accel/accel.sh@21 -- # val= 00:05:26.827 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.827 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:26.827 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:26.827 19:06:04 -- accel/accel.sh@21 -- # val= 00:05:26.827 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.827 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:26.827 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:26.827 19:06:04 -- accel/accel.sh@21 -- # val= 00:05:26.827 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.827 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:26.827 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:26.827 19:06:04 -- accel/accel.sh@21 -- # val= 00:05:26.827 19:06:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.827 19:06:04 -- accel/accel.sh@20 -- # IFS=: 00:05:26.827 19:06:04 -- accel/accel.sh@20 -- # read -r var val 00:05:26.827 19:06:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:26.827 19:06:04 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:26.827 19:06:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.827 00:05:26.827 real 0m4.123s 00:05:26.827 user 0m2.570s 00:05:26.827 sys 0m1.562s 00:05:26.827 19:06:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.827 19:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:26.827 ************************************ 00:05:26.827 END TEST accel_decomp 00:05:26.827 ************************************ 00:05:26.827 19:06:04 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:26.827 19:06:04 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:05:26.827 19:06:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:26.827 19:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:26.827 ************************************ 
00:05:26.827 START TEST accel_decmop_full 00:05:26.827 ************************************ 00:05:26.827 19:06:04 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:26.827 19:06:04 -- accel/accel.sh@16 -- # local accel_opc 00:05:26.827 19:06:04 -- accel/accel.sh@17 -- # local accel_module 00:05:26.827 19:06:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:26.827 19:06:04 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.bJVXBy -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:26.827 [2024-02-14 19:06:04.129028] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:26.827 [2024-02-14 19:06:04.129343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:27.761 EAL: TSC is not safe to use in SMP mode 00:05:27.761 EAL: TSC is not invariant 00:05:27.761 [2024-02-14 19:06:04.865281] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.761 [2024-02-14 19:06:04.976129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.761 [2024-02-14 19:06:04.976184] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:27.761 19:06:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.761 19:06:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:27.761 19:06:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.762 19:06:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.762 19:06:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:27.762 19:06:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:27.762 19:06:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:27.762 19:06:04 -- accel/accel.sh@42 -- # jq -r . 00:05:28.749 [2024-02-14 19:06:06.003551] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:29.007 19:06:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:29.007 00:05:29.007 SPDK Configuration: 00:05:29.007 Core mask: 0x1 00:05:29.007 00:05:29.007 Accel Perf Configuration: 00:05:29.007 Workload Type: decompress 00:05:29.007 Transfer size: 111250 bytes 00:05:29.007 Vector count 1 00:05:29.007 Module: software 00:05:29.007 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:29.007 Queue depth: 32 00:05:29.007 Allocate depth: 32 00:05:29.007 # threads/core: 1 00:05:29.007 Run time: 1 seconds 00:05:29.007 Verify: Yes 00:05:29.007 00:05:29.007 Running for 1 seconds... 
00:05:29.007 00:05:29.007 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:29.007 ------------------------------------------------------------------------------------ 00:05:29.007 0,0 4864/s 516 MiB/s 0 0 00:05:29.007 ==================================================================================== 00:05:29.007 Total 4864/s 516 MiB/s 0 0' 00:05:29.007 19:06:06 -- accel/accel.sh@20 -- # IFS=: 00:05:29.007 19:06:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:29.007 19:06:06 -- accel/accel.sh@20 -- # read -r var val 00:05:29.007 19:06:06 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.rxDRDR -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:29.007 [2024-02-14 19:06:06.208898] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:29.007 [2024-02-14 19:06:06.209144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:29.573 EAL: TSC is not safe to use in SMP mode 00:05:29.573 EAL: TSC is not invariant 00:05:29.573 [2024-02-14 19:06:06.935702] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.832 [2024-02-14 19:06:07.044839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.832 [2024-02-14 19:06:07.044914] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:29.832 19:06:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.832 19:06:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:29.832 19:06:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.832 19:06:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.832 19:06:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:29.832 19:06:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:29.832 19:06:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:29.832 19:06:07 -- accel/accel.sh@42 -- # jq -r . 
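Every case in this file follows the same shape: run_test wraps accel_test, which writes an accel JSON config to a /tmp/sh-np.* file and hands it, together with the workload flags, to the accel_perf example binary. A minimal illustrative sketch of that flow (the function bodies and SPDK_DIR are assumptions for the sketch, not the exact SPDK helpers):

SPDK_DIR=${SPDK_DIR:-/usr/home/vagrant/spdk_repo/spdk}   # assumed checkout location, matching the paths above

build_accel_config() {
    # the real helper assembles optional driver/module entries; with none configured
    # it effectively emits an empty config (note the trailing "jq -r ." in the trace)
    printf '{}\n' | jq -r .
}

accel_test() {
    local cfg
    cfg=$(mktemp /tmp/sh-np.XXXXXX)
    build_accel_config > "$cfg"
    "$SPDK_DIR/build/examples/accel_perf" -c "$cfg" "$@"
}

# same shape as the invocation traced above
accel_test -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0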
00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val= 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val= 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val= 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val=0x1 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val= 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val= 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val=decompress 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val= 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val=software 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@23 -- # accel_module=software 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val=32 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val=32 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val=1 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # 
val='1 seconds' 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val=Yes 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val= 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:29.832 19:06:07 -- accel/accel.sh@21 -- # val= 00:05:29.832 19:06:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # IFS=: 00:05:29.832 19:06:07 -- accel/accel.sh@20 -- # read -r var val 00:05:30.768 [2024-02-14 19:06:08.072534] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:31.027 19:06:08 -- accel/accel.sh@21 -- # val= 00:05:31.027 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.027 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:31.027 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:31.027 19:06:08 -- accel/accel.sh@21 -- # val= 00:05:31.027 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.027 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:31.027 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:31.027 19:06:08 -- accel/accel.sh@21 -- # val= 00:05:31.027 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.027 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:31.027 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:31.027 19:06:08 -- accel/accel.sh@21 -- # val= 00:05:31.027 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.027 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:31.027 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:31.027 19:06:08 -- accel/accel.sh@21 -- # val= 00:05:31.028 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.028 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:31.028 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:31.028 19:06:08 -- accel/accel.sh@21 -- # val= 00:05:31.028 19:06:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.028 19:06:08 -- accel/accel.sh@20 -- # IFS=: 00:05:31.028 19:06:08 -- accel/accel.sh@20 -- # read -r var val 00:05:31.028 19:06:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:31.028 19:06:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:31.028 19:06:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.028 00:05:31.028 real 0m4.156s 00:05:31.028 user 0m2.570s 00:05:31.028 sys 0m1.597s 00:05:31.028 19:06:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.028 19:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:31.028 ************************************ 00:05:31.028 END TEST accel_decmop_full 00:05:31.028 ************************************ 00:05:31.028 19:06:08 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:31.028 19:06:08 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:05:31.028 19:06:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:31.028 19:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:31.028 
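The next case passes -m 0xf, so accel_perf starts with a four-core mask: EAL reports four cores, a reactor comes up on each of cores 0 through 3, and the result table gains one row per core. A small hedged sketch of how such a contiguous mask is formed (the snippet is ours, not part of accel.sh):

n=4                                        # number of cores to use
mask=$(printf '0x%x' $(( (1 << n) - 1 )))  # 4 cores -> 0xf
echo "$mask"
# accel_perf is then launched as: accel_perf -m "$mask" -t 1 -w decompress -l .../bib -y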
************************************ 00:05:31.028 START TEST accel_decomp_mcore 00:05:31.028 ************************************ 00:05:31.028 19:06:08 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:31.028 19:06:08 -- accel/accel.sh@16 -- # local accel_opc 00:05:31.028 19:06:08 -- accel/accel.sh@17 -- # local accel_module 00:05:31.028 19:06:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:31.028 19:06:08 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.PG8mTT -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:31.028 [2024-02-14 19:06:08.322929] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:31.028 [2024-02-14 19:06:08.323119] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:31.966 EAL: TSC is not safe to use in SMP mode 00:05:31.966 EAL: TSC is not invariant 00:05:31.966 [2024-02-14 19:06:09.068063] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.966 [2024-02-14 19:06:09.179647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.966 [2024-02-14 19:06:09.179991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.966 [2024-02-14 19:06:09.179808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.966 [2024-02-14 19:06:09.180037] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:31.966 [2024-02-14 19:06:09.179895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.966 19:06:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.966 19:06:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:31.966 19:06:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.966 19:06:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.966 19:06:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:31.966 19:06:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:31.966 19:06:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:31.966 19:06:09 -- accel/accel.sh@42 -- # jq -r . 00:05:32.901 [2024-02-14 19:06:10.199368] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:33.159 19:06:10 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:33.159 00:05:33.159 SPDK Configuration: 00:05:33.159 Core mask: 0xf 00:05:33.159 00:05:33.159 Accel Perf Configuration: 00:05:33.159 Workload Type: decompress 00:05:33.159 Transfer size: 4096 bytes 00:05:33.159 Vector count 1 00:05:33.159 Module: software 00:05:33.159 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:33.159 Queue depth: 32 00:05:33.159 Allocate depth: 32 00:05:33.159 # threads/core: 1 00:05:33.159 Run time: 1 seconds 00:05:33.159 Verify: Yes 00:05:33.159 00:05:33.159 Running for 1 seconds... 
00:05:33.159 00:05:33.159 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:33.159 ------------------------------------------------------------------------------------ 00:05:33.159 0,0 87200/s 340 MiB/s 0 0 00:05:33.159 3,0 89568/s 349 MiB/s 0 0 00:05:33.159 2,0 88704/s 346 MiB/s 0 0 00:05:33.159 1,0 83264/s 325 MiB/s 0 0 00:05:33.159 ==================================================================================== 00:05:33.159 Total 348736/s 1362 MiB/s 0 0' 00:05:33.159 19:06:10 -- accel/accel.sh@20 -- # IFS=: 00:05:33.159 19:06:10 -- accel/accel.sh@20 -- # read -r var val 00:05:33.160 19:06:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:33.160 19:06:10 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.TrbQHp -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:33.160 [2024-02-14 19:06:10.404755] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:33.160 [2024-02-14 19:06:10.404961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:33.727 EAL: TSC is not safe to use in SMP mode 00:05:33.727 EAL: TSC is not invariant 00:05:33.727 [2024-02-14 19:06:11.135469] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.986 [2024-02-14 19:06:11.246971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.986 [2024-02-14 19:06:11.247227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.986 [2024-02-14 19:06:11.247088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.986 [2024-02-14 19:06:11.247221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.986 [2024-02-14 19:06:11.247459] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:33.986 19:06:11 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.986 19:06:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.986 19:06:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.986 19:06:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.986 19:06:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.986 19:06:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.986 19:06:11 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.986 19:06:11 -- accel/accel.sh@42 -- # jq -r . 
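With four cores the Total row should be the per-core transfer rates summed, and its bandwidth recomputed from that sum. A small check against a table in the format printed above (assuming the Core,Thread rows were saved to results.txt without the log prefixes):

awk -v size=4096 '
    /^[0-9]+,[0-9]+/ { sub("/s", "", $2); total += $2 }   # per-core rows: "core,thread rate/s ..."
    END { printf "Total %d/s %d MiB/s\n", total, total * size / (1024 * 1024) }
' results.txt
# with the four rows above this prints: Total 348736/s 1362 MiB/s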
00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val=0xf 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val=decompress 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val=software 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@23 -- # accel_module=software 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val=32 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val=32 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val=1 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val='1 
seconds' 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val=Yes 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:33.986 19:06:11 -- accel/accel.sh@21 -- # val= 00:05:33.986 19:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # IFS=: 00:05:33.986 19:06:11 -- accel/accel.sh@20 -- # read -r var val 00:05:34.922 [2024-02-14 19:06:12.261086] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:35.182 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.182 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.182 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.182 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.182 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.182 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.182 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.182 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.182 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.182 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.182 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.182 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.182 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.182 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.182 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.182 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.182 19:06:12 -- accel/accel.sh@21 -- # val= 00:05:35.182 19:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # IFS=: 00:05:35.182 19:06:12 -- accel/accel.sh@20 -- # read -r var val 00:05:35.182 19:06:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:35.182 19:06:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:35.182 19:06:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.182 00:05:35.182 real 0m4.152s 00:05:35.182 user 0m8.978s 00:05:35.182 sys 0m1.614s 
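Unlike the single-core cases earlier, where user time tracks wall-clock time, this 0xf run shows user time well above real time, which is what several reactors spinning in parallel across the two one-second measurement windows should look like. A rough way to estimate the average parallelism from such a timing line (values copied from above):

real=4.152 user=8.978 sys=1.614
echo "scale=2; ($user + $sys) / $real" | bc    # ~2.55 CPUs busy on average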
00:05:35.182 19:06:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.182 19:06:12 -- common/autotest_common.sh@10 -- # set +x 00:05:35.182 ************************************ 00:05:35.182 END TEST accel_decomp_mcore 00:05:35.182 ************************************ 00:05:35.182 19:06:12 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:35.182 19:06:12 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:05:35.182 19:06:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:35.182 19:06:12 -- common/autotest_common.sh@10 -- # set +x 00:05:35.182 ************************************ 00:05:35.182 START TEST accel_decomp_full_mcore 00:05:35.182 ************************************ 00:05:35.182 19:06:12 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:35.182 19:06:12 -- accel/accel.sh@16 -- # local accel_opc 00:05:35.182 19:06:12 -- accel/accel.sh@17 -- # local accel_module 00:05:35.182 19:06:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:35.182 19:06:12 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ath4Ub -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:35.182 [2024-02-14 19:06:12.515890] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:35.182 [2024-02-14 19:06:12.516084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:36.120 EAL: TSC is not safe to use in SMP mode 00:05:36.120 EAL: TSC is not invariant 00:05:36.120 [2024-02-14 19:06:13.257794] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.120 [2024-02-14 19:06:13.369505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.120 [2024-02-14 19:06:13.369619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.120 [2024-02-14 19:06:13.369781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.120 [2024-02-14 19:06:13.369777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.120 [2024-02-14 19:06:13.369985] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:36.120 19:06:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.120 19:06:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.120 19:06:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.120 19:06:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.120 19:06:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.120 19:06:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.120 19:06:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.120 19:06:13 -- accel/accel.sh@42 -- # jq -r . 00:05:37.093 [2024-02-14 19:06:14.398925] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:37.353 19:06:14 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:37.353 00:05:37.353 SPDK Configuration: 00:05:37.353 Core mask: 0xf 00:05:37.353 00:05:37.353 Accel Perf Configuration: 00:05:37.353 Workload Type: decompress 00:05:37.353 Transfer size: 111250 bytes 00:05:37.353 Vector count 1 00:05:37.353 Module: software 00:05:37.353 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:37.353 Queue depth: 32 00:05:37.353 Allocate depth: 32 00:05:37.353 # threads/core: 1 00:05:37.353 Run time: 1 seconds 00:05:37.353 Verify: Yes 00:05:37.353 00:05:37.353 Running for 1 seconds... 00:05:37.353 00:05:37.353 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:37.353 ------------------------------------------------------------------------------------ 00:05:37.353 0,0 4704/s 499 MiB/s 0 0 00:05:37.353 3,0 5088/s 539 MiB/s 0 0 00:05:37.353 2,0 5056/s 536 MiB/s 0 0 00:05:37.353 1,0 5056/s 536 MiB/s 0 0 00:05:37.353 ==================================================================================== 00:05:37.353 Total 19904/s 2111 MiB/s 0 0' 00:05:37.353 19:06:14 -- accel/accel.sh@20 -- # IFS=: 00:05:37.353 19:06:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:37.353 19:06:14 -- accel/accel.sh@20 -- # read -r var val 00:05:37.353 19:06:14 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.648VGO -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:37.353 [2024-02-14 19:06:14.607733] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:37.353 [2024-02-14 19:06:14.607940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:38.289 EAL: TSC is not safe to use in SMP mode 00:05:38.289 EAL: TSC is not invariant 00:05:38.289 [2024-02-14 19:06:15.359965] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:38.289 [2024-02-14 19:06:15.471494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.289 [2024-02-14 19:06:15.471716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.289 [2024-02-14 19:06:15.471606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.289 [2024-02-14 19:06:15.471710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.289 [2024-02-14 19:06:15.471902] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:38.289 19:06:15 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.289 19:06:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:38.289 19:06:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.289 19:06:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.289 19:06:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:38.289 19:06:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:38.289 19:06:15 -- accel/accel.sh@41 -- # local IFS=, 00:05:38.289 19:06:15 -- accel/accel.sh@42 -- # jq -r . 
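The -o 0 variants trade operation rate for payload: each transfer moves a 111250-byte chunk instead of 4 KiB, so the op rate drops sharply while the aggregate bandwidth rises. Comparing the Total rows of the two four-core tables above (integer MiB/s, as the tool prints it):

echo $(( 348736 * 4096   / 1024 / 1024 ))    # 4096-byte transfers  -> 1362 (MiB/s)
echo $(( 19904  * 111250 / 1024 / 1024 ))    # 111250-byte chunks   -> 2111 (MiB/s)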
00:05:38.289 19:06:15 -- accel/accel.sh@21 -- # val= 00:05:38.289 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.289 19:06:15 -- accel/accel.sh@21 -- # val= 00:05:38.289 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.289 19:06:15 -- accel/accel.sh@21 -- # val= 00:05:38.289 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.289 19:06:15 -- accel/accel.sh@21 -- # val=0xf 00:05:38.289 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.289 19:06:15 -- accel/accel.sh@21 -- # val= 00:05:38.289 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.289 19:06:15 -- accel/accel.sh@21 -- # val= 00:05:38.289 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.289 19:06:15 -- accel/accel.sh@21 -- # val=decompress 00:05:38.289 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.289 19:06:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.289 19:06:15 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:38.289 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.289 19:06:15 -- accel/accel.sh@21 -- # val= 00:05:38.289 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.289 19:06:15 -- accel/accel.sh@21 -- # val=software 00:05:38.289 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.289 19:06:15 -- accel/accel.sh@23 -- # accel_module=software 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.289 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.289 19:06:15 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:38.290 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.290 19:06:15 -- accel/accel.sh@21 -- # val=32 00:05:38.290 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.290 19:06:15 -- accel/accel.sh@21 -- # val=32 00:05:38.290 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.290 19:06:15 -- accel/accel.sh@21 -- # val=1 00:05:38.290 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.290 19:06:15 -- accel/accel.sh@21 -- # 
val='1 seconds' 00:05:38.290 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.290 19:06:15 -- accel/accel.sh@21 -- # val=Yes 00:05:38.290 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.290 19:06:15 -- accel/accel.sh@21 -- # val= 00:05:38.290 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:38.290 19:06:15 -- accel/accel.sh@21 -- # val= 00:05:38.290 19:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # IFS=: 00:05:38.290 19:06:15 -- accel/accel.sh@20 -- # read -r var val 00:05:39.227 [2024-02-14 19:06:16.497892] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:39.487 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:39.487 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:39.487 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:39.487 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:39.487 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:39.487 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:39.487 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:39.487 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:39.487 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:39.487 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:39.487 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:39.487 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:39.487 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:39.487 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:39.487 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:39.487 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:39.487 19:06:16 -- accel/accel.sh@21 -- # val= 00:05:39.487 19:06:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # IFS=: 00:05:39.487 19:06:16 -- accel/accel.sh@20 -- # read -r var val 00:05:39.487 19:06:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:39.487 19:06:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:39.487 19:06:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.487 00:05:39.487 real 0m4.193s 00:05:39.487 user 0m9.066s 00:05:39.487 sys 0m1.627s 
00:05:39.487 19:06:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.487 19:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:39.487 ************************************ 00:05:39.487 END TEST accel_decomp_full_mcore 00:05:39.487 ************************************ 00:05:39.487 19:06:16 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:39.487 19:06:16 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:05:39.487 19:06:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:39.487 19:06:16 -- common/autotest_common.sh@10 -- # set +x 00:05:39.487 ************************************ 00:05:39.487 START TEST accel_decomp_mthread 00:05:39.487 ************************************ 00:05:39.487 19:06:16 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:39.487 19:06:16 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.487 19:06:16 -- accel/accel.sh@17 -- # local accel_module 00:05:39.488 19:06:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:39.488 19:06:16 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.u9Zt9l -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:39.488 [2024-02-14 19:06:16.741327] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:39.488 [2024-02-14 19:06:16.741516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:40.055 EAL: TSC is not safe to use in SMP mode 00:05:40.055 EAL: TSC is not invariant 00:05:40.315 [2024-02-14 19:06:17.477966] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.315 [2024-02-14 19:06:17.590360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.315 [2024-02-14 19:06:17.590455] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:40.315 19:06:17 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.315 19:06:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.315 19:06:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.315 19:06:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.315 19:06:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.315 19:06:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.315 19:06:17 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.315 19:06:17 -- accel/accel.sh@42 -- # jq -r . 00:05:41.251 [2024-02-14 19:06:18.610984] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:41.510 19:06:18 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:41.510 00:05:41.510 SPDK Configuration: 00:05:41.510 Core mask: 0x1 00:05:41.510 00:05:41.510 Accel Perf Configuration: 00:05:41.510 Workload Type: decompress 00:05:41.510 Transfer size: 4096 bytes 00:05:41.510 Vector count 1 00:05:41.510 Module: software 00:05:41.510 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:41.510 Queue depth: 32 00:05:41.510 Allocate depth: 32 00:05:41.510 # threads/core: 2 00:05:41.510 Run time: 1 seconds 00:05:41.510 Verify: Yes 00:05:41.510 00:05:41.510 Running for 1 seconds... 00:05:41.510 00:05:41.510 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:41.510 ------------------------------------------------------------------------------------ 00:05:41.510 0,1 46752/s 86 MiB/s 0 0 00:05:41.510 0,0 46656/s 85 MiB/s 0 0 00:05:41.510 ==================================================================================== 00:05:41.510 Total 93408/s 364 MiB/s 0 0' 00:05:41.510 19:06:18 -- accel/accel.sh@20 -- # IFS=: 00:05:41.510 19:06:18 -- accel/accel.sh@20 -- # read -r var val 00:05:41.510 19:06:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:41.510 19:06:18 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.hBMBJm -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:41.510 [2024-02-14 19:06:18.818193] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:41.510 [2024-02-14 19:06:18.818592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:42.447 EAL: TSC is not safe to use in SMP mode 00:05:42.447 EAL: TSC is not invariant 00:05:42.447 [2024-02-14 19:06:19.563160] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.447 [2024-02-14 19:06:19.673704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.447 [2024-02-14 19:06:19.673787] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:42.447 19:06:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.447 19:06:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.447 19:06:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.447 19:06:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.447 19:06:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.447 19:06:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.447 19:06:19 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.447 19:06:19 -- accel/accel.sh@42 -- # jq -r . 
00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val=0x1 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val=decompress 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val=software 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@23 -- # accel_module=software 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val=32 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val=32 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val=2 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val='1 
seconds' 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val=Yes 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:42.447 19:06:19 -- accel/accel.sh@21 -- # val= 00:05:42.447 19:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # IFS=: 00:05:42.447 19:06:19 -- accel/accel.sh@20 -- # read -r var val 00:05:43.384 [2024-02-14 19:06:20.693850] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:43.644 19:06:20 -- accel/accel.sh@21 -- # val= 00:05:43.644 19:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:43.644 19:06:20 -- accel/accel.sh@21 -- # val= 00:05:43.644 19:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:43.644 19:06:20 -- accel/accel.sh@21 -- # val= 00:05:43.644 19:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:43.644 19:06:20 -- accel/accel.sh@21 -- # val= 00:05:43.644 19:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:43.644 19:06:20 -- accel/accel.sh@21 -- # val= 00:05:43.644 19:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:43.644 19:06:20 -- accel/accel.sh@21 -- # val= 00:05:43.644 19:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:43.644 19:06:20 -- accel/accel.sh@21 -- # val= 00:05:43.644 19:06:20 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # IFS=: 00:05:43.644 19:06:20 -- accel/accel.sh@20 -- # read -r var val 00:05:43.644 19:06:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:43.644 19:06:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:43.644 19:06:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.644 00:05:43.644 real 0m4.159s 00:05:43.644 user 0m2.552s 00:05:43.644 sys 0m1.615s 00:05:43.644 19:06:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.644 19:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:43.644 ************************************ 00:05:43.644 END TEST accel_decomp_mthread 00:05:43.644 ************************************ 00:05:43.644 19:06:20 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:43.644 
19:06:20 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:05:43.644 19:06:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:43.644 19:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:43.644 ************************************ 00:05:43.644 START TEST accel_deomp_full_mthread 00:05:43.644 ************************************ 00:05:43.644 19:06:20 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:43.644 19:06:20 -- accel/accel.sh@16 -- # local accel_opc 00:05:43.644 19:06:20 -- accel/accel.sh@17 -- # local accel_module 00:05:43.644 19:06:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:43.644 19:06:20 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.CtpDMe -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:43.644 [2024-02-14 19:06:20.939958] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:43.644 [2024-02-14 19:06:20.940147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:44.581 EAL: TSC is not safe to use in SMP mode 00:05:44.581 EAL: TSC is not invariant 00:05:44.581 [2024-02-14 19:06:21.685751] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.581 [2024-02-14 19:06:21.796993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.581 [2024-02-14 19:06:21.797055] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:44.581 19:06:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.581 19:06:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.581 19:06:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.581 19:06:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.581 19:06:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.581 19:06:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.581 19:06:21 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.581 19:06:21 -- accel/accel.sh@42 -- # jq -r . 00:05:45.566 [2024-02-14 19:06:22.839951] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:45.825 19:06:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:45.825 00:05:45.825 SPDK Configuration: 00:05:45.825 Core mask: 0x1 00:05:45.825 00:05:45.825 Accel Perf Configuration: 00:05:45.825 Workload Type: decompress 00:05:45.825 Transfer size: 111250 bytes 00:05:45.825 Vector count 1 00:05:45.825 Module: software 00:05:45.825 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:45.825 Queue depth: 32 00:05:45.825 Allocate depth: 32 00:05:45.825 # threads/core: 2 00:05:45.825 Run time: 1 seconds 00:05:45.825 Verify: Yes 00:05:45.825 00:05:45.825 Running for 1 seconds... 
00:05:45.825 00:05:45.825 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:45.825 ------------------------------------------------------------------------------------ 00:05:45.825 0,1 2496/s 103 MiB/s 0 0 00:05:45.825 0,0 2464/s 101 MiB/s 0 0 00:05:45.825 ==================================================================================== 00:05:45.825 Total 4960/s 526 MiB/s 0 0' 00:05:45.825 19:06:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:45.825 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:45.825 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:45.825 19:06:23 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.bXNczi -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:45.825 [2024-02-14 19:06:23.045733] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:45.825 [2024-02-14 19:06:23.045935] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:46.392 EAL: TSC is not safe to use in SMP mode 00:05:46.392 EAL: TSC is not invariant 00:05:46.651 [2024-02-14 19:06:23.809487] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.651 [2024-02-14 19:06:23.921889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.651 [2024-02-14 19:06:23.921950] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:46.651 19:06:23 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.651 19:06:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.651 19:06:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.651 19:06:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.651 19:06:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.651 19:06:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.651 19:06:23 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.651 19:06:23 -- accel/accel.sh@42 -- # jq -r . 
00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val= 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val= 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val= 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val=0x1 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val= 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val= 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val=decompress 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val= 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val=software 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@23 -- # accel_module=software 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val=32 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val=32 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val=2 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # 
val='1 seconds' 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val=Yes 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.651 19:06:23 -- accel/accel.sh@21 -- # val= 00:05:46.651 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.651 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.652 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:46.652 19:06:23 -- accel/accel.sh@21 -- # val= 00:05:46.652 19:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.652 19:06:23 -- accel/accel.sh@20 -- # IFS=: 00:05:46.652 19:06:23 -- accel/accel.sh@20 -- # read -r var val 00:05:47.587 [2024-02-14 19:06:24.967492] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:47.847 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.847 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.847 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.847 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.847 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.847 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.847 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.847 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.847 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.847 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.847 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.847 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.847 19:06:25 -- accel/accel.sh@21 -- # val= 00:05:47.847 19:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # IFS=: 00:05:47.847 19:06:25 -- accel/accel.sh@20 -- # read -r var val 00:05:47.847 19:06:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:47.847 19:06:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:47.847 19:06:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.847 00:05:47.847 real 0m4.240s 00:05:47.847 user 0m2.636s 00:05:47.847 sys 0m1.623s 00:05:47.847 19:06:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.847 19:06:25 -- common/autotest_common.sh@10 -- # set +x 00:05:47.847 ************************************ 00:05:47.847 END TEST accel_deomp_full_mthread 00:05:47.847 ************************************ 00:05:47.847 19:06:25 -- accel/accel.sh@116 -- # [[ n == y ]] 00:05:47.847 19:06:25 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests 
/usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.Woi19M 00:05:47.847 19:06:25 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:05:47.847 19:06:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:47.847 19:06:25 -- common/autotest_common.sh@10 -- # set +x 00:05:47.847 ************************************ 00:05:47.847 START TEST accel_dif_functional_tests 00:05:47.847 ************************************ 00:05:47.847 19:06:25 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.Woi19M 00:05:47.847 [2024-02-14 19:06:25.221857] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:47.847 [2024-02-14 19:06:25.222195] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:48.784 EAL: TSC is not safe to use in SMP mode 00:05:48.784 EAL: TSC is not invariant 00:05:48.784 [2024-02-14 19:06:25.965026] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.784 [2024-02-14 19:06:26.096066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.784 [2024-02-14 19:06:26.096159] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:48.784 [2024-02-14 19:06:26.095887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.784 [2024-02-14 19:06:26.096061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.784 19:06:26 -- accel/accel.sh@129 -- # build_accel_config 00:05:48.784 19:06:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.784 19:06:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.784 19:06:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.784 19:06:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.784 19:06:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.784 19:06:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.784 19:06:26 -- accel/accel.sh@42 -- # jq -r . 
00:05:48.784 00:05:48.784 00:05:48.784 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.784 http://cunit.sourceforge.net/ 00:05:48.784 00:05:48.784 00:05:48.784 Suite: accel_dif 00:05:48.784 Test: verify: DIF generated, GUARD check ...passed 00:05:48.784 Test: verify: DIF generated, APPTAG check ...passed 00:05:48.784 Test: verify: DIF generated, REFTAG check ...passed 00:05:48.784 Test: verify: DIF not generated, GUARD check ...passed 00:05:48.784 Test: verify: DIF not generated, APPTAG check ...passed 00:05:48.784 Test: verify: DIF not generated, REFTAG check ...passed 00:05:48.784 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:48.784 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:05:48.784 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:48.784 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:48.784 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:48.784 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-02-14 19:06:26.129515] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:48.784 [2024-02-14 19:06:26.129611] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:48.784 [2024-02-14 19:06:26.129658] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:48.784 [2024-02-14 19:06:26.129684] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:48.784 [2024-02-14 19:06:26.129704] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:48.784 [2024-02-14 19:06:26.129729] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:48.784 [2024-02-14 19:06:26.129768] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:48.784 passed 00:05:48.784 Test: generate copy: DIF generated, GUARD check ...passed 00:05:48.784 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:48.784 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:48.784 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:48.784 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:48.784 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:48.784 Test: generate copy: iovecs-len validate ...passed 00:05:48.784 Test: generate copy: buffer alignment validate ...[2024-02-14 19:06:26.129874] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:48.784 [2024-02-14 19:06:26.130074] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:48.784 passed 00:05:48.784 00:05:48.784 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.784 suites 1 1 n/a 0 0 00:05:48.784 tests 20 20 20 0 0 00:05:48.784 asserts 204 204 204 0 n/a 00:05:48.784 00:05:48.784 Elapsed time = 0.000 seconds 00:05:48.784 [2024-02-14 19:06:26.131164] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:49.042 00:05:49.042 real 0m1.155s 00:05:49.042 user 0m0.593s 00:05:49.042 sys 0m0.809s 00:05:49.042 ************************************ 00:05:49.042 END TEST accel_dif_functional_tests 00:05:49.042 ************************************ 00:05:49.042 19:06:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.042 19:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.042 00:05:49.042 real 1m30.782s 00:05:49.042 user 1m7.652s 00:05:49.042 sys 0m37.140s 00:05:49.042 19:06:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.042 19:06:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.042 19:06:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.042 19:06:26 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.042 ************************************ 00:05:49.042 END TEST accel 00:05:49.042 ************************************ 00:05:49.042 19:06:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.042 19:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.042 19:06:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.042 19:06:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.042 19:06:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.042 19:06:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.042 19:06:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.042 19:06:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.042 19:06:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.042 19:06:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.042 19:06:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.042 19:06:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.042 19:06:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.042 19:06:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.042 19:06:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.042 19:06:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.042 19:06:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.042 19:06:26 -- accel/accel.sh@42 -- # jq -r . 00:05:49.042 19:06:26 -- accel/accel.sh@42 -- # jq -r . 00:05:49.042 19:06:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.042 19:06:26 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.043 19:06:26 -- accel/accel.sh@42 -- # jq -r . 00:05:49.043 19:06:26 -- spdk/autotest.sh@190 -- # run_test accel_rpc /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:49.043 19:06:26 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:49.043 19:06:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:49.043 19:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.043 ************************************ 00:05:49.043 START TEST accel_rpc 00:05:49.043 ************************************ 00:05:49.043 19:06:26 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:49.301 * Looking for test storage... 
00:05:49.301 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:05:49.301 19:06:26 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:49.301 19:06:26 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=47932 00:05:49.301 19:06:26 -- accel/accel_rpc.sh@15 -- # waitforlisten 47932 00:05:49.301 19:06:26 -- common/autotest_common.sh@817 -- # '[' -z 47932 ']' 00:05:49.301 19:06:26 -- accel/accel_rpc.sh@13 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:49.301 19:06:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.301 19:06:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:49.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.301 19:06:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.301 19:06:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:49.301 19:06:26 -- common/autotest_common.sh@10 -- # set +x 00:05:49.301 [2024-02-14 19:06:26.626148] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:49.301 [2024-02-14 19:06:26.626344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:50.235 EAL: TSC is not safe to use in SMP mode 00:05:50.235 EAL: TSC is not invariant 00:05:50.235 [2024-02-14 19:06:27.388938] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.235 [2024-02-14 19:06:27.519596] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.235 [2024-02-14 19:06:27.519744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.235 19:06:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:50.235 19:06:27 -- common/autotest_common.sh@850 -- # return 0 00:05:50.235 19:06:27 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:50.235 19:06:27 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:50.235 19:06:27 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:50.235 19:06:27 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:50.235 19:06:27 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:50.235 19:06:27 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:50.235 19:06:27 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:50.235 19:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:50.235 ************************************ 00:05:50.235 START TEST accel_assign_opcode 00:05:50.235 ************************************ 00:05:50.235 19:06:27 -- common/autotest_common.sh@1102 -- # accel_assign_opcode_test_suite 00:05:50.235 19:06:27 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:50.235 19:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:50.235 19:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:50.235 [2024-02-14 19:06:27.580165] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:50.235 19:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:50.235 19:06:27 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:50.235 19:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:50.235 19:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:50.235 [2024-02-14 19:06:27.588156] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:50.235 19:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:50.235 19:06:27 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:50.235 19:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:50.235 19:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:50.235 19:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:50.235 19:06:27 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:50.235 19:06:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:50.235 19:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:50.235 19:06:27 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:50.235 19:06:27 -- accel/accel_rpc.sh@42 -- # grep software 00:05:50.494 19:06:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:50.494 software 00:05:50.494 00:05:50.494 real 0m0.083s 00:05:50.494 user 0m0.008s 00:05:50.494 sys 0m0.013s 00:05:50.494 19:06:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.494 ************************************ 00:05:50.494 END TEST accel_assign_opcode 00:05:50.494 ************************************ 00:05:50.494 19:06:27 -- common/autotest_common.sh@10 -- # set +x 00:05:50.494 19:06:27 -- accel/accel_rpc.sh@55 -- # killprocess 47932 00:05:50.494 19:06:27 -- common/autotest_common.sh@924 -- # '[' -z 47932 ']' 00:05:50.494 19:06:27 -- common/autotest_common.sh@928 -- # kill -0 47932 00:05:50.494 19:06:27 -- common/autotest_common.sh@929 -- # uname 00:05:50.494 19:06:27 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:05:50.494 19:06:27 -- common/autotest_common.sh@932 -- # tail -1 00:05:50.494 19:06:27 -- common/autotest_common.sh@932 -- # ps -c -o command 47932 00:05:50.494 19:06:27 -- common/autotest_common.sh@932 -- # process_name=spdk_tgt 00:05:50.494 19:06:27 -- common/autotest_common.sh@934 -- # '[' spdk_tgt = sudo ']' 00:05:50.494 killing process with pid 47932 00:05:50.494 19:06:27 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 47932' 00:05:50.494 19:06:27 -- common/autotest_common.sh@943 -- # kill 47932 00:05:50.494 19:06:27 -- common/autotest_common.sh@948 -- # wait 47932 00:05:50.753 00:05:50.753 real 0m1.624s 00:05:50.753 user 0m1.167s 00:05:50.753 sys 0m1.013s 00:05:50.753 ************************************ 00:05:50.753 END TEST accel_rpc 00:05:50.753 ************************************ 00:05:50.753 19:06:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.753 19:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:50.753 19:06:28 -- spdk/autotest.sh@191 -- # run_test app_cmdline /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:50.753 19:06:28 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:50.753 19:06:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:50.753 19:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:50.753 ************************************ 00:05:50.753 START TEST app_cmdline 00:05:50.753 ************************************ 00:05:50.753 19:06:28 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:51.012 * Looking for test storage... 
00:05:51.012 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:05:51.012 19:06:28 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:51.012 19:06:28 -- app/cmdline.sh@17 -- # spdk_tgt_pid=48005 00:05:51.012 19:06:28 -- app/cmdline.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:51.012 19:06:28 -- app/cmdline.sh@18 -- # waitforlisten 48005 00:05:51.012 19:06:28 -- common/autotest_common.sh@817 -- # '[' -z 48005 ']' 00:05:51.012 19:06:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.012 19:06:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:51.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.012 19:06:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.012 19:06:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:51.012 19:06:28 -- common/autotest_common.sh@10 -- # set +x 00:05:51.013 [2024-02-14 19:06:28.282633] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:51.013 [2024-02-14 19:06:28.282887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:51.949 EAL: TSC is not safe to use in SMP mode 00:05:51.949 EAL: TSC is not invariant 00:05:51.949 [2024-02-14 19:06:29.031511] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.949 [2024-02-14 19:06:29.143295] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.949 [2024-02-14 19:06:29.143426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.949 19:06:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:51.949 19:06:29 -- common/autotest_common.sh@850 -- # return 0 00:05:51.949 19:06:29 -- app/cmdline.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:52.209 { 00:05:52.209 "version": "SPDK v24.05-pre git sha1 aa824ae66", 00:05:52.209 "fields": { 00:05:52.209 "major": 24, 00:05:52.209 "minor": 5, 00:05:52.209 "patch": 0, 00:05:52.209 "suffix": "-pre", 00:05:52.209 "commit": "aa824ae66" 00:05:52.209 } 00:05:52.209 } 00:05:52.209 19:06:29 -- app/cmdline.sh@22 -- # expected_methods=() 00:05:52.209 19:06:29 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:52.209 19:06:29 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:52.209 19:06:29 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:52.209 19:06:29 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:52.209 19:06:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:52.209 19:06:29 -- common/autotest_common.sh@10 -- # set +x 00:05:52.209 19:06:29 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:52.209 19:06:29 -- app/cmdline.sh@26 -- # sort 00:05:52.209 19:06:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:52.209 19:06:29 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:52.209 19:06:29 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:52.209 19:06:29 -- app/cmdline.sh@30 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:52.209 19:06:29 -- common/autotest_common.sh@638 -- # local es=0 00:05:52.209 
19:06:29 -- common/autotest_common.sh@640 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:52.209 19:06:29 -- common/autotest_common.sh@626 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:52.209 19:06:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:52.209 19:06:29 -- common/autotest_common.sh@630 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:52.209 19:06:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:52.209 19:06:29 -- common/autotest_common.sh@632 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:52.209 19:06:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:52.209 19:06:29 -- common/autotest_common.sh@632 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:52.209 19:06:29 -- common/autotest_common.sh@632 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:52.209 19:06:29 -- common/autotest_common.sh@641 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:52.468 request: 00:05:52.468 { 00:05:52.468 "method": "env_dpdk_get_mem_stats", 00:05:52.468 "req_id": 1 00:05:52.468 } 00:05:52.468 Got JSON-RPC error response 00:05:52.468 response: 00:05:52.468 { 00:05:52.468 "code": -32601, 00:05:52.468 "message": "Method not found" 00:05:52.468 } 00:05:52.468 19:06:29 -- common/autotest_common.sh@641 -- # es=1 00:05:52.468 19:06:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:52.468 19:06:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:52.468 19:06:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:52.468 19:06:29 -- app/cmdline.sh@1 -- # killprocess 48005 00:05:52.468 19:06:29 -- common/autotest_common.sh@924 -- # '[' -z 48005 ']' 00:05:52.468 19:06:29 -- common/autotest_common.sh@928 -- # kill -0 48005 00:05:52.468 19:06:29 -- common/autotest_common.sh@929 -- # uname 00:05:52.468 19:06:29 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:05:52.468 19:06:29 -- common/autotest_common.sh@932 -- # ps -c -o command 48005 00:05:52.468 19:06:29 -- common/autotest_common.sh@932 -- # tail -1 00:05:52.469 19:06:29 -- common/autotest_common.sh@932 -- # process_name=spdk_tgt 00:05:52.469 killing process with pid 48005 00:05:52.469 19:06:29 -- common/autotest_common.sh@934 -- # '[' spdk_tgt = sudo ']' 00:05:52.469 19:06:29 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 48005' 00:05:52.469 19:06:29 -- common/autotest_common.sh@943 -- # kill 48005 00:05:52.469 19:06:29 -- common/autotest_common.sh@948 -- # wait 48005 00:05:52.753 00:05:52.753 real 0m2.023s 00:05:52.753 user 0m2.020s 00:05:52.753 sys 0m1.030s 00:05:52.753 ************************************ 00:05:52.753 END TEST app_cmdline 00:05:52.753 ************************************ 00:05:52.753 19:06:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.753 19:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:53.027 19:06:30 -- spdk/autotest.sh@192 -- # run_test version /usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:53.027 19:06:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:53.027 19:06:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:53.027 19:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:53.027 ************************************ 00:05:53.027 START TEST version 00:05:53.027 ************************************ 00:05:53.027 19:06:30 -- common/autotest_common.sh@1102 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:53.027 * Looking for test storage... 00:05:53.027 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:05:53.027 19:06:30 -- app/version.sh@17 -- # get_header_version major 00:05:53.027 19:06:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:53.027 19:06:30 -- app/version.sh@14 -- # cut -f2 00:05:53.027 19:06:30 -- app/version.sh@14 -- # tr -d '"' 00:05:53.027 19:06:30 -- app/version.sh@17 -- # major=24 00:05:53.027 19:06:30 -- app/version.sh@18 -- # get_header_version minor 00:05:53.027 19:06:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:53.027 19:06:30 -- app/version.sh@14 -- # cut -f2 00:05:53.027 19:06:30 -- app/version.sh@14 -- # tr -d '"' 00:05:53.027 19:06:30 -- app/version.sh@18 -- # minor=5 00:05:53.027 19:06:30 -- app/version.sh@19 -- # get_header_version patch 00:05:53.027 19:06:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:53.027 19:06:30 -- app/version.sh@14 -- # cut -f2 00:05:53.027 19:06:30 -- app/version.sh@14 -- # tr -d '"' 00:05:53.027 19:06:30 -- app/version.sh@19 -- # patch=0 00:05:53.027 19:06:30 -- app/version.sh@20 -- # get_header_version suffix 00:05:53.027 19:06:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:53.027 19:06:30 -- app/version.sh@14 -- # cut -f2 00:05:53.027 19:06:30 -- app/version.sh@14 -- # tr -d '"' 00:05:53.027 19:06:30 -- app/version.sh@20 -- # suffix=-pre 00:05:53.027 19:06:30 -- app/version.sh@22 -- # version=24.5 00:05:53.027 19:06:30 -- app/version.sh@25 -- # (( patch != 0 )) 00:05:53.027 19:06:30 -- app/version.sh@28 -- # version=24.5rc0 00:05:53.027 19:06:30 -- app/version.sh@30 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:05:53.027 19:06:30 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:53.027 19:06:30 -- app/version.sh@30 -- # py_version=24.5rc0 00:05:53.027 19:06:30 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:53.027 00:05:53.027 real 0m0.226s 00:05:53.027 user 0m0.175s 00:05:53.027 sys 0m0.153s 00:05:53.027 19:06:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.027 19:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:53.027 ************************************ 00:05:53.027 END TEST version 00:05:53.027 ************************************ 00:05:53.027 19:06:30 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:05:53.027 19:06:30 -- spdk/autotest.sh@195 -- # run_test blockdev_general /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:53.027 19:06:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:53.027 19:06:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:53.027 19:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:53.027 ************************************ 00:05:53.027 START TEST blockdev_general 00:05:53.027 ************************************ 00:05:53.027 19:06:30 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:53.286 * Looking for 
test storage... 00:05:53.286 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:05:53.286 19:06:30 -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:53.286 19:06:30 -- bdev/nbd_common.sh@6 -- # set -e 00:05:53.286 19:06:30 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:53.286 19:06:30 -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:53.286 19:06:30 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:53.286 19:06:30 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:53.286 19:06:30 -- bdev/blockdev.sh@18 -- # : 00:05:53.286 19:06:30 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:05:53.286 19:06:30 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:05:53.286 19:06:30 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:05:53.286 19:06:30 -- bdev/blockdev.sh@672 -- # uname -s 00:05:53.287 19:06:30 -- bdev/blockdev.sh@672 -- # '[' FreeBSD = Linux ']' 00:05:53.287 19:06:30 -- bdev/blockdev.sh@677 -- # PRE_RESERVED_MEM=2048 00:05:53.287 19:06:30 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:05:53.287 19:06:30 -- bdev/blockdev.sh@681 -- # crypto_device= 00:05:53.287 19:06:30 -- bdev/blockdev.sh@682 -- # dek= 00:05:53.287 19:06:30 -- bdev/blockdev.sh@683 -- # env_ctx= 00:05:53.287 19:06:30 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:05:53.287 19:06:30 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:05:53.287 19:06:30 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:05:53.287 19:06:30 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:05:53.287 19:06:30 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:05:53.287 19:06:30 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=48130 00:05:53.287 19:06:30 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:53.287 19:06:30 -- bdev/blockdev.sh@47 -- # waitforlisten 48130 00:05:53.287 19:06:30 -- bdev/blockdev.sh@44 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:05:53.287 19:06:30 -- common/autotest_common.sh@817 -- # '[' -z 48130 ']' 00:05:53.287 19:06:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.287 19:06:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:53.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.287 19:06:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.287 19:06:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:53.287 19:06:30 -- common/autotest_common.sh@10 -- # set +x 00:05:53.287 [2024-02-14 19:06:30.627170] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:53.287 [2024-02-14 19:06:30.627470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:54.223 EAL: TSC is not safe to use in SMP mode 00:05:54.223 EAL: TSC is not invariant 00:05:54.223 [2024-02-14 19:06:31.390115] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.223 [2024-02-14 19:06:31.518250] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.223 [2024-02-14 19:06:31.518385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.482 19:06:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:54.482 19:06:31 -- common/autotest_common.sh@850 -- # return 0 00:05:54.482 19:06:31 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:05:54.482 19:06:31 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:05:54.482 19:06:31 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:05:54.482 19:06:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.482 19:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.482 [2024-02-14 19:06:31.706075] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:54.482 [2024-02-14 19:06:31.706130] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:54.482 00:05:54.482 [2024-02-14 19:06:31.714052] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:54.482 [2024-02-14 19:06:31.714076] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:54.482 00:05:54.482 Malloc0 00:05:54.482 Malloc1 00:05:54.482 Malloc2 00:05:54.482 Malloc3 00:05:54.482 Malloc4 00:05:54.482 Malloc5 00:05:54.482 Malloc6 00:05:54.483 Malloc7 00:05:54.483 Malloc8 00:05:54.483 Malloc9 00:05:54.483 [2024-02-14 19:06:31.802056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:54.483 [2024-02-14 19:06:31.802091] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.483 [2024-02-14 19:06:31.802124] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82be95700 00:05:54.483 [2024-02-14 19:06:31.802131] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.483 [2024-02-14 19:06:31.802421] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.483 [2024-02-14 19:06:31.802441] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:54.483 TestPT 00:05:54.483 19:06:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.483 19:06:31 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:05:54.483 5000+0 records in 00:05:54.483 5000+0 records out 00:05:54.483 10240000 bytes transferred in 0.023228 secs (440840250 bytes/sec) 00:05:54.483 19:06:31 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:05:54.483 19:06:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.483 19:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.483 AIO0 00:05:54.483 19:06:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.483 19:06:31 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:05:54.483 19:06:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.483 19:06:31 -- common/autotest_common.sh@10 -- # set +x 
00:05:54.483 19:06:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.483 19:06:31 -- bdev/blockdev.sh@738 -- # cat 00:05:54.483 19:06:31 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:05:54.483 19:06:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.483 19:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.744 19:06:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.744 19:06:31 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:05:54.744 19:06:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.744 19:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.744 19:06:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.744 19:06:31 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:54.744 19:06:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.744 19:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.744 19:06:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.744 19:06:31 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:05:54.744 19:06:31 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:05:54.744 19:06:31 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:05:54.744 19:06:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:54.744 19:06:31 -- common/autotest_common.sh@10 -- # set +x 00:05:54.744 19:06:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:54.744 19:06:32 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:05:54.744 19:06:32 -- bdev/blockdev.sh@747 -- # jq -r .name 00:05:54.745 19:06:32 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2a0943a5-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2a0943a5-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f3539a71-288f-1553-a6db-2496fbb846b4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f3539a71-288f-1553-a6db-2496fbb846b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "e135c30d-9858-fe58-b3b1-ef670f966f44"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "e135c30d-9858-fe58-b3b1-ef670f966f44",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "4616951a-9741-905c-8b97-7f68f3c4fecf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4616951a-9741-905c-8b97-7f68f3c4fecf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "3fd87ae9-0721-5452-ad8c-d31707088bf1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3fd87ae9-0721-5452-ad8c-d31707088bf1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "3ae1a254-0e8f-7755-bbcf-3a5f2d3a6d34"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3ae1a254-0e8f-7755-bbcf-3a5f2d3a6d34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "19d8abea-4b82-0b54-ad63-9d567afd8802"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "19d8abea-4b82-0b54-ad63-9d567afd8802",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "4b2834fb-7115-f052-aa26-24724906f6bf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4b2834fb-7115-f052-aa26-24724906f6bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "736d1923-46df-5856-8cb8-51dea0c4c5ff"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "736d1923-46df-5856-8cb8-51dea0c4c5ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "10e79759-a765-5350-b0b3-1294a0a48548"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "10e79759-a765-5350-b0b3-1294a0a48548",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a648cb98-ffe1-d157-a760-2f20218ef32c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a648cb98-ffe1-d157-a760-2f20218ef32c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "8f40a498-7932-2c57-bad4-7b0ca0321750"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8f40a498-7932-2c57-bad4-7b0ca0321750",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "2a16ba9a-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2a16ba9a-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2a16ba9a-cb6c-11ee-af6b-4feeebbbadda",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "2a0e24d4-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "2a0f5d4e-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "2a17e977-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2a17e977-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2a17e977-cb6c-11ee-af6b-4feeebbbadda",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "2a1095c7-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "2a11ce4d-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "2a19219b-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2a19219b-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2a19219b-cb6c-11ee-af6b-4feeebbbadda",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2a1306cb-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "2a143f4c-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "2a21124a-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "2a21124a-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:54.745 19:06:32 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:05:54.745 19:06:32 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:05:54.745 19:06:32 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:05:54.745 19:06:32 -- bdev/blockdev.sh@752 -- # killprocess 48130 00:05:54.745 19:06:32 -- common/autotest_common.sh@924 -- # '[' -z 48130 ']' 00:05:54.745 19:06:32 -- common/autotest_common.sh@928 -- # kill -0 48130 00:05:54.745 19:06:32 -- common/autotest_common.sh@929 -- # uname 00:05:54.745 19:06:32 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:05:54.745 19:06:32 -- common/autotest_common.sh@932 -- # ps -c -o command 48130 00:05:54.745 19:06:32 -- common/autotest_common.sh@932 -- # tail -1 00:05:54.745 19:06:32 -- common/autotest_common.sh@932 -- # process_name=spdk_tgt 00:05:54.745 killing process with pid 48130 00:05:54.745 19:06:32 -- common/autotest_common.sh@934 -- # '[' spdk_tgt = sudo ']' 00:05:54.745 19:06:32 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 48130' 00:05:54.745 19:06:32 -- common/autotest_common.sh@943 -- # kill 48130 00:05:54.745 19:06:32 -- common/autotest_common.sh@948 -- # wait 48130 00:05:55.313 19:06:32 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:55.313 19:06:32 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 
'' 00:05:55.313 19:06:32 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:05:55.313 19:06:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:55.313 19:06:32 -- common/autotest_common.sh@10 -- # set +x 00:05:55.313 ************************************ 00:05:55.313 START TEST bdev_hello_world 00:05:55.313 ************************************ 00:05:55.313 19:06:32 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:55.313 [2024-02-14 19:06:32.592826] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:05:55.313 [2024-02-14 19:06:32.593005] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:56.251 EAL: TSC is not safe to use in SMP mode 00:05:56.251 EAL: TSC is not invariant 00:05:56.251 [2024-02-14 19:06:33.321152] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.251 [2024-02-14 19:06:33.428510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.251 [2024-02-14 19:06:33.428575] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:56.251 [2024-02-14 19:06:33.487594] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:56.251 [2024-02-14 19:06:33.487625] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:56.251 [2024-02-14 19:06:33.495578] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:56.251 [2024-02-14 19:06:33.495598] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:56.251 [2024-02-14 19:06:33.503591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:56.251 [2024-02-14 19:06:33.503611] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:56.251 [2024-02-14 19:06:33.503618] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:56.251 [2024-02-14 19:06:33.551594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:56.251 [2024-02-14 19:06:33.551626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:56.251 [2024-02-14 19:06:33.551639] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b93a800 00:05:56.251 [2024-02-14 19:06:33.551646] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:56.251 [2024-02-14 19:06:33.551934] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:56.251 [2024-02-14 19:06:33.551946] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:56.251 [2024-02-14 19:06:33.653422] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:56.251 [2024-02-14 19:06:33.653449] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:05:56.251 [2024-02-14 19:06:33.653476] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:56.251 [2024-02-14 19:06:33.653488] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:56.251 [2024-02-14 19:06:33.653500] hello_bdev.c: 
117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:56.251 [2024-02-14 19:06:33.653507] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:56.251 [2024-02-14 19:06:33.653516] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:05:56.251 00:05:56.251 [2024-02-14 19:06:33.653523] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:56.251 [2024-02-14 19:06:33.653532] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:56.819 00:05:56.819 real 0m1.388s 00:05:56.819 user 0m0.600s 00:05:56.819 sys 0m0.787s 00:05:56.819 19:06:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.819 19:06:33 -- common/autotest_common.sh@10 -- # set +x 00:05:56.819 ************************************ 00:05:56.819 END TEST bdev_hello_world 00:05:56.819 ************************************ 00:05:56.819 19:06:34 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:05:56.819 19:06:34 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:05:56.819 19:06:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:56.819 19:06:34 -- common/autotest_common.sh@10 -- # set +x 00:05:56.819 ************************************ 00:05:56.819 START TEST bdev_bounds 00:05:56.819 ************************************ 00:05:56.819 19:06:34 -- common/autotest_common.sh@1102 -- # bdev_bounds '' 00:05:56.819 19:06:34 -- bdev/blockdev.sh@288 -- # bdevio_pid=48170 00:05:56.819 19:06:34 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.820 Process bdevio pid: 48170 00:05:56.820 19:06:34 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 48170' 00:05:56.820 19:06:34 -- bdev/blockdev.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:56.820 19:06:34 -- bdev/blockdev.sh@291 -- # waitforlisten 48170 00:05:56.820 19:06:34 -- common/autotest_common.sh@817 -- # '[' -z 48170 ']' 00:05:56.820 19:06:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.820 19:06:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:56.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.820 19:06:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.820 19:06:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:56.820 19:06:34 -- common/autotest_common.sh@10 -- # set +x 00:05:56.820 [2024-02-14 19:06:34.035301] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:05:56.820 [2024-02-14 19:06:34.035475] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:57.388 EAL: TSC is not safe to use in SMP mode 00:05:57.388 EAL: TSC is not invariant 00:05:57.388 [2024-02-14 19:06:34.757831] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.645 [2024-02-14 19:06:34.869001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.645 [2024-02-14 19:06:34.868856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.645 [2024-02-14 19:06:34.869085] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:05:57.645 [2024-02-14 19:06:34.869002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.645 [2024-02-14 19:06:34.928781] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:57.645 [2024-02-14 19:06:34.928838] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:57.645 [2024-02-14 19:06:34.936766] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:57.645 [2024-02-14 19:06:34.936798] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:57.645 [2024-02-14 19:06:34.944781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:57.645 [2024-02-14 19:06:34.944809] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:57.645 [2024-02-14 19:06:34.944821] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:57.645 [2024-02-14 19:06:34.992781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:57.645 [2024-02-14 19:06:34.992842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.645 [2024-02-14 19:06:34.992859] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82ea0f800 00:05:57.645 [2024-02-14 19:06:34.992868] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.645 [2024-02-14 19:06:34.993285] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.646 [2024-02-14 19:06:34.993326] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:58.580 19:06:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.580 19:06:35 -- common/autotest_common.sh@850 -- # return 0 00:05:58.580 19:06:35 -- bdev/blockdev.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:58.580 I/O targets: 00:05:58.580 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:05:58.580 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:05:58.580 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:05:58.580 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:05:58.580 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:05:58.580 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:05:58.580 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:05:58.580 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:05:58.580 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:05:58.580 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:05:58.580 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:05:58.580 TestPT: 65536 blocks of 512 
bytes (32 MiB) 00:05:58.580 raid0: 131072 blocks of 512 bytes (64 MiB) 00:05:58.580 concat0: 131072 blocks of 512 bytes (64 MiB) 00:05:58.580 raid1: 65536 blocks of 512 bytes (32 MiB) 00:05:58.580 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:05:58.580 00:05:58.580 00:05:58.580 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.580 http://cunit.sourceforge.net/ 00:05:58.580 00:05:58.580 00:05:58.580 Suite: bdevio tests on: AIO0 00:05:58.580 Test: blockdev write read block ...passed 00:05:58.580 Test: blockdev write zeroes read block ...passed 00:05:58.580 Test: blockdev write zeroes read no split ...passed 00:05:58.580 Test: blockdev write zeroes read split ...passed 00:05:58.580 Test: blockdev write zeroes read split partial ...passed 00:05:58.580 Test: blockdev reset ...passed 00:05:58.580 Test: blockdev write read 8 blocks ...passed 00:05:58.580 Test: blockdev write read size > 128k ...passed 00:05:58.580 Test: blockdev write read invalid size ...passed 00:05:58.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.580 Test: blockdev write read max offset ...passed 00:05:58.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.580 Test: blockdev writev readv 8 blocks ...passed 00:05:58.580 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.580 Test: blockdev writev readv block ...passed 00:05:58.580 Test: blockdev writev readv size > 128k ...passed 00:05:58.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.580 Test: blockdev comparev and writev ...passed 00:05:58.580 Test: blockdev nvme passthru rw ...passed 00:05:58.580 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.580 Test: blockdev nvme admin passthru ...passed 00:05:58.580 Test: blockdev copy ...passed 00:05:58.580 Suite: bdevio tests on: raid1 00:05:58.580 Test: blockdev write read block ...passed 00:05:58.580 Test: blockdev write zeroes read block ...passed 00:05:58.580 Test: blockdev write zeroes read no split ...passed 00:05:58.580 Test: blockdev write zeroes read split ...passed 00:05:58.580 Test: blockdev write zeroes read split partial ...passed 00:05:58.580 Test: blockdev reset ...passed 00:05:58.580 Test: blockdev write read 8 blocks ...passed 00:05:58.580 Test: blockdev write read size > 128k ...passed 00:05:58.580 Test: blockdev write read invalid size ...passed 00:05:58.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.580 Test: blockdev write read max offset ...passed 00:05:58.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.580 Test: blockdev writev readv 8 blocks ...passed 00:05:58.580 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.580 Test: blockdev writev readv block ...passed 00:05:58.580 Test: blockdev writev readv size > 128k ...passed 00:05:58.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.580 Test: blockdev comparev and writev ...passed 00:05:58.580 Test: blockdev nvme passthru rw ...passed 00:05:58.580 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.580 Test: blockdev nvme admin passthru ...passed 00:05:58.580 Test: blockdev copy ...passed 00:05:58.580 Suite: bdevio tests on: concat0 00:05:58.580 Test: blockdev write read block ...passed 00:05:58.580 Test: blockdev write zeroes read block 
...passed 00:05:58.580 Test: blockdev write zeroes read no split ...passed 00:05:58.580 Test: blockdev write zeroes read split ...passed 00:05:58.580 Test: blockdev write zeroes read split partial ...passed 00:05:58.580 Test: blockdev reset ...passed 00:05:58.580 Test: blockdev write read 8 blocks ...passed 00:05:58.580 Test: blockdev write read size > 128k ...passed 00:05:58.580 Test: blockdev write read invalid size ...passed 00:05:58.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.580 Test: blockdev write read max offset ...passed 00:05:58.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.580 Test: blockdev writev readv 8 blocks ...passed 00:05:58.580 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.580 Test: blockdev writev readv block ...passed 00:05:58.580 Test: blockdev writev readv size > 128k ...passed 00:05:58.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.580 Test: blockdev comparev and writev ...passed 00:05:58.580 Test: blockdev nvme passthru rw ...passed 00:05:58.580 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.580 Test: blockdev nvme admin passthru ...passed 00:05:58.580 Test: blockdev copy ...passed 00:05:58.580 Suite: bdevio tests on: raid0 00:05:58.580 Test: blockdev write read block ...passed 00:05:58.580 Test: blockdev write zeroes read block ...passed 00:05:58.580 Test: blockdev write zeroes read no split ...passed 00:05:58.580 Test: blockdev write zeroes read split ...passed 00:05:58.580 Test: blockdev write zeroes read split partial ...passed 00:05:58.580 Test: blockdev reset ...passed 00:05:58.580 Test: blockdev write read 8 blocks ...passed 00:05:58.580 Test: blockdev write read size > 128k ...passed 00:05:58.580 Test: blockdev write read invalid size ...passed 00:05:58.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.580 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.580 Test: blockdev write read max offset ...passed 00:05:58.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.580 Test: blockdev writev readv 8 blocks ...passed 00:05:58.580 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.580 Test: blockdev writev readv block ...passed 00:05:58.580 Test: blockdev writev readv size > 128k ...passed 00:05:58.580 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.580 Test: blockdev comparev and writev ...passed 00:05:58.580 Test: blockdev nvme passthru rw ...passed 00:05:58.580 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.580 Test: blockdev nvme admin passthru ...passed 00:05:58.580 Test: blockdev copy ...passed 00:05:58.580 Suite: bdevio tests on: TestPT 00:05:58.580 Test: blockdev write read block ...passed 00:05:58.580 Test: blockdev write zeroes read block ...passed 00:05:58.580 Test: blockdev write zeroes read no split ...passed 00:05:58.580 Test: blockdev write zeroes read split ...passed 00:05:58.580 Test: blockdev write zeroes read split partial ...passed 00:05:58.580 Test: blockdev reset ...passed 00:05:58.580 Test: blockdev write read 8 blocks ...passed 00:05:58.580 Test: blockdev write read size > 128k ...passed 00:05:58.580 Test: blockdev write read invalid size ...passed 00:05:58.580 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.580 Test: blockdev write read 
offset + nbytes > size of blockdev ...passed 00:05:58.580 Test: blockdev write read max offset ...passed 00:05:58.580 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.580 Test: blockdev writev readv 8 blocks ...passed 00:05:58.580 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.581 Test: blockdev writev readv block ...passed 00:05:58.581 Test: blockdev writev readv size > 128k ...passed 00:05:58.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.581 Test: blockdev comparev and writev ...passed 00:05:58.581 Test: blockdev nvme passthru rw ...passed 00:05:58.581 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.581 Test: blockdev nvme admin passthru ...passed 00:05:58.581 Test: blockdev copy ...passed 00:05:58.581 Suite: bdevio tests on: Malloc2p7 00:05:58.581 Test: blockdev write read block ...passed 00:05:58.581 Test: blockdev write zeroes read block ...passed 00:05:58.581 Test: blockdev write zeroes read no split ...passed 00:05:58.581 Test: blockdev write zeroes read split ...passed 00:05:58.581 Test: blockdev write zeroes read split partial ...passed 00:05:58.581 Test: blockdev reset ...passed 00:05:58.581 Test: blockdev write read 8 blocks ...passed 00:05:58.581 Test: blockdev write read size > 128k ...passed 00:05:58.581 Test: blockdev write read invalid size ...passed 00:05:58.581 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.581 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.581 Test: blockdev write read max offset ...passed 00:05:58.581 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.581 Test: blockdev writev readv 8 blocks ...passed 00:05:58.581 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.581 Test: blockdev writev readv block ...passed 00:05:58.581 Test: blockdev writev readv size > 128k ...passed 00:05:58.581 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.581 Test: blockdev comparev and writev ...passed 00:05:58.581 Test: blockdev nvme passthru rw ...passed 00:05:58.581 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.581 Test: blockdev nvme admin passthru ...passed 00:05:58.581 Test: blockdev copy ...passed 00:05:58.581 Suite: bdevio tests on: Malloc2p6 00:05:58.581 Test: blockdev write read block ...passed 00:05:58.581 Test: blockdev write zeroes read block ...passed 00:05:58.581 Test: blockdev write zeroes read no split ...passed 00:05:58.840 Test: blockdev write zeroes read split ...passed 00:05:58.840 Test: blockdev write zeroes read split partial ...passed 00:05:58.840 Test: blockdev reset ...passed 00:05:58.840 Test: blockdev write read 8 blocks ...passed 00:05:58.840 Test: blockdev write read size > 128k ...passed 00:05:58.840 Test: blockdev write read invalid size ...passed 00:05:58.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.840 Test: blockdev write read max offset ...passed 00:05:58.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.840 Test: blockdev writev readv 8 blocks ...passed 00:05:58.840 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.840 Test: blockdev writev readv block ...passed 00:05:58.840 Test: blockdev writev readv size > 128k ...passed 00:05:58.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.840 Test: blockdev comparev and 
writev ...passed 00:05:58.840 Test: blockdev nvme passthru rw ...passed 00:05:58.840 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.840 Test: blockdev nvme admin passthru ...passed 00:05:58.840 Test: blockdev copy ...passed 00:05:58.840 Suite: bdevio tests on: Malloc2p5 00:05:58.840 Test: blockdev write read block ...passed 00:05:58.840 Test: blockdev write zeroes read block ...passed 00:05:58.840 Test: blockdev write zeroes read no split ...passed 00:05:58.840 Test: blockdev write zeroes read split ...passed 00:05:58.840 Test: blockdev write zeroes read split partial ...passed 00:05:58.840 Test: blockdev reset ...passed 00:05:58.840 Test: blockdev write read 8 blocks ...passed 00:05:58.840 Test: blockdev write read size > 128k ...passed 00:05:58.840 Test: blockdev write read invalid size ...passed 00:05:58.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.840 Test: blockdev write read max offset ...passed 00:05:58.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.840 Test: blockdev writev readv 8 blocks ...passed 00:05:58.840 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.840 Test: blockdev writev readv block ...passed 00:05:58.840 Test: blockdev writev readv size > 128k ...passed 00:05:58.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.840 Test: blockdev comparev and writev ...passed 00:05:58.840 Test: blockdev nvme passthru rw ...passed 00:05:58.840 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.840 Test: blockdev nvme admin passthru ...passed 00:05:58.840 Test: blockdev copy ...passed 00:05:58.840 Suite: bdevio tests on: Malloc2p4 00:05:58.840 Test: blockdev write read block ...passed 00:05:58.840 Test: blockdev write zeroes read block ...passed 00:05:58.840 Test: blockdev write zeroes read no split ...passed 00:05:58.840 Test: blockdev write zeroes read split ...passed 00:05:58.840 Test: blockdev write zeroes read split partial ...passed 00:05:58.840 Test: blockdev reset ...passed 00:05:58.840 Test: blockdev write read 8 blocks ...passed 00:05:58.840 Test: blockdev write read size > 128k ...passed 00:05:58.840 Test: blockdev write read invalid size ...passed 00:05:58.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.840 Test: blockdev write read max offset ...passed 00:05:58.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.840 Test: blockdev writev readv 8 blocks ...passed 00:05:58.840 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.840 Test: blockdev writev readv block ...passed 00:05:58.840 Test: blockdev writev readv size > 128k ...passed 00:05:58.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.840 Test: blockdev comparev and writev ...passed 00:05:58.840 Test: blockdev nvme passthru rw ...passed 00:05:58.840 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.840 Test: blockdev nvme admin passthru ...passed 00:05:58.840 Test: blockdev copy ...passed 00:05:58.840 Suite: bdevio tests on: Malloc2p3 00:05:58.840 Test: blockdev write read block ...passed 00:05:58.840 Test: blockdev write zeroes read block ...passed 00:05:58.840 Test: blockdev write zeroes read no split ...passed 00:05:58.840 Test: blockdev write zeroes read split ...passed 00:05:58.840 Test: 
blockdev write zeroes read split partial ...passed 00:05:58.840 Test: blockdev reset ...passed 00:05:58.840 Test: blockdev write read 8 blocks ...passed 00:05:58.840 Test: blockdev write read size > 128k ...passed 00:05:58.840 Test: blockdev write read invalid size ...passed 00:05:58.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.840 Test: blockdev write read max offset ...passed 00:05:58.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.840 Test: blockdev writev readv 8 blocks ...passed 00:05:58.840 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.840 Test: blockdev writev readv block ...passed 00:05:58.840 Test: blockdev writev readv size > 128k ...passed 00:05:58.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.840 Test: blockdev comparev and writev ...passed 00:05:58.840 Test: blockdev nvme passthru rw ...passed 00:05:58.840 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.840 Test: blockdev nvme admin passthru ...passed 00:05:58.840 Test: blockdev copy ...passed 00:05:58.840 Suite: bdevio tests on: Malloc2p2 00:05:58.840 Test: blockdev write read block ...passed 00:05:58.840 Test: blockdev write zeroes read block ...passed 00:05:58.840 Test: blockdev write zeroes read no split ...passed 00:05:58.840 Test: blockdev write zeroes read split ...passed 00:05:58.840 Test: blockdev write zeroes read split partial ...passed 00:05:58.840 Test: blockdev reset ...passed 00:05:58.840 Test: blockdev write read 8 blocks ...passed 00:05:58.840 Test: blockdev write read size > 128k ...passed 00:05:58.840 Test: blockdev write read invalid size ...passed 00:05:58.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.840 Test: blockdev write read max offset ...passed 00:05:58.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.840 Test: blockdev writev readv 8 blocks ...passed 00:05:58.840 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.840 Test: blockdev writev readv block ...passed 00:05:58.840 Test: blockdev writev readv size > 128k ...passed 00:05:58.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.840 Test: blockdev comparev and writev ...passed 00:05:58.840 Test: blockdev nvme passthru rw ...passed 00:05:58.840 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.840 Test: blockdev nvme admin passthru ...passed 00:05:58.840 Test: blockdev copy ...passed 00:05:58.840 Suite: bdevio tests on: Malloc2p1 00:05:58.840 Test: blockdev write read block ...passed 00:05:58.840 Test: blockdev write zeroes read block ...passed 00:05:58.840 Test: blockdev write zeroes read no split ...passed 00:05:58.840 Test: blockdev write zeroes read split ...passed 00:05:58.840 Test: blockdev write zeroes read split partial ...passed 00:05:58.840 Test: blockdev reset ...passed 00:05:58.840 Test: blockdev write read 8 blocks ...passed 00:05:58.840 Test: blockdev write read size > 128k ...passed 00:05:58.840 Test: blockdev write read invalid size ...passed 00:05:58.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.840 Test: blockdev write read max offset ...passed 00:05:58.840 Test: blockdev write read 2 
blocks on overlapped address offset ...passed 00:05:58.840 Test: blockdev writev readv 8 blocks ...passed 00:05:58.840 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.840 Test: blockdev writev readv block ...passed 00:05:58.840 Test: blockdev writev readv size > 128k ...passed 00:05:58.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.840 Test: blockdev comparev and writev ...passed 00:05:58.840 Test: blockdev nvme passthru rw ...passed 00:05:58.840 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.840 Test: blockdev nvme admin passthru ...passed 00:05:58.840 Test: blockdev copy ...passed 00:05:58.840 Suite: bdevio tests on: Malloc2p0 00:05:58.840 Test: blockdev write read block ...passed 00:05:58.840 Test: blockdev write zeroes read block ...passed 00:05:58.840 Test: blockdev write zeroes read no split ...passed 00:05:58.840 Test: blockdev write zeroes read split ...passed 00:05:58.840 Test: blockdev write zeroes read split partial ...passed 00:05:58.840 Test: blockdev reset ...passed 00:05:58.840 Test: blockdev write read 8 blocks ...passed 00:05:58.840 Test: blockdev write read size > 128k ...passed 00:05:58.840 Test: blockdev write read invalid size ...passed 00:05:58.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.840 Test: blockdev write read max offset ...passed 00:05:58.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.840 Test: blockdev writev readv 8 blocks ...passed 00:05:58.840 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.840 Test: blockdev writev readv block ...passed 00:05:58.840 Test: blockdev writev readv size > 128k ...passed 00:05:58.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.840 Test: blockdev comparev and writev ...passed 00:05:58.840 Test: blockdev nvme passthru rw ...passed 00:05:58.841 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.841 Test: blockdev nvme admin passthru ...passed 00:05:58.841 Test: blockdev copy ...passed 00:05:58.841 Suite: bdevio tests on: Malloc1p1 00:05:58.841 Test: blockdev write read block ...passed 00:05:58.841 Test: blockdev write zeroes read block ...passed 00:05:58.841 Test: blockdev write zeroes read no split ...passed 00:05:58.841 Test: blockdev write zeroes read split ...passed 00:05:58.841 Test: blockdev write zeroes read split partial ...passed 00:05:58.841 Test: blockdev reset ...passed 00:05:58.841 Test: blockdev write read 8 blocks ...passed 00:05:58.841 Test: blockdev write read size > 128k ...passed 00:05:58.841 Test: blockdev write read invalid size ...passed 00:05:58.841 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.841 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.841 Test: blockdev write read max offset ...passed 00:05:58.841 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.841 Test: blockdev writev readv 8 blocks ...passed 00:05:58.841 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.841 Test: blockdev writev readv block ...passed 00:05:58.841 Test: blockdev writev readv size > 128k ...passed 00:05:58.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.841 Test: blockdev comparev and writev ...passed 00:05:58.841 Test: blockdev nvme passthru rw ...passed 00:05:58.841 Test: blockdev nvme passthru vendor specific ...passed 
00:05:58.841 Test: blockdev nvme admin passthru ...passed 00:05:58.841 Test: blockdev copy ...passed 00:05:58.841 Suite: bdevio tests on: Malloc1p0 00:05:58.841 Test: blockdev write read block ...passed 00:05:58.841 Test: blockdev write zeroes read block ...passed 00:05:58.841 Test: blockdev write zeroes read no split ...passed 00:05:58.841 Test: blockdev write zeroes read split ...passed 00:05:58.841 Test: blockdev write zeroes read split partial ...passed 00:05:58.841 Test: blockdev reset ...passed 00:05:58.841 Test: blockdev write read 8 blocks ...passed 00:05:58.841 Test: blockdev write read size > 128k ...passed 00:05:58.841 Test: blockdev write read invalid size ...passed 00:05:58.841 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.841 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.841 Test: blockdev write read max offset ...passed 00:05:58.841 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.841 Test: blockdev writev readv 8 blocks ...passed 00:05:58.841 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.841 Test: blockdev writev readv block ...passed 00:05:58.841 Test: blockdev writev readv size > 128k ...passed 00:05:58.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.841 Test: blockdev comparev and writev ...passed 00:05:58.841 Test: blockdev nvme passthru rw ...passed 00:05:58.841 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.841 Test: blockdev nvme admin passthru ...passed 00:05:58.841 Test: blockdev copy ...passed 00:05:58.841 Suite: bdevio tests on: Malloc0 00:05:58.841 Test: blockdev write read block ...passed 00:05:58.841 Test: blockdev write zeroes read block ...passed 00:05:58.841 Test: blockdev write zeroes read no split ...passed 00:05:58.841 Test: blockdev write zeroes read split ...passed 00:05:58.841 Test: blockdev write zeroes read split partial ...passed 00:05:58.841 Test: blockdev reset ...passed 00:05:58.841 Test: blockdev write read 8 blocks ...passed 00:05:58.841 Test: blockdev write read size > 128k ...passed 00:05:58.841 Test: blockdev write read invalid size ...passed 00:05:58.841 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:58.841 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:58.841 Test: blockdev write read max offset ...passed 00:05:58.841 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:58.841 Test: blockdev writev readv 8 blocks ...passed 00:05:58.841 Test: blockdev writev readv 30 x 1block ...passed 00:05:58.841 Test: blockdev writev readv block ...passed 00:05:58.841 Test: blockdev writev readv size > 128k ...passed 00:05:58.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:58.841 Test: blockdev comparev and writev ...passed 00:05:58.841 Test: blockdev nvme passthru rw ...passed 00:05:58.841 Test: blockdev nvme passthru vendor specific ...passed 00:05:58.841 Test: blockdev nvme admin passthru ...passed 00:05:58.841 Test: blockdev copy ...passed 00:05:58.841 00:05:58.841 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.841 suites 16 16 n/a 0 0 00:05:58.841 tests 368 368 368 0 0 00:05:58.841 asserts 2224 2224 2224 0 n/a 00:05:58.841 00:05:58.841 Elapsed time = 0.492 seconds 00:05:58.841 0 00:05:58.841 19:06:36 -- bdev/blockdev.sh@293 -- # killprocess 48170 00:05:58.841 19:06:36 -- common/autotest_common.sh@924 -- # '[' -z 48170 ']' 00:05:58.841 19:06:36 -- 
common/autotest_common.sh@928 -- # kill -0 48170 00:05:58.841 19:06:36 -- common/autotest_common.sh@929 -- # uname 00:05:58.841 19:06:36 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:05:58.841 19:06:36 -- common/autotest_common.sh@932 -- # ps -c -o command 48170 00:05:58.841 19:06:36 -- common/autotest_common.sh@932 -- # tail -1 00:05:58.841 19:06:36 -- common/autotest_common.sh@932 -- # process_name=bdevio 00:05:58.841 19:06:36 -- common/autotest_common.sh@934 -- # '[' bdevio = sudo ']' 00:05:58.841 killing process with pid 48170 00:05:58.841 19:06:36 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 48170' 00:05:58.841 19:06:36 -- common/autotest_common.sh@943 -- # kill 48170 00:05:58.841 [2024-02-14 19:06:36.083570] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:05:58.841 19:06:36 -- common/autotest_common.sh@948 -- # wait 48170 00:05:59.100 19:06:36 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:05:59.100 00:05:59.100 real 0m2.385s 00:05:59.100 user 0m4.796s 00:05:59.100 sys 0m0.971s 00:05:59.100 19:06:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.100 19:06:36 -- common/autotest_common.sh@10 -- # set +x 00:05:59.100 ************************************ 00:05:59.100 END TEST bdev_bounds 00:05:59.100 ************************************ 00:05:59.100 19:06:36 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:59.100 19:06:36 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:05:59.100 19:06:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:59.100 19:06:36 -- common/autotest_common.sh@10 -- # set +x 00:05:59.100 ************************************ 00:05:59.100 START TEST bdev_nbd 00:05:59.100 ************************************ 00:05:59.100 19:06:36 -- common/autotest_common.sh@1102 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:59.100 19:06:36 -- bdev/blockdev.sh@298 -- # uname -s 00:05:59.100 19:06:36 -- bdev/blockdev.sh@298 -- # [[ FreeBSD == Linux ]] 00:05:59.100 19:06:36 -- bdev/blockdev.sh@298 -- # return 0 00:05:59.100 00:05:59.100 real 0m0.004s 00:05:59.100 user 0m0.002s 00:05:59.100 sys 0m0.002s 00:05:59.100 19:06:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.100 19:06:36 -- common/autotest_common.sh@10 -- # set +x 00:05:59.100 ************************************ 00:05:59.100 END TEST bdev_nbd 00:05:59.100 ************************************ 00:05:59.100 19:06:36 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:05:59.100 19:06:36 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:05:59.100 19:06:36 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:05:59.100 19:06:36 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:05:59.100 19:06:36 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:05:59.100 19:06:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:59.100 19:06:36 -- common/autotest_common.sh@10 -- # set +x 00:05:59.100 ************************************ 00:05:59.100 START TEST bdev_fio 
00:05:59.100 ************************************ 00:05:59.100 19:06:36 -- common/autotest_common.sh@1102 -- # fio_test_suite '' 00:05:59.100 19:06:36 -- bdev/blockdev.sh@329 -- # local env_context 00:05:59.100 19:06:36 -- bdev/blockdev.sh@333 -- # pushd /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:05:59.100 /usr/home/vagrant/spdk_repo/spdk/test/bdev /usr/home/vagrant/spdk_repo/spdk 00:05:59.100 19:06:36 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:05:59.100 19:06:36 -- bdev/blockdev.sh@337 -- # echo '' 00:05:59.100 19:06:36 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:05:59.100 19:06:36 -- bdev/blockdev.sh@337 -- # env_context= 00:05:59.100 19:06:36 -- bdev/blockdev.sh@338 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:05:59.100 19:06:36 -- common/autotest_common.sh@1257 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:59.100 19:06:36 -- common/autotest_common.sh@1258 -- # local workload=verify 00:05:59.100 19:06:36 -- common/autotest_common.sh@1259 -- # local bdev_type=AIO 00:05:59.100 19:06:36 -- common/autotest_common.sh@1260 -- # local env_context= 00:05:59.100 19:06:36 -- common/autotest_common.sh@1261 -- # local fio_dir=/usr/src/fio 00:05:59.100 19:06:36 -- common/autotest_common.sh@1263 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:59.101 19:06:36 -- common/autotest_common.sh@1268 -- # '[' -z verify ']' 00:05:59.101 19:06:36 -- common/autotest_common.sh@1272 -- # '[' -n '' ']' 00:05:59.101 19:06:36 -- common/autotest_common.sh@1276 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:59.101 19:06:36 -- common/autotest_common.sh@1278 -- # cat 00:05:59.101 19:06:36 -- common/autotest_common.sh@1290 -- # '[' verify == verify ']' 00:05:59.101 19:06:36 -- common/autotest_common.sh@1291 -- # cat 00:05:59.101 19:06:36 -- common/autotest_common.sh@1300 -- # '[' AIO == AIO ']' 00:05:59.101 19:06:36 -- common/autotest_common.sh@1301 -- # /usr/src/fio/fio --version 00:06:00.037 19:06:37 -- common/autotest_common.sh@1301 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:06:00.037 19:06:37 -- common/autotest_common.sh@1302 -- # echo serialize_overlap=1 00:06:00.037 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.037 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:06:00.037 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:06:00.037 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.037 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:06:00.037 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:06:00.037 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.037 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:06:00.037 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:06:00.037 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.037 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:06:00.037 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:06:00.037 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.037 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:06:00.037 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:06:00.037 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.037 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:06:00.037 
19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:06:00.037 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.037 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:06:00.037 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:06:00.037 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.037 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:06:00.037 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:06:00.037 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.037 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:06:00.037 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:06:00.037 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.037 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:06:00.037 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:06:00.037 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.037 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:06:00.038 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:06:00.038 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.038 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:06:00.038 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:06:00.038 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.038 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:06:00.038 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:06:00.038 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.038 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:06:00.038 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:06:00.038 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.038 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:06:00.038 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:06:00.038 19:06:37 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:06:00.038 19:06:37 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:06:00.038 19:06:37 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:06:00.038 19:06:37 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:06:00.038 19:06:37 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:06:00.038 19:06:37 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:06:00.038 19:06:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:00.038 19:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:00.038 ************************************ 00:06:00.038 START TEST bdev_fio_rw_verify 00:06:00.038 ************************************ 00:06:00.038 19:06:37 -- common/autotest_common.sh@1102 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:06:00.038 19:06:37 -- common/autotest_common.sh@1333 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:06:00.038 19:06:37 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:06:00.038 19:06:37 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:06:00.038 19:06:37 -- common/autotest_common.sh@1316 -- # local sanitizers 00:06:00.038 19:06:37 -- common/autotest_common.sh@1317 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:06:00.038 19:06:37 -- common/autotest_common.sh@1318 -- # shift 00:06:00.038 19:06:37 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:06:00.038 19:06:37 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:06:00.038 19:06:37 -- common/autotest_common.sh@1322 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:06:00.038 19:06:37 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:06:00.038 19:06:37 -- common/autotest_common.sh@1322 -- # grep libasan 00:06:00.038 19:06:37 -- common/autotest_common.sh@1322 -- # asan_lib= 00:06:00.038 19:06:37 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:06:00.038 19:06:37 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:06:00.038 19:06:37 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:06:00.038 19:06:37 -- common/autotest_common.sh@1322 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:06:00.038 19:06:37 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:06:00.038 19:06:37 -- common/autotest_common.sh@1322 -- # asan_lib= 00:06:00.038 19:06:37 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:06:00.038 19:06:37 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:06:00.038 19:06:37 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:06:00.038 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_Malloc2p4: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:00.038 fio-3.35 00:06:00.297 Starting 16 threads 00:06:00.874 EAL: TSC is not safe to use in SMP mode 00:06:00.874 EAL: TSC is not invariant 00:06:13.083 00:06:13.083 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=102696: Wed Feb 14 19:06:48 2024 00:06:13.083 read: IOPS=261k, BW=1018MiB/s (1068MB/s)(9.94GiB/10002msec) 00:06:13.083 slat (nsec): min=217, max=535480k, avg=3323.89, stdev=494139.23 00:06:13.083 clat (nsec): min=579, max=535509k, avg=43728.37, stdev=1493456.36 00:06:13.083 lat (nsec): min=1549, max=535510k, avg=47052.26, stdev=1573097.72 00:06:13.083 clat percentiles (usec): 00:06:13.083 | 50.000th=[ 8], 99.000th=[ 775], 99.900th=[ 857], 00:06:13.083 | 99.990th=[ 87557], 99.999th=[152044] 00:06:13.083 write: IOPS=444k, BW=1735MiB/s (1820MB/s)(17.0GiB/10002msec); 0 zone resets 00:06:13.083 slat (nsec): min=433, max=670348k, avg=18208.66, stdev=856347.24 00:06:13.083 clat (nsec): min=531, max=3692.1M, avg=96600.24, stdev=5415865.57 00:06:13.083 lat (usec): min=9, max=3692.1k, avg=114.81, stdev=5485.49 00:06:13.083 clat percentiles (usec): 00:06:13.083 | 50.000th=[ 43], 99.000th=[ 742], 99.900th=[ 1549], 00:06:13.083 | 99.990th=[ 94897], 99.999th=[287310] 00:06:13.083 bw ( MiB/s): min= 598, max= 2909, per=100.00%, avg=1739.78, stdev=45.59, samples=293 00:06:13.083 iops : min=153269, max=744728, avg=445378.23, stdev=11670.52, samples=293 00:06:13.083 lat (nsec) : 750=0.01%, 1000=0.01% 00:06:13.083 lat (usec) : 2=0.64%, 4=14.52%, 10=18.62%, 20=17.20%, 50=21.94% 00:06:13.083 lat (usec) : 100=24.48%, 250=0.98%, 500=0.07%, 750=0.41%, 1000=1.01% 00:06:13.083 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 20=0.02%, 50=0.01% 00:06:13.083 lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:06:13.083 lat (msec) : >=2000=0.01% 00:06:13.083 cpu : usr=55.64%, sys=3.49%, ctx=939223, majf=0, minf=670 00:06:13.083 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:06:13.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:06:13.083 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:06:13.083 issued rwts: total=2606908,4443551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:06:13.083 latency : target=0, window=0, percentile=100.00%, depth=8 00:06:13.083 00:06:13.083 Run status group 0 (all jobs): 00:06:13.083 READ: bw=1018MiB/s (1068MB/s), 1018MiB/s-1018MiB/s (1068MB/s-1068MB/s), io=9.94GiB 
(10.7GB), run=10002-10002msec 00:06:13.083 WRITE: bw=1735MiB/s (1820MB/s), 1735MiB/s-1735MiB/s (1820MB/s-1820MB/s), io=17.0GiB (18.2GB), run=10002-10002msec 00:06:13.083 00:06:13.083 real 0m12.422s 00:06:13.083 user 1m33.355s 00:06:13.083 sys 0m8.828s 00:06:13.083 19:06:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.083 ************************************ 00:06:13.083 END TEST bdev_fio_rw_verify 00:06:13.083 ************************************ 00:06:13.083 19:06:49 -- common/autotest_common.sh@10 -- # set +x 00:06:13.083 19:06:49 -- bdev/blockdev.sh@348 -- # rm -f 00:06:13.083 19:06:49 -- bdev/blockdev.sh@349 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:06:13.083 19:06:49 -- bdev/blockdev.sh@352 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:06:13.083 19:06:49 -- common/autotest_common.sh@1257 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:06:13.083 19:06:49 -- common/autotest_common.sh@1258 -- # local workload=trim 00:06:13.083 19:06:49 -- common/autotest_common.sh@1259 -- # local bdev_type= 00:06:13.083 19:06:49 -- common/autotest_common.sh@1260 -- # local env_context= 00:06:13.083 19:06:49 -- common/autotest_common.sh@1261 -- # local fio_dir=/usr/src/fio 00:06:13.083 19:06:49 -- common/autotest_common.sh@1263 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:06:13.083 19:06:49 -- common/autotest_common.sh@1268 -- # '[' -z trim ']' 00:06:13.083 19:06:49 -- common/autotest_common.sh@1272 -- # '[' -n '' ']' 00:06:13.083 19:06:49 -- common/autotest_common.sh@1276 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:06:13.083 19:06:49 -- common/autotest_common.sh@1278 -- # cat 00:06:13.083 19:06:49 -- common/autotest_common.sh@1290 -- # '[' trim == verify ']' 00:06:13.083 19:06:49 -- common/autotest_common.sh@1305 -- # '[' trim == trim ']' 00:06:13.083 19:06:49 -- common/autotest_common.sh@1306 -- # echo rw=trimwrite 00:06:13.083 19:06:49 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:06:13.084 19:06:49 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2a0943a5-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2a0943a5-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f3539a71-288f-1553-a6db-2496fbb846b4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f3539a71-288f-1553-a6db-2496fbb846b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "e135c30d-9858-fe58-b3b1-ef670f966f44"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "e135c30d-9858-fe58-b3b1-ef670f966f44",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "4616951a-9741-905c-8b97-7f68f3c4fecf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4616951a-9741-905c-8b97-7f68f3c4fecf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "3fd87ae9-0721-5452-ad8c-d31707088bf1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3fd87ae9-0721-5452-ad8c-d31707088bf1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "3ae1a254-0e8f-7755-bbcf-3a5f2d3a6d34"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3ae1a254-0e8f-7755-bbcf-3a5f2d3a6d34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "19d8abea-4b82-0b54-ad63-9d567afd8802"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "19d8abea-4b82-0b54-ad63-9d567afd8802",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "4b2834fb-7115-f052-aa26-24724906f6bf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4b2834fb-7115-f052-aa26-24724906f6bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "736d1923-46df-5856-8cb8-51dea0c4c5ff"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "736d1923-46df-5856-8cb8-51dea0c4c5ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "10e79759-a765-5350-b0b3-1294a0a48548"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "10e79759-a765-5350-b0b3-1294a0a48548",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a648cb98-ffe1-d157-a760-2f20218ef32c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a648cb98-ffe1-d157-a760-2f20218ef32c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' 
"8f40a498-7932-2c57-bad4-7b0ca0321750"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8f40a498-7932-2c57-bad4-7b0ca0321750",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "2a16ba9a-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2a16ba9a-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2a16ba9a-cb6c-11ee-af6b-4feeebbbadda",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "2a0e24d4-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "2a0f5d4e-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "2a17e977-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2a17e977-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2a17e977-cb6c-11ee-af6b-4feeebbbadda",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "2a1095c7-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": 
true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "2a11ce4d-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "2a19219b-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2a19219b-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2a19219b-cb6c-11ee-af6b-4feeebbbadda",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2a1306cb-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "2a143f4c-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "2a21124a-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "2a21124a-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:06:13.084 19:06:49 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:06:13.084 Malloc1p0 00:06:13.084 Malloc1p1 00:06:13.084 Malloc2p0 00:06:13.084 Malloc2p1 00:06:13.084 Malloc2p2 00:06:13.084 Malloc2p3 00:06:13.084 Malloc2p4 00:06:13.084 Malloc2p5 00:06:13.084 Malloc2p6 00:06:13.084 Malloc2p7 00:06:13.084 TestPT 00:06:13.084 raid0 00:06:13.084 concat0 ]] 00:06:13.084 19:06:49 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:06:13.084 19:06:49 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2a0943a5-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2a0943a5-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "f3539a71-288f-1553-a6db-2496fbb846b4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "f3539a71-288f-1553-a6db-2496fbb846b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "e135c30d-9858-fe58-b3b1-ef670f966f44"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "e135c30d-9858-fe58-b3b1-ef670f966f44",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "4616951a-9741-905c-8b97-7f68f3c4fecf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4616951a-9741-905c-8b97-7f68f3c4fecf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "3fd87ae9-0721-5452-ad8c-d31707088bf1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3fd87ae9-0721-5452-ad8c-d31707088bf1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "3ae1a254-0e8f-7755-bbcf-3a5f2d3a6d34"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' 
"uuid": "3ae1a254-0e8f-7755-bbcf-3a5f2d3a6d34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "19d8abea-4b82-0b54-ad63-9d567afd8802"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "19d8abea-4b82-0b54-ad63-9d567afd8802",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "4b2834fb-7115-f052-aa26-24724906f6bf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4b2834fb-7115-f052-aa26-24724906f6bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "736d1923-46df-5856-8cb8-51dea0c4c5ff"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "736d1923-46df-5856-8cb8-51dea0c4c5ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "10e79759-a765-5350-b0b3-1294a0a48548"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "10e79759-a765-5350-b0b3-1294a0a48548",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' 
"driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "a648cb98-ffe1-d157-a760-2f20218ef32c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a648cb98-ffe1-d157-a760-2f20218ef32c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "8f40a498-7932-2c57-bad4-7b0ca0321750"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8f40a498-7932-2c57-bad4-7b0ca0321750",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "2a16ba9a-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2a16ba9a-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2a16ba9a-cb6c-11ee-af6b-4feeebbbadda",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "2a0e24d4-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "2a0f5d4e-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "2a17e977-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2a17e977-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2a17e977-cb6c-11ee-af6b-4feeebbbadda",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "2a1095c7-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "2a11ce4d-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "2a19219b-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2a19219b-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "2a19219b-cb6c-11ee-af6b-4feeebbbadda",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "2a1306cb-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "2a143f4c-cb6c-11ee-af6b-4feeebbbadda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "2a21124a-cb6c-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "2a21124a-cb6c-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:06:13.084 19:06:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' 
"${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.084 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:06:13.084 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:06:13.084 19:06:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.084 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:06:13.084 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:06:13.084 19:06:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.084 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:06:13.084 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:06:13.084 19:06:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.084 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:06:13.084 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:06:13.084 19:06:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.084 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:06:13.084 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:06:13.084 19:06:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.084 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:06:13.084 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:06:13.084 19:06:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.084 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:06:13.084 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:06:13.084 19:06:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.084 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:06:13.084 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:06:13.084 19:06:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.084 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:06:13.084 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:06:13.084 19:06:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.084 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:06:13.085 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:06:13.085 19:06:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.085 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:06:13.085 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:06:13.085 19:06:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.085 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:06:13.085 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:06:13.085 19:06:49 -- 
bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.085 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:06:13.085 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:06:13.085 19:06:49 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:06:13.085 19:06:49 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:06:13.085 19:06:49 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:06:13.085 19:06:49 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:06:13.085 19:06:49 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:06:13.085 19:06:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:13.085 19:06:49 -- common/autotest_common.sh@10 -- # set +x 00:06:13.085 ************************************ 00:06:13.085 START TEST bdev_fio_trim 00:06:13.085 ************************************ 00:06:13.085 19:06:49 -- common/autotest_common.sh@1102 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:06:13.085 19:06:49 -- common/autotest_common.sh@1333 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:06:13.085 19:06:49 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:06:13.085 19:06:49 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:06:13.085 19:06:49 -- common/autotest_common.sh@1316 -- # local sanitizers 00:06:13.085 19:06:49 -- common/autotest_common.sh@1317 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:06:13.085 19:06:49 -- common/autotest_common.sh@1318 -- # shift 00:06:13.085 19:06:49 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:06:13.085 19:06:49 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:06:13.085 19:06:49 -- common/autotest_common.sh@1322 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:06:13.085 19:06:49 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:06:13.085 19:06:49 -- common/autotest_common.sh@1322 -- # grep libasan 00:06:13.085 19:06:49 -- common/autotest_common.sh@1322 -- # asan_lib= 00:06:13.085 19:06:49 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:06:13.085 19:06:49 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:06:13.085 19:06:49 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:06:13.085 19:06:49 -- common/autotest_common.sh@1322 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:06:13.085 19:06:49 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:06:13.085 19:06:49 -- common/autotest_common.sh@1322 -- # asan_lib= 00:06:13.085 19:06:49 -- 
common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:06:13.085 19:06:49 -- common/autotest_common.sh@1329 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:06:13.085 19:06:49 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:06:13.085 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:06:13.085 fio-3.35 00:06:13.085 Starting 14 threads 00:06:13.344 EAL: TSC is not safe to use in SMP mode 00:06:13.344 EAL: TSC is not invariant 00:06:25.554 00:06:25.554 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=102715: Wed Feb 14 19:07:01 2024 00:06:25.554 write: IOPS=2684k, BW=10.2GiB/s (11.0GB/s)(102GiB/10002msec); 0 zone resets 00:06:25.554 slat (nsec): min=217, max=2195.6M, avg=1090.63, stdev=452700.86 00:06:25.554 clat (nsec): min=1116, max=2195.6M, avg=14200.04, stdev=1619477.20 00:06:25.554 lat (nsec): min=1615, max=2195.6M, avg=15290.67, stdev=1681559.23 00:06:25.554 clat percentiles (usec): 00:06:25.554 | 50.000th=[ 6], 99.000th=[ 15], 99.900th=[ 947], 99.990th=[ 963], 00:06:25.554 | 99.999th=[94897] 00:06:25.554 bw ( MiB/s): min= 4345, max=17417, per=100.00%, avg=10801.55, stdev=305.95, samples=255 00:06:25.554 iops : min=1112341, max=4458960, avg=2765192.09, stdev=78322.87, samples=255 00:06:25.554 trim: IOPS=2684k, BW=10.2GiB/s (11.0GB/s)(102GiB/10002msec); 0 zone resets 00:06:25.554 slat (nsec): min=437, max=1378.0M, avg=1833.66, stdev=372053.48 00:06:25.554 clat (nsec): min=305, max=2195.6M, avg=9922.49, stdev=1119718.74 00:06:25.554 
lat (nsec): min=1710, max=2195.6M, avg=11756.15, stdev=1179917.73 00:06:25.554 clat percentiles (usec): 00:06:25.554 | 50.000th=[ 7], 99.000th=[ 15], 99.900th=[ 26], 99.990th=[ 39], 00:06:25.554 | 99.999th=[94897] 00:06:25.554 bw ( MiB/s): min= 4345, max=17417, per=100.00%, avg=10801.56, stdev=305.95, samples=255 00:06:25.554 iops : min=1112339, max=4458964, avg=2765193.92, stdev=78322.88, samples=255 00:06:25.554 lat (nsec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:06:25.554 lat (usec) : 2=3.42%, 4=24.99%, 10=61.11%, 20=9.99%, 50=0.29% 00:06:25.554 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.15% 00:06:25.554 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:06:25.554 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:06:25.554 lat (msec) : 2000=0.01%, >=2000=0.01% 00:06:25.554 cpu : usr=63.09%, sys=4.79%, ctx=1233240, majf=0, minf=0 00:06:25.554 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:06:25.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:06:25.554 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:06:25.554 issued rwts: total=0,26843941,26843947,0 short=0,0,0,0 dropped=0,0,0,0 00:06:25.554 latency : target=0, window=0, percentile=100.00%, depth=8 00:06:25.554 00:06:25.554 Run status group 0 (all jobs): 00:06:25.554 WRITE: bw=10.2GiB/s (11.0GB/s), 10.2GiB/s-10.2GiB/s (11.0GB/s-11.0GB/s), io=102GiB (110GB), run=10002-10002msec 00:06:25.554 TRIM: bw=10.2GiB/s (11.0GB/s), 10.2GiB/s-10.2GiB/s (11.0GB/s-11.0GB/s), io=102GiB (110GB), run=10002-10002msec 00:06:25.554 00:06:25.554 real 0m12.475s 00:06:25.554 user 1m33.572s 00:06:25.554 sys 0m10.090s 00:06:25.554 ************************************ 00:06:25.554 END TEST bdev_fio_trim 00:06:25.554 ************************************ 00:06:25.554 19:07:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.554 19:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:25.554 19:07:02 -- bdev/blockdev.sh@366 -- # rm -f 00:06:25.554 19:07:02 -- bdev/blockdev.sh@367 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:06:25.554 /usr/home/vagrant/spdk_repo/spdk 00:06:25.554 19:07:02 -- bdev/blockdev.sh@368 -- # popd 00:06:25.554 19:07:02 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:06:25.554 00:06:25.554 real 0m25.925s 00:06:25.554 user 3m7.231s 00:06:25.554 sys 0m19.603s 00:06:25.554 19:07:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.554 ************************************ 00:06:25.554 END TEST bdev_fio 00:06:25.554 ************************************ 00:06:25.554 19:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:25.554 19:07:02 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:25.554 19:07:02 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:25.554 19:07:02 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:06:25.554 19:07:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:25.554 19:07:02 -- common/autotest_common.sh@10 -- # set +x 00:06:25.554 ************************************ 00:06:25.554 START TEST bdev_verify 00:06:25.554 ************************************ 00:06:25.554 19:07:02 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 
-q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:25.554 [2024-02-14 19:07:02.468483] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:25.554 [2024-02-14 19:07:02.468749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:25.813 EAL: TSC is not safe to use in SMP mode 00:06:25.813 EAL: TSC is not invariant 00:06:25.813 [2024-02-14 19:07:03.208099] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.077 [2024-02-14 19:07:03.317842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.077 [2024-02-14 19:07:03.317909] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:26.077 [2024-02-14 19:07:03.317839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.077 [2024-02-14 19:07:03.377387] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:26.077 [2024-02-14 19:07:03.377422] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:26.077 [2024-02-14 19:07:03.385371] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:26.077 [2024-02-14 19:07:03.385387] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:26.077 [2024-02-14 19:07:03.393387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:26.077 [2024-02-14 19:07:03.393406] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:26.078 [2024-02-14 19:07:03.393413] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:26.078 [2024-02-14 19:07:03.441389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:26.078 [2024-02-14 19:07:03.441428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:26.078 [2024-02-14 19:07:03.441440] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82a484800 00:06:26.078 [2024-02-14 19:07:03.441448] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:26.078 [2024-02-14 19:07:03.441761] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:26.078 [2024-02-14 19:07:03.441787] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:26.338 Running I/O for 5 seconds... 
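Note: the verify pass above is driven by the bdevperf invocation echoed at the start of this test, run against the bdevs defined in bdev.json. A minimal sketch of an equivalent standalone run follows, using the paths printed in this log; the contents of bdev.json are not shown here, and the per-flag notes are inferred from the paired Core Mask 0x1/0x2 job rows in the results table below rather than quoted from the tool's help text.
  # sketch: re-running the same verify workload by hand
  # --json   : bdev definitions (Malloc*, TestPT passthru, AIO0, ...)
  # -q 128   : queue depth per job          -o 4096 : 4 KiB I/O size
  # -w verify: write, read back, compare    -t 5    : 5-second run
  # -C -m 0x3: one job per bdev on each core of mask 0x3 (hence the paired 0x1/0x2 rows below)
  /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3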
00:06:31.660 00:06:31.660 Latency(us) 00:06:31.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:31.660 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x1000 00:06:31.660 Malloc0 : 5.02 12626.91 49.32 0.00 0.00 10125.83 234.06 15666.21 00:06:31.660 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x1000 length 0x1000 00:06:31.660 Malloc0 : 5.03 57.50 0.22 0.00 0.00 2222942.97 928.43 3786854.89 00:06:31.660 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x800 00:06:31.660 Malloc1p0 : 5.02 8929.05 34.88 0.00 0.00 14319.46 423.25 14979.65 00:06:31.660 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x800 length 0x800 00:06:31.660 Malloc1p0 : 5.02 9975.20 38.97 0.00 0.00 12816.17 425.20 13981.00 00:06:31.660 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x800 00:06:31.660 Malloc1p1 : 5.02 8928.64 34.88 0.00 0.00 14317.61 405.70 14667.57 00:06:31.660 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x800 length 0x800 00:06:31.660 Malloc1p1 : 5.02 9974.77 38.96 0.00 0.00 12815.14 421.30 13668.93 00:06:31.660 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x200 00:06:31.660 Malloc2p0 : 5.02 8928.27 34.88 0.00 0.00 14315.77 401.80 14480.33 00:06:31.660 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x200 length 0x200 00:06:31.660 Malloc2p0 : 5.02 9974.45 38.96 0.00 0.00 12813.13 407.65 13481.68 00:06:31.660 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x200 00:06:31.660 Malloc2p1 : 5.02 8927.91 34.87 0.00 0.00 14313.82 394.00 14355.50 00:06:31.660 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x200 length 0x200 00:06:31.660 Malloc2p1 : 5.02 9974.14 38.96 0.00 0.00 12811.39 403.75 13169.61 00:06:31.660 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x200 00:06:31.660 Malloc2p2 : 5.02 8927.52 34.87 0.00 0.00 14312.14 390.09 14105.83 00:06:31.660 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x200 length 0x200 00:06:31.660 Malloc2p2 : 5.02 9973.79 38.96 0.00 0.00 12809.67 405.70 12919.95 00:06:31.660 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x200 00:06:31.660 Malloc2p3 : 5.02 8927.17 34.87 0.00 0.00 14310.61 386.19 13981.00 00:06:31.660 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x200 length 0x200 00:06:31.660 Malloc2p3 : 5.02 9973.49 38.96 0.00 0.00 12808.46 419.35 12732.70 00:06:31.660 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x200 00:06:31.660 Malloc2p4 : 5.02 8926.82 34.87 0.00 0.00 14309.04 427.15 13668.93 
00:06:31.660 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x200 length 0x200 00:06:31.660 Malloc2p4 : 5.02 9973.17 38.96 0.00 0.00 12805.49 425.20 12545.45 00:06:31.660 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x200 00:06:31.660 Malloc2p5 : 5.02 8926.45 34.87 0.00 0.00 14306.78 395.95 13294.44 00:06:31.660 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x200 length 0x200 00:06:31.660 Malloc2p5 : 5.03 9972.87 38.96 0.00 0.00 12804.30 411.55 12233.38 00:06:31.660 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x200 00:06:31.660 Malloc2p6 : 5.02 8926.09 34.87 0.00 0.00 14305.38 409.60 13044.78 00:06:31.660 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x200 length 0x200 00:06:31.660 Malloc2p6 : 5.03 9972.57 38.96 0.00 0.00 12802.34 405.70 12046.13 00:06:31.660 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x200 00:06:31.660 Malloc2p7 : 5.02 8925.75 34.87 0.00 0.00 14302.78 395.95 12732.70 00:06:31.660 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x200 length 0x200 00:06:31.660 Malloc2p7 : 5.03 9972.28 38.95 0.00 0.00 12800.68 399.85 11796.47 00:06:31.660 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x1000 00:06:31.660 TestPT : 5.02 8910.52 34.81 0.00 0.00 14324.03 807.50 12982.36 00:06:31.660 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x1000 length 0x1000 00:06:31.660 TestPT : 5.03 6446.71 25.18 0.00 0.00 19799.59 1053.26 34952.51 00:06:31.660 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x2000 00:06:31.660 raid0 : 5.02 8925.05 34.86 0.00 0.00 14298.52 425.20 12108.55 00:06:31.660 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x2000 length 0x2000 00:06:31.660 raid0 : 5.03 9971.61 38.95 0.00 0.00 12796.60 434.96 10860.24 00:06:31.660 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x2000 00:06:31.660 concat0 : 5.02 8924.70 34.86 0.00 0.00 14297.45 399.85 11796.47 00:06:31.660 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x2000 length 0x2000 00:06:31.660 concat0 : 5.03 9971.31 38.95 0.00 0.00 12795.01 417.40 10922.66 00:06:31.660 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x1000 00:06:31.660 raid1 : 5.02 8924.27 34.86 0.00 0.00 14295.05 479.82 11796.47 00:06:31.660 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x1000 length 0x1000 00:06:31.660 raid1 : 5.03 9971.01 38.95 0.00 0.00 12793.36 530.53 11047.49 00:06:31.660 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x0 length 0x4e2 00:06:31.660 
AIO0 : 5.07 1287.32 5.03 0.00 0.00 98741.34 7677.07 206719.14 00:06:31.660 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.660 Verification LBA range: start 0x4e2 length 0x4e2 00:06:31.661 AIO0 : 5.07 1287.67 5.03 0.00 0.00 98689.29 6272.73 226692.00 00:06:31.661 =================================================================================================================== 00:06:31.661 Total : 276314.99 1079.36 0.00 0.00 14793.90 234.06 3786854.89 00:06:31.661 [2024-02-14 19:07:08.656186] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:31.661 00:06:31.661 real 0m6.530s 00:06:31.661 user 0m10.953s 00:06:31.661 sys 0m0.873s 00:06:31.661 19:07:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.661 19:07:08 -- common/autotest_common.sh@10 -- # set +x 00:06:31.661 ************************************ 00:06:31.661 END TEST bdev_verify 00:06:31.661 ************************************ 00:06:31.661 19:07:09 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:31.661 19:07:09 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:06:31.661 19:07:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:31.661 19:07:09 -- common/autotest_common.sh@10 -- # set +x 00:06:31.661 ************************************ 00:06:31.661 START TEST bdev_verify_big_io 00:06:31.661 ************************************ 00:06:31.661 19:07:09 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:31.661 [2024-02-14 19:07:09.041105] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:31.661 [2024-02-14 19:07:09.041362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:32.594 EAL: TSC is not safe to use in SMP mode 00:06:32.594 EAL: TSC is not invariant 00:06:32.594 [2024-02-14 19:07:09.826119] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.594 [2024-02-14 19:07:09.957215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.594 [2024-02-14 19:07:09.957308] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:32.594 [2024-02-14 19:07:09.957208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.852 [2024-02-14 19:07:10.018869] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:32.852 [2024-02-14 19:07:10.018938] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:32.852 [2024-02-14 19:07:10.026849] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:32.852 [2024-02-14 19:07:10.026874] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:32.852 [2024-02-14 19:07:10.034869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:32.852 [2024-02-14 19:07:10.034898] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:32.852 [2024-02-14 19:07:10.034909] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:32.852 [2024-02-14 19:07:10.082877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:32.852 [2024-02-14 19:07:10.082939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:32.852 [2024-02-14 19:07:10.082952] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d474800 00:06:32.852 [2024-02-14 19:07:10.082960] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:32.852 [2024-02-14 19:07:10.083347] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:32.852 [2024-02-14 19:07:10.083370] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:32.853 [2024-02-14 19:07:10.184000] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.184157] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.184252] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.184343] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.184466] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.184619] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.184753] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.184890] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.185024] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.185156] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.185298] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.185433] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.185559] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.185697] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.185834] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.185965] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:06:32.853 [2024-02-14 19:07:10.187611] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:06:32.853 [2024-02-14 19:07:10.187799] bdevperf.c:1812:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:06:32.853 Running I/O for 5 seconds... 00:06:38.122 00:06:38.122 Latency(us) 00:06:38.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:38.122 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.122 Verification LBA range: start 0x0 length 0x100 00:06:38.122 Malloc0 : 5.07 3574.50 223.41 0.00 0.00 35665.17 2356.17 125829.04 00:06:38.122 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.122 Verification LBA range: start 0x100 length 0x100 00:06:38.122 Malloc0 : 5.06 3936.12 246.01 0.00 0.00 32334.70 2387.38 152792.40 00:06:38.122 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.122 Verification LBA range: start 0x0 length 0x80 00:06:38.122 Malloc1p0 : 5.08 2377.42 148.59 0.00 0.00 53524.90 4681.14 73899.59 00:06:38.122 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.122 Verification LBA range: start 0x80 length 0x80 00:06:38.122 Malloc1p0 : 5.10 1015.35 63.46 0.00 0.00 125153.07 4618.72 240673.00 00:06:38.122 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.122 Verification LBA range: start 0x0 length 0x80 00:06:38.122 Malloc1p1 : 5.10 926.23 57.89 0.00 0.00 137200.98 3760.52 257649.94 00:06:38.122 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.122 Verification LBA range: start 0x80 length 0x80 00:06:38.122 Malloc1p1 : 5.09 1871.03 116.94 0.00 0.00 67812.23 4556.31 189742.20 00:06:38.122 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:38.122 Verification LBA range: start 0x0 length 0x20 00:06:38.122 Malloc2p0 : 5.07 608.49 38.03 0.00 0.00 52181.37 1092.27 69905.02 00:06:38.122 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:38.122 Verification LBA range: start 0x20 length 0x20 00:06:38.122 Malloc2p0 : 5.08 674.33 42.15 0.00 0.00 47071.40 1349.73 66409.77 00:06:38.122 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:38.122 Verification LBA range: start 0x0 length 0x20 00:06:38.122 Malloc2p1 : 5.07 608.46 38.03 0.00 0.00 52161.55 1084.46 69905.02 00:06:38.122 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:38.122 Verification LBA range: start 0x20 length 0x20 00:06:38.122 Malloc2p1 : 5.08 674.28 42.14 0.00 0.00 47040.49 1326.32 67408.41 00:06:38.122 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:38.122 Verification LBA range: start 0x0 length 0x20 00:06:38.122 Malloc2p2 : 5.07 608.42 38.03 0.00 0.00 52141.15 1115.67 70404.34 
00:06:38.122 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:38.122 Verification LBA range: start 0x20 length 0x20 00:06:38.123 Malloc2p2 : 5.08 674.24 42.14 0.00 0.00 47019.30 1115.67 67907.74 00:06:38.123 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x0 length 0x20 00:06:38.123 Malloc2p3 : 5.07 608.39 38.02 0.00 0.00 52122.33 1100.07 70404.34 00:06:38.123 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x20 length 0x20 00:06:38.123 Malloc2p3 : 5.08 674.20 42.14 0.00 0.00 46997.91 1131.28 67907.74 00:06:38.123 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x0 length 0x20 00:06:38.123 Malloc2p4 : 5.07 608.36 38.02 0.00 0.00 52099.52 1131.28 70404.34 00:06:38.123 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x20 length 0x20 00:06:38.123 Malloc2p4 : 5.08 674.16 42.13 0.00 0.00 46979.60 1123.47 68407.06 00:06:38.123 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x0 length 0x20 00:06:38.123 Malloc2p5 : 5.07 608.33 38.02 0.00 0.00 52072.02 1146.88 70404.34 00:06:38.123 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x20 length 0x20 00:06:38.123 Malloc2p5 : 5.08 674.11 42.13 0.00 0.00 46960.49 1115.67 68906.38 00:06:38.123 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x0 length 0x20 00:06:38.123 Malloc2p6 : 5.07 608.29 38.02 0.00 0.00 52051.38 1170.28 70404.34 00:06:38.123 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x20 length 0x20 00:06:38.123 Malloc2p6 : 5.08 674.07 42.13 0.00 0.00 46941.89 1076.66 69405.70 00:06:38.123 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x0 length 0x20 00:06:38.123 Malloc2p7 : 5.08 611.54 38.22 0.00 0.00 51801.55 1131.28 70903.66 00:06:38.123 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x20 length 0x20 00:06:38.123 Malloc2p7 : 5.08 674.03 42.13 0.00 0.00 46921.32 1146.88 69405.70 00:06:38.123 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x0 length 0x100 00:06:38.123 TestPT : 5.12 918.77 57.42 0.00 0.00 137252.40 3042.74 257649.94 00:06:38.123 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x100 length 0x100 00:06:38.123 TestPT : 5.20 75.33 4.71 0.00 0.00 1668700.25 17975.58 3323484.46 00:06:38.123 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x0 length 0x200 00:06:38.123 raid0 : 5.10 931.63 58.23 0.00 0.00 135706.14 3869.74 257649.94 00:06:38.123 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x200 length 0x200 00:06:38.123 raid0 : 5.10 1027.57 64.22 0.00 0.00 122915.65 3838.53 239674.36 00:06:38.123 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x0 length 0x200 
00:06:38.123 concat0 : 5.10 931.61 58.23 0.00 0.00 135448.86 3744.91 257649.94 00:06:38.123 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x200 length 0x200 00:06:38.123 concat0 : 5.10 1027.55 64.22 0.00 0.00 122679.62 3791.72 239674.36 00:06:38.123 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x0 length 0x100 00:06:38.123 raid1 : 5.10 937.04 58.56 0.00 0.00 134471.66 4556.31 257649.94 00:06:38.123 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x100 length 0x100 00:06:38.123 raid1 : 5.10 1036.11 64.76 0.00 0.00 121532.45 5305.29 240673.00 00:06:38.123 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x0 length 0x4e 00:06:38.123 AIO0 : 5.10 923.85 57.74 0.00 0.00 82960.69 1458.96 146800.55 00:06:38.123 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:06:38.123 Verification LBA range: start 0x4e length 0x4e 00:06:38.123 AIO0 : 5.10 1016.40 63.52 0.00 0.00 75362.00 2527.82 138811.40 00:06:38.123 =================================================================================================================== 00:06:38.123 Total : 32790.21 2049.39 0.00 0.00 74356.88 1076.66 3323484.46 00:06:38.123 [2024-02-14 19:07:15.469827] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:38.691 00:06:38.691 real 0m6.765s 00:06:38.691 user 0m11.428s 00:06:38.691 sys 0m1.017s 00:06:38.691 19:07:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.691 ************************************ 00:06:38.691 END TEST bdev_verify_big_io 00:06:38.691 ************************************ 00:06:38.691 19:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:38.691 19:07:15 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:38.691 19:07:15 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:06:38.691 19:07:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:38.691 19:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:38.691 ************************************ 00:06:38.691 START TEST bdev_write_zeroes 00:06:38.691 ************************************ 00:06:38.691 19:07:15 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:38.691 [2024-02-14 19:07:15.850938] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:38.691 [2024-02-14 19:07:15.851154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:39.259 EAL: TSC is not safe to use in SMP mode 00:06:39.259 EAL: TSC is not invariant 00:06:39.259 [2024-02-14 19:07:16.612139] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.520 [2024-02-14 19:07:16.727658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.520 [2024-02-14 19:07:16.727744] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:39.520 [2024-02-14 19:07:16.787800] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:39.520 [2024-02-14 19:07:16.787874] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:39.520 [2024-02-14 19:07:16.795787] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:39.520 [2024-02-14 19:07:16.795820] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:39.520 [2024-02-14 19:07:16.803807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:39.520 [2024-02-14 19:07:16.803826] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:39.520 [2024-02-14 19:07:16.803833] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:39.520 [2024-02-14 19:07:16.851811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:39.520 [2024-02-14 19:07:16.851899] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:39.520 [2024-02-14 19:07:16.851913] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d108800 00:06:39.520 [2024-02-14 19:07:16.851922] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:39.520 [2024-02-14 19:07:16.852423] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:39.520 [2024-02-14 19:07:16.852450] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:39.779 Running I/O for 1 seconds... 
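Note: -w write_zeroes makes bdevperf issue write-zeroes commands rather than data writes, so the table that follows profiles that path per bdev. Whether a bdev advertises the operation can be checked over RPC; the sketch below is illustrative only: the bdev name is one of this run's malloc bdevs, and jq is an assumption, not something this job actually uses.
  # sketch: read the write_zeroes capability bit reported by bdev_get_bdevs
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0 \
      | jq '.[0].supported_io_types.write_zeroes'    # reports true for the malloc bdevs in this run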
00:06:40.714 00:06:40.714 Latency(us) 00:06:40.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:40.714 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 Malloc0 : 1.01 22369.65 87.38 0.00 0.00 5720.61 248.69 9924.02 00:06:40.714 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 Malloc1p0 : 1.01 22366.31 87.37 0.00 0.00 5718.85 267.22 9736.77 00:06:40.714 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 Malloc1p1 : 1.01 22364.07 87.36 0.00 0.00 5716.43 267.22 9736.77 00:06:40.714 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 Malloc2p0 : 1.01 22361.05 87.35 0.00 0.00 5713.58 259.41 9549.53 00:06:40.714 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 Malloc2p1 : 1.01 22358.50 87.34 0.00 0.00 5711.30 263.31 9362.28 00:06:40.714 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 Malloc2p2 : 1.01 22356.24 87.33 0.00 0.00 5709.10 259.41 9175.03 00:06:40.714 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 Malloc2p3 : 1.01 22352.82 87.32 0.00 0.00 5706.67 271.12 9050.20 00:06:40.714 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 Malloc2p4 : 1.01 22403.51 87.51 0.00 0.00 5690.54 265.26 8925.37 00:06:40.714 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 Malloc2p5 : 1.01 22401.21 87.50 0.00 0.00 5687.62 265.26 8800.54 00:06:40.714 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 Malloc2p6 : 1.01 22398.82 87.50 0.00 0.00 5684.81 265.26 8550.88 00:06:40.714 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 Malloc2p7 : 1.01 22395.94 87.48 0.00 0.00 5682.44 271.12 8426.05 00:06:40.714 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 TestPT : 1.01 22393.66 87.48 0.00 0.00 5680.08 273.07 8176.39 00:06:40.714 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 raid0 : 1.01 22390.54 87.46 0.00 0.00 5677.12 306.22 8051.56 00:06:40.714 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 concat0 : 1.01 22387.26 87.45 0.00 0.00 5674.46 296.47 7895.52 00:06:40.714 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 raid1 : 1.01 22382.66 87.43 0.00 0.00 5670.50 557.84 7489.82 00:06:40.714 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:40.714 AIO0 : 1.04 3828.35 14.95 0.00 0.00 32621.99 550.03 119337.86 00:06:40.714 =================================================================================================================== 00:06:40.714 Total : 339510.59 1326.21 0.00 0.00 6009.74 248.69 119337.86 00:06:40.714 [2024-02-14 19:07:18.025884] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:40.973 00:06:40.973 real 0m2.510s 00:06:40.973 user 0m1.518s 00:06:40.973 sys 0m0.851s 00:06:40.973 19:07:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.973 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:40.973 
************************************ 00:06:40.973 END TEST bdev_write_zeroes 00:06:40.973 ************************************ 00:06:40.973 19:07:18 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:40.973 19:07:18 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:06:40.973 19:07:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:40.973 19:07:18 -- common/autotest_common.sh@10 -- # set +x 00:06:41.232 ************************************ 00:06:41.232 START TEST bdev_json_nonenclosed 00:06:41.232 ************************************ 00:06:41.232 19:07:18 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.232 [2024-02-14 19:07:18.404208] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:41.232 [2024-02-14 19:07:18.404559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:41.801 EAL: TSC is not safe to use in SMP mode 00:06:41.801 EAL: TSC is not invariant 00:06:41.801 [2024-02-14 19:07:19.158318] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.060 [2024-02-14 19:07:19.269436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.060 [2024-02-14 19:07:19.269496] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:42.060 [2024-02-14 19:07:19.269555] json_config.c: 598:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:06:42.060 [2024-02-14 19:07:19.269564] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:42.060 [2024-02-14 19:07:19.269572] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.060 [2024-02-14 19:07:19.269584] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:42.060 00:06:42.060 real 0m1.015s 00:06:42.060 user 0m0.195s 00:06:42.060 sys 0m0.819s 00:06:42.060 19:07:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.060 19:07:19 -- common/autotest_common.sh@10 -- # set +x 00:06:42.060 ************************************ 00:06:42.060 END TEST bdev_json_nonenclosed 00:06:42.060 ************************************ 00:06:42.060 19:07:19 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.060 19:07:19 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:06:42.060 19:07:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:42.060 19:07:19 -- common/autotest_common.sh@10 -- # set +x 00:06:42.060 ************************************ 00:06:42.060 START TEST bdev_json_nonarray 00:06:42.060 ************************************ 00:06:42.060 19:07:19 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:42.060 [2024-02-14 19:07:19.463296] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:06:42.060 [2024-02-14 19:07:19.463556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:42.996 EAL: TSC is not safe to use in SMP mode 00:06:42.996 EAL: TSC is not invariant 00:06:42.996 [2024-02-14 19:07:20.220314] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.996 [2024-02-14 19:07:20.333788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.996 [2024-02-14 19:07:20.333862] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:06:42.996 [2024-02-14 19:07:20.333935] json_config.c: 604:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:42.996 [2024-02-14 19:07:20.333944] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:42.996 [2024-02-14 19:07:20.333952] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.996 [2024-02-14 19:07:20.333965] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:06:43.255 00:06:43.255 real 0m1.019s 00:06:43.255 user 0m0.214s 00:06:43.255 sys 0m0.803s 00:06:43.255 19:07:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.255 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:06:43.255 ************************************ 00:06:43.255 END TEST bdev_json_nonarray 00:06:43.255 ************************************ 00:06:43.255 19:07:20 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:06:43.255 19:07:20 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:06:43.255 19:07:20 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:06:43.255 19:07:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:43.255 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:06:43.255 ************************************ 00:06:43.255 START TEST bdev_qos 00:06:43.255 ************************************ 00:06:43.255 19:07:20 -- common/autotest_common.sh@1102 -- # qos_test_suite '' 00:06:43.256 19:07:20 -- bdev/blockdev.sh@444 -- # QOS_PID=48429 00:06:43.256 19:07:20 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 48429' 00:06:43.256 Process qos testing pid: 48429 00:06:43.256 19:07:20 -- bdev/blockdev.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:06:43.256 19:07:20 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:06:43.256 19:07:20 -- bdev/blockdev.sh@447 -- # waitforlisten 48429 00:06:43.256 19:07:20 -- common/autotest_common.sh@817 -- # '[' -z 48429 ']' 00:06:43.256 19:07:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.256 19:07:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:43.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.256 19:07:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.256 19:07:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:43.256 19:07:20 -- common/autotest_common.sh@10 -- # set +x 00:06:43.256 [2024-02-14 19:07:20.532013] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:06:43.256 [2024-02-14 19:07:20.532344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:44.193 EAL: TSC is not safe to use in SMP mode 00:06:44.193 EAL: TSC is not invariant 00:06:44.193 [2024-02-14 19:07:21.260974] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.193 [2024-02-14 19:07:21.387819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.193 19:07:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:44.193 19:07:21 -- common/autotest_common.sh@850 -- # return 0 00:06:44.193 19:07:21 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:06:44.193 19:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:44.193 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.193 Malloc_0 00:06:44.193 19:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:44.193 19:07:21 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:06:44.193 19:07:21 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_0 00:06:44.193 19:07:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:06:44.193 19:07:21 -- common/autotest_common.sh@887 -- # local i 00:06:44.193 19:07:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:06:44.193 19:07:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:06:44.193 19:07:21 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:06:44.193 19:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:44.193 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.193 19:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:44.193 19:07:21 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:06:44.193 19:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:44.194 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.194 [ 00:06:44.194 { 00:06:44.194 "name": "Malloc_0", 00:06:44.194 "aliases": [ 00:06:44.194 "47bd43a5-cb6c-11ee-af6b-4feeebbbadda" 00:06:44.194 ], 00:06:44.194 "product_name": "Malloc disk", 00:06:44.194 "block_size": 512, 00:06:44.194 "num_blocks": 262144, 00:06:44.194 "uuid": "47bd43a5-cb6c-11ee-af6b-4feeebbbadda", 00:06:44.194 "assigned_rate_limits": { 00:06:44.194 "rw_ios_per_sec": 0, 00:06:44.194 "rw_mbytes_per_sec": 0, 00:06:44.194 "r_mbytes_per_sec": 0, 00:06:44.194 "w_mbytes_per_sec": 0 00:06:44.194 }, 00:06:44.194 "claimed": false, 00:06:44.194 "zoned": false, 00:06:44.194 "supported_io_types": { 00:06:44.194 "read": true, 00:06:44.194 "write": true, 00:06:44.194 "unmap": true, 00:06:44.194 "write_zeroes": true, 00:06:44.194 "flush": true, 00:06:44.194 "reset": true, 00:06:44.194 "compare": false, 00:06:44.194 "compare_and_write": false, 00:06:44.194 "abort": true, 00:06:44.194 "nvme_admin": false, 00:06:44.194 "nvme_io": false 00:06:44.194 }, 00:06:44.194 "memory_domains": [ 00:06:44.194 { 00:06:44.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.194 "dma_device_type": 2 00:06:44.194 } 00:06:44.194 ], 00:06:44.194 "driver_specific": {} 00:06:44.194 } 00:06:44.194 ] 00:06:44.194 19:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:44.194 19:07:21 -- common/autotest_common.sh@893 -- # return 0 00:06:44.194 19:07:21 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:06:44.194 19:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:44.194 19:07:21 -- common/autotest_common.sh@10 -- # 
set +x 00:06:44.194 Null_1 00:06:44.194 19:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:44.194 19:07:21 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:06:44.194 19:07:21 -- common/autotest_common.sh@885 -- # local bdev_name=Null_1 00:06:44.194 19:07:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:06:44.194 19:07:21 -- common/autotest_common.sh@887 -- # local i 00:06:44.194 19:07:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:06:44.194 19:07:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:06:44.194 19:07:21 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:06:44.194 19:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:44.194 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.194 19:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:44.194 19:07:21 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:06:44.194 19:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:44.194 19:07:21 -- common/autotest_common.sh@10 -- # set +x 00:06:44.453 [ 00:06:44.453 { 00:06:44.453 "name": "Null_1", 00:06:44.453 "aliases": [ 00:06:44.453 "47c2c15e-cb6c-11ee-af6b-4feeebbbadda" 00:06:44.453 ], 00:06:44.453 "product_name": "Null disk", 00:06:44.453 "block_size": 512, 00:06:44.453 "num_blocks": 262144, 00:06:44.453 "uuid": "47c2c15e-cb6c-11ee-af6b-4feeebbbadda", 00:06:44.453 "assigned_rate_limits": { 00:06:44.453 "rw_ios_per_sec": 0, 00:06:44.453 "rw_mbytes_per_sec": 0, 00:06:44.453 "r_mbytes_per_sec": 0, 00:06:44.453 "w_mbytes_per_sec": 0 00:06:44.453 }, 00:06:44.453 "claimed": false, 00:06:44.453 "zoned": false, 00:06:44.453 "supported_io_types": { 00:06:44.453 "read": true, 00:06:44.453 "write": true, 00:06:44.453 "unmap": false, 00:06:44.453 "write_zeroes": true, 00:06:44.453 "flush": false, 00:06:44.453 "reset": true, 00:06:44.453 "compare": false, 00:06:44.453 "compare_and_write": false, 00:06:44.453 "abort": true, 00:06:44.453 "nvme_admin": false, 00:06:44.453 "nvme_io": false 00:06:44.453 }, 00:06:44.453 "driver_specific": {} 00:06:44.453 } 00:06:44.453 ] 00:06:44.453 19:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:44.453 19:07:21 -- common/autotest_common.sh@893 -- # return 0 00:06:44.453 19:07:21 -- bdev/blockdev.sh@455 -- # qos_function_test 00:06:44.453 19:07:21 -- bdev/blockdev.sh@454 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:44.453 19:07:21 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:06:44.453 19:07:21 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:06:44.453 19:07:21 -- bdev/blockdev.sh@410 -- # local io_result=0 00:06:44.453 19:07:21 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:06:44.453 19:07:21 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:06:44.453 19:07:21 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:06:44.453 19:07:21 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:06:44.453 19:07:21 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:06:44.453 19:07:21 -- bdev/blockdev.sh@375 -- # local iostat_result 00:06:44.453 19:07:21 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:06:44.453 19:07:21 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:44.453 19:07:21 -- bdev/blockdev.sh@376 -- # tail -1 00:06:44.453 Running I/O for 60 seconds... 
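Note: the 60-second run that follows is the QoS suite's measurement loop. Reconstructed from the xtrace below, each round has roughly the shape sketched here; the rounding used to derive the cap is an inference from this run's numbers (650324 unthrottled IOPS becoming a 162000 IOPS limit), not a quote from blockdev.sh.
  # sketch of the first QoS round, following the trace that follows
  result=$(/usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}')
  limit=$(( ${result%.*} / 4 / 1000 * 1000 ))      # about a quarter of the baseline, floored to thousands (inferred)
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec "$limit" Malloc_0
  # run_qos_test then re-measures and requires the throttled rate to sit within +/-10% of $limit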
00:06:49.761 19:07:27 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 650324.22 2601296.88 0.00 0.00 2803712.00 0.00 0.00 ' 00:06:49.761 19:07:27 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:06:49.761 19:07:27 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:06:49.761 19:07:27 -- bdev/blockdev.sh@378 -- # iostat_result=650324.22 00:06:49.761 19:07:27 -- bdev/blockdev.sh@383 -- # echo 650324 00:06:49.761 19:07:27 -- bdev/blockdev.sh@414 -- # io_result=650324 00:06:49.761 19:07:27 -- bdev/blockdev.sh@416 -- # iops_limit=162000 00:06:49.762 19:07:27 -- bdev/blockdev.sh@417 -- # '[' 162000 -gt 1000 ']' 00:06:49.762 19:07:27 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 162000 Malloc_0 00:06:49.762 19:07:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:49.762 19:07:27 -- common/autotest_common.sh@10 -- # set +x 00:06:49.762 19:07:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:49.762 19:07:27 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 162000 IOPS Malloc_0 00:06:49.762 19:07:27 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:06:49.762 19:07:27 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:49.762 19:07:27 -- common/autotest_common.sh@10 -- # set +x 00:06:49.762 ************************************ 00:06:49.762 START TEST bdev_qos_iops 00:06:49.762 ************************************ 00:06:49.762 19:07:27 -- common/autotest_common.sh@1102 -- # run_qos_test 162000 IOPS Malloc_0 00:06:49.762 19:07:27 -- bdev/blockdev.sh@387 -- # local qos_limit=162000 00:06:49.762 19:07:27 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:06:49.762 19:07:27 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:06:49.762 19:07:27 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:06:49.762 19:07:27 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:06:49.762 19:07:27 -- bdev/blockdev.sh@375 -- # local iostat_result 00:06:49.762 19:07:27 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:49.762 19:07:27 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:06:49.762 19:07:27 -- bdev/blockdev.sh@376 -- # tail -1 00:06:56.324 19:07:32 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 161953.53 647814.11 0.00 0.00 679104.00 0.00 0.00 ' 00:06:56.324 19:07:32 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:06:56.324 19:07:32 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:06:56.324 19:07:32 -- bdev/blockdev.sh@378 -- # iostat_result=161953.53 00:06:56.324 19:07:32 -- bdev/blockdev.sh@383 -- # echo 161953 00:06:56.324 19:07:32 -- bdev/blockdev.sh@390 -- # qos_result=161953 00:06:56.324 19:07:32 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:06:56.324 19:07:32 -- bdev/blockdev.sh@394 -- # lower_limit=145800 00:06:56.324 19:07:32 -- bdev/blockdev.sh@395 -- # upper_limit=178200 00:06:56.324 19:07:32 -- bdev/blockdev.sh@398 -- # '[' 161953 -lt 145800 ']' 00:06:56.324 19:07:32 -- bdev/blockdev.sh@398 -- # '[' 161953 -gt 178200 ']' 00:06:56.324 00:06:56.324 real 0m5.414s 00:06:56.324 user 0m0.137s 00:06:56.324 sys 0m0.034s 00:06:56.324 19:07:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.324 19:07:32 -- common/autotest_common.sh@10 -- # set +x 00:06:56.324 ************************************ 00:06:56.324 END TEST bdev_qos_iops 00:06:56.324 ************************************ 00:06:56.324 19:07:32 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:06:56.324 19:07:32 -- bdev/blockdev.sh@373 -- # local 
limit_type=BANDWIDTH 00:06:56.324 19:07:32 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:06:56.324 19:07:32 -- bdev/blockdev.sh@375 -- # local iostat_result 00:06:56.324 19:07:32 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:56.324 19:07:32 -- bdev/blockdev.sh@376 -- # grep Null_1 00:06:56.324 19:07:32 -- bdev/blockdev.sh@376 -- # tail -1 00:07:01.594 19:07:38 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 670867.49 2683469.97 0.00 0.00 2897920.00 0.00 0.00 ' 00:07:01.594 19:07:38 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:07:01.594 19:07:38 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:07:01.594 19:07:38 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:07:01.594 19:07:38 -- bdev/blockdev.sh@380 -- # iostat_result=2897920.00 00:07:01.594 19:07:38 -- bdev/blockdev.sh@383 -- # echo 2897920 00:07:01.594 19:07:38 -- bdev/blockdev.sh@425 -- # bw_limit=2897920 00:07:01.594 19:07:38 -- bdev/blockdev.sh@426 -- # bw_limit=283 00:07:01.594 19:07:38 -- bdev/blockdev.sh@427 -- # '[' 283 -lt 2 ']' 00:07:01.594 19:07:38 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 283 Null_1 00:07:01.594 19:07:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.594 19:07:38 -- common/autotest_common.sh@10 -- # set +x 00:07:01.594 19:07:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.594 19:07:38 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 283 BANDWIDTH Null_1 00:07:01.594 19:07:38 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:07:01.594 19:07:38 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:01.594 19:07:38 -- common/autotest_common.sh@10 -- # set +x 00:07:01.594 ************************************ 00:07:01.594 START TEST bdev_qos_bw 00:07:01.594 ************************************ 00:07:01.594 19:07:38 -- common/autotest_common.sh@1102 -- # run_qos_test 283 BANDWIDTH Null_1 00:07:01.594 19:07:38 -- bdev/blockdev.sh@387 -- # local qos_limit=283 00:07:01.594 19:07:38 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:07:01.594 19:07:38 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:07:01.594 19:07:38 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:07:01.594 19:07:38 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:07:01.594 19:07:38 -- bdev/blockdev.sh@375 -- # local iostat_result 00:07:01.594 19:07:38 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:07:01.594 19:07:38 -- bdev/blockdev.sh@376 -- # grep Null_1 00:07:01.594 19:07:38 -- bdev/blockdev.sh@376 -- # tail -1 00:07:06.864 19:07:43 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 72442.61 289770.44 0.00 0.00 312976.00 0.00 0.00 ' 00:07:06.864 19:07:43 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:07:06.864 19:07:43 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:07:06.864 19:07:43 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:07:06.864 19:07:43 -- bdev/blockdev.sh@380 -- # iostat_result=312976.00 00:07:06.864 19:07:43 -- bdev/blockdev.sh@383 -- # echo 312976 00:07:06.864 19:07:43 -- bdev/blockdev.sh@390 -- # qos_result=312976 00:07:06.865 19:07:43 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:07:06.865 19:07:43 -- bdev/blockdev.sh@392 -- # qos_limit=289792 00:07:06.865 19:07:43 -- bdev/blockdev.sh@394 -- # lower_limit=260812 00:07:06.865 19:07:43 -- bdev/blockdev.sh@395 -- # upper_limit=318771 00:07:06.865 19:07:43 -- bdev/blockdev.sh@398 -- # '[' 
312976 -lt 260812 ']' 00:07:06.865 19:07:43 -- bdev/blockdev.sh@398 -- # '[' 312976 -gt 318771 ']' 00:07:06.865 00:07:06.865 real 0m5.530s 00:07:06.865 user 0m0.130s 00:07:06.865 sys 0m0.032s 00:07:06.865 19:07:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.865 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:07:06.865 ************************************ 00:07:06.865 END TEST bdev_qos_bw 00:07:06.865 ************************************ 00:07:06.865 19:07:43 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:07:06.865 19:07:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:06.865 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:07:06.865 19:07:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:06.865 19:07:43 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:07:06.865 19:07:43 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:07:06.865 19:07:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:06.865 19:07:43 -- common/autotest_common.sh@10 -- # set +x 00:07:06.865 ************************************ 00:07:06.865 START TEST bdev_qos_ro_bw 00:07:06.865 ************************************ 00:07:06.865 19:07:43 -- common/autotest_common.sh@1102 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:07:06.865 19:07:43 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:07:06.865 19:07:43 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:07:06.865 19:07:43 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:07:06.865 19:07:43 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:07:06.865 19:07:43 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:07:06.865 19:07:43 -- bdev/blockdev.sh@375 -- # local iostat_result 00:07:06.865 19:07:43 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:07:06.865 19:07:43 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:07:06.865 19:07:43 -- bdev/blockdev.sh@376 -- # tail -1 00:07:12.139 19:07:49 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.98 2047.93 0.00 0.00 2212.00 0.00 0.00 ' 00:07:12.139 19:07:49 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:07:12.139 19:07:49 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:07:12.139 19:07:49 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:07:12.139 19:07:49 -- bdev/blockdev.sh@380 -- # iostat_result=2212.00 00:07:12.139 19:07:49 -- bdev/blockdev.sh@383 -- # echo 2212 00:07:12.139 19:07:49 -- bdev/blockdev.sh@390 -- # qos_result=2212 00:07:12.139 19:07:49 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:07:12.139 19:07:49 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:07:12.139 19:07:49 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:07:12.139 19:07:49 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:07:12.139 19:07:49 -- bdev/blockdev.sh@398 -- # '[' 2212 -lt 1843 ']' 00:07:12.139 ************************************ 00:07:12.139 END TEST bdev_qos_ro_bw 00:07:12.139 ************************************ 00:07:12.139 19:07:49 -- bdev/blockdev.sh@398 -- # '[' 2212 -gt 2252 ']' 00:07:12.139 00:07:12.139 real 0m5.540s 00:07:12.139 user 0m0.137s 00:07:12.139 sys 0m0.023s 00:07:12.139 19:07:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.139 19:07:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.139 19:07:49 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:07:12.139 19:07:49 -- common/autotest_common.sh@549 -- # xtrace_disable 
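Note: the pass/fail rule run_qos_test applies to each measurement, already visible in the IOPS and bandwidth rounds above and about to appear for this 2 MiB/s read-only cap, is a plain +/-10% window around the configured limit. A sketch of that comparison, with variable names following the script's own trace:
  # sketch of the run_qos_test acceptance window (values in IOPS or KiB/s, whichever was measured)
  lower_limit=$(( qos_limit * 9 / 10 ))      # e.g. 2048 KiB/s -> 1843
  upper_limit=$(( qos_limit * 11 / 10 ))     # e.g. 2048 KiB/s -> 2252
  if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
      return 1    # throttling landed outside the +/-10% window
  fi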
00:07:12.139 19:07:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.397 19:07:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.397 19:07:49 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:07:12.397 19:07:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.397 19:07:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.655 00:07:12.655 Latency(us) 00:07:12.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.655 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:07:12.655 Malloc_0 : 28.03 225835.91 882.17 0.00 0.00 1123.34 335.48 501318.87 00:07:12.655 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:07:12.655 Null_1 : 28.07 397259.14 1551.79 0.00 0.00 644.20 48.27 40195.39 00:07:12.655 =================================================================================================================== 00:07:12.655 Total : 623095.05 2433.97 0.00 0.00 817.68 48.27 501318.87 00:07:12.655 0 00:07:12.655 19:07:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.655 19:07:49 -- bdev/blockdev.sh@459 -- # killprocess 48429 00:07:12.655 19:07:49 -- common/autotest_common.sh@924 -- # '[' -z 48429 ']' 00:07:12.655 19:07:49 -- common/autotest_common.sh@928 -- # kill -0 48429 00:07:12.655 19:07:49 -- common/autotest_common.sh@929 -- # uname 00:07:12.655 19:07:49 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:07:12.655 19:07:49 -- common/autotest_common.sh@932 -- # ps -c -o command 48429 00:07:12.655 19:07:49 -- common/autotest_common.sh@932 -- # tail -1 00:07:12.655 19:07:49 -- common/autotest_common.sh@932 -- # process_name=bdevperf 00:07:12.655 killing process with pid 48429 00:07:12.655 19:07:49 -- common/autotest_common.sh@934 -- # '[' bdevperf = sudo ']' 00:07:12.655 19:07:49 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 48429' 00:07:12.655 19:07:49 -- common/autotest_common.sh@943 -- # kill 48429 00:07:12.655 Received shutdown signal, test time was about 28.093324 seconds 00:07:12.655 00:07:12.655 Latency(us) 00:07:12.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.655 =================================================================================================================== 00:07:12.655 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:12.655 19:07:49 -- common/autotest_common.sh@948 -- # wait 48429 00:07:12.913 19:07:50 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:07:12.913 00:07:12.913 real 0m29.558s 00:07:12.913 user 0m30.016s 00:07:12.913 sys 0m1.097s 00:07:12.913 ************************************ 00:07:12.913 END TEST bdev_qos 00:07:12.913 ************************************ 00:07:12.913 19:07:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.913 19:07:50 -- common/autotest_common.sh@10 -- # set +x 00:07:12.913 19:07:50 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:07:12.913 19:07:50 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:12.913 19:07:50 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:12.913 19:07:50 -- common/autotest_common.sh@10 -- # set +x 00:07:12.913 ************************************ 00:07:12.913 START TEST bdev_qd_sampling 00:07:12.913 ************************************ 00:07:12.913 19:07:50 -- common/autotest_common.sh@1102 -- # qd_sampling_test_suite '' 00:07:12.913 19:07:50 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:07:12.913 19:07:50 
-- bdev/blockdev.sh@539 -- # QD_PID=48542 00:07:12.914 Process bdev QD sampling period testing pid: 48542 00:07:12.914 19:07:50 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 48542' 00:07:12.914 19:07:50 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:07:12.914 19:07:50 -- bdev/blockdev.sh@542 -- # waitforlisten 48542 00:07:12.914 19:07:50 -- bdev/blockdev.sh@538 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:07:12.914 19:07:50 -- common/autotest_common.sh@817 -- # '[' -z 48542 ']' 00:07:12.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.914 19:07:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.914 19:07:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:12.914 19:07:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.914 19:07:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:12.914 19:07:50 -- common/autotest_common.sh@10 -- # set +x 00:07:12.914 [2024-02-14 19:07:50.130207] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:12.914 [2024-02-14 19:07:50.130548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:13.849 EAL: TSC is not safe to use in SMP mode 00:07:13.849 EAL: TSC is not invariant 00:07:13.849 [2024-02-14 19:07:50.925256] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.849 [2024-02-14 19:07:51.045909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.849 [2024-02-14 19:07:51.045905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.849 19:07:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:13.849 19:07:51 -- common/autotest_common.sh@850 -- # return 0 00:07:13.849 19:07:51 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:07:13.849 19:07:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.849 19:07:51 -- common/autotest_common.sh@10 -- # set +x 00:07:13.849 Malloc_QD 00:07:13.849 19:07:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.849 19:07:51 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:07:13.849 19:07:51 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_QD 00:07:13.849 19:07:51 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:07:13.849 19:07:51 -- common/autotest_common.sh@887 -- # local i 00:07:13.849 19:07:51 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:07:13.849 19:07:51 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:07:13.849 19:07:51 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:07:13.849 19:07:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.849 19:07:51 -- common/autotest_common.sh@10 -- # set +x 00:07:13.849 19:07:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.849 19:07:51 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:07:13.849 19:07:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:13.849 19:07:51 -- common/autotest_common.sh@10 -- # set +x 00:07:13.849 [ 00:07:13.849 { 00:07:13.849 "name": "Malloc_QD", 00:07:13.849 "aliases": [ 00:07:13.849 "59641ab7-cb6c-11ee-af6b-4feeebbbadda" 
00:07:13.849 ], 00:07:13.849 "product_name": "Malloc disk", 00:07:13.849 "block_size": 512, 00:07:13.849 "num_blocks": 262144, 00:07:13.849 "uuid": "59641ab7-cb6c-11ee-af6b-4feeebbbadda", 00:07:13.849 "assigned_rate_limits": { 00:07:13.849 "rw_ios_per_sec": 0, 00:07:13.849 "rw_mbytes_per_sec": 0, 00:07:13.849 "r_mbytes_per_sec": 0, 00:07:13.849 "w_mbytes_per_sec": 0 00:07:13.849 }, 00:07:13.849 "claimed": false, 00:07:13.849 "zoned": false, 00:07:13.849 "supported_io_types": { 00:07:13.849 "read": true, 00:07:13.849 "write": true, 00:07:13.849 "unmap": true, 00:07:13.849 "write_zeroes": true, 00:07:13.849 "flush": true, 00:07:13.849 "reset": true, 00:07:13.849 "compare": false, 00:07:13.849 "compare_and_write": false, 00:07:13.849 "abort": true, 00:07:13.849 "nvme_admin": false, 00:07:13.849 "nvme_io": false 00:07:13.849 }, 00:07:13.849 "memory_domains": [ 00:07:13.849 { 00:07:13.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.849 "dma_device_type": 2 00:07:13.849 } 00:07:13.849 ], 00:07:13.849 "driver_specific": {} 00:07:13.849 } 00:07:13.849 ] 00:07:13.849 19:07:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:13.849 19:07:51 -- common/autotest_common.sh@893 -- # return 0 00:07:13.849 19:07:51 -- bdev/blockdev.sh@548 -- # sleep 2 00:07:13.849 19:07:51 -- bdev/blockdev.sh@547 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:14.108 Running I/O for 5 seconds... 00:07:16.011 19:07:53 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:07:16.011 19:07:53 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:07:16.011 19:07:53 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:07:16.011 19:07:53 -- bdev/blockdev.sh@519 -- # local iostats 00:07:16.011 19:07:53 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:07:16.011 19:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.011 19:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:16.011 19:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.011 19:07:53 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:07:16.011 19:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.011 19:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:16.011 19:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.011 19:07:53 -- bdev/blockdev.sh@523 -- # iostats='{ 00:07:16.011 "tick_rate": 2100001353, 00:07:16.011 "ticks": 749882285114, 00:07:16.011 "bdevs": [ 00:07:16.011 { 00:07:16.011 "name": "Malloc_QD", 00:07:16.011 "bytes_read": 11944366592, 00:07:16.011 "num_read_ops": 2916099, 00:07:16.011 "bytes_written": 0, 00:07:16.011 "num_write_ops": 0, 00:07:16.011 "bytes_unmapped": 0, 00:07:16.011 "num_unmap_ops": 0, 00:07:16.011 "bytes_copied": 0, 00:07:16.011 "num_copy_ops": 0, 00:07:16.011 "read_latency_ticks": 2171905832914, 00:07:16.011 "max_read_latency_ticks": 2566640, 00:07:16.011 "min_read_latency_ticks": 41572, 00:07:16.011 "write_latency_ticks": 0, 00:07:16.011 "max_write_latency_ticks": 0, 00:07:16.011 "min_write_latency_ticks": 0, 00:07:16.011 "unmap_latency_ticks": 0, 00:07:16.011 "max_unmap_latency_ticks": 0, 00:07:16.011 "min_unmap_latency_ticks": 0, 00:07:16.011 "copy_latency_ticks": 0, 00:07:16.011 "max_copy_latency_ticks": 0, 00:07:16.011 "min_copy_latency_ticks": 0, 00:07:16.011 "io_error": {}, 00:07:16.011 "queue_depth_polling_period": 10, 00:07:16.011 "queue_depth": 512, 00:07:16.011 "io_time": 350, 00:07:16.011 "weighted_io_time": 189440 
00:07:16.011 } 00:07:16.011 ] 00:07:16.011 }' 00:07:16.011 19:07:53 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:07:16.011 19:07:53 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:07:16.011 19:07:53 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:07:16.011 19:07:53 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:07:16.011 19:07:53 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:07:16.011 19:07:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.011 19:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:16.011 00:07:16.011 Latency(us) 00:07:16.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.011 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:07:16.011 Malloc_QD : 2.05 718979.48 2808.51 0.00 0.00 355.77 56.81 667.06 00:07:16.011 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:07:16.011 Malloc_QD : 2.05 723481.08 2826.10 0.00 0.00 353.55 67.78 1224.90 00:07:16.011 =================================================================================================================== 00:07:16.011 Total : 1442460.56 5634.61 0.00 0.00 354.66 56.81 1224.90 00:07:16.011 0 00:07:16.011 19:07:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.011 19:07:53 -- bdev/blockdev.sh@552 -- # killprocess 48542 00:07:16.011 19:07:53 -- common/autotest_common.sh@924 -- # '[' -z 48542 ']' 00:07:16.011 19:07:53 -- common/autotest_common.sh@928 -- # kill -0 48542 00:07:16.011 19:07:53 -- common/autotest_common.sh@929 -- # uname 00:07:16.011 19:07:53 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:07:16.011 19:07:53 -- common/autotest_common.sh@932 -- # ps -c -o command 48542 00:07:16.011 19:07:53 -- common/autotest_common.sh@932 -- # tail -1 00:07:16.011 19:07:53 -- common/autotest_common.sh@932 -- # process_name=bdevperf 00:07:16.011 killing process with pid 48542 00:07:16.011 19:07:53 -- common/autotest_common.sh@934 -- # '[' bdevperf = sudo ']' 00:07:16.011 19:07:53 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 48542' 00:07:16.011 19:07:53 -- common/autotest_common.sh@943 -- # kill 48542 00:07:16.011 Received shutdown signal, test time was about 2.103055 seconds 00:07:16.011 00:07:16.011 Latency(us) 00:07:16.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.011 =================================================================================================================== 00:07:16.011 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:16.011 19:07:53 -- common/autotest_common.sh@948 -- # wait 48542 00:07:16.269 19:07:53 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:07:16.269 00:07:16.269 real 0m3.528s 00:07:16.269 user 0m5.669s 00:07:16.269 sys 0m0.985s 00:07:16.269 19:07:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.269 19:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:16.269 ************************************ 00:07:16.269 END TEST bdev_qd_sampling 00:07:16.269 ************************************ 00:07:16.527 19:07:53 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:07:16.527 19:07:53 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:16.527 19:07:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:16.527 19:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:16.527 ************************************ 00:07:16.527 START TEST bdev_error 00:07:16.527 
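The qd-sampling suite that just finished (pid 48542) boils down to one check: set a 10 ms queue-depth sampling period on Malloc_QD, drive random reads from both cores, then read the stats back and require queue_depth_polling_period to be non-null and equal to 10; the populated io_time/weighted_io_time counters above confirm sampling was active during the run. A minimal sketch of that round trip, assuming rpc.py is reachable on PATH (the log invokes it via the full /usr/home/vagrant/spdk_repo path):

  rpc.py bdev_set_qd_sampling_period Malloc_QD 10
  rpc.py bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period'  # expected: 10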
************************************ 00:07:16.527 19:07:53 -- common/autotest_common.sh@1102 -- # error_test_suite '' 00:07:16.527 19:07:53 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:07:16.527 19:07:53 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:07:16.527 19:07:53 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:07:16.527 19:07:53 -- bdev/blockdev.sh@470 -- # ERR_PID=48573 00:07:16.527 19:07:53 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 48573' 00:07:16.527 Process error testing pid: 48573 00:07:16.527 19:07:53 -- bdev/blockdev.sh@472 -- # waitforlisten 48573 00:07:16.527 19:07:53 -- common/autotest_common.sh@817 -- # '[' -z 48573 ']' 00:07:16.527 19:07:53 -- bdev/blockdev.sh@469 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:07:16.527 19:07:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.527 19:07:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:16.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.527 19:07:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.527 19:07:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:16.528 19:07:53 -- common/autotest_common.sh@10 -- # set +x 00:07:16.528 [2024-02-14 19:07:53.701341] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:16.528 [2024-02-14 19:07:53.701639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:17.095 EAL: TSC is not safe to use in SMP mode 00:07:17.095 EAL: TSC is not invariant 00:07:17.095 [2024-02-14 19:07:54.473175] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.352 [2024-02-14 19:07:54.603002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.611 19:07:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:17.611 19:07:54 -- common/autotest_common.sh@850 -- # return 0 00:07:17.611 19:07:54 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:07:17.611 19:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.611 19:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.611 Dev_1 00:07:17.611 19:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.611 19:07:54 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:07:17.611 19:07:54 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1 00:07:17.611 19:07:54 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:07:17.611 19:07:54 -- common/autotest_common.sh@887 -- # local i 00:07:17.611 19:07:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:07:17.611 19:07:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:07:17.611 19:07:54 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:07:17.611 19:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.611 19:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.611 19:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.611 19:07:54 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:07:17.611 19:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.611 19:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.611 [ 00:07:17.611 { 00:07:17.611 "name": "Dev_1", 00:07:17.611 "aliases": [ 
00:07:17.611 "5b8c2592-cb6c-11ee-af6b-4feeebbbadda" 00:07:17.611 ], 00:07:17.611 "product_name": "Malloc disk", 00:07:17.611 "block_size": 512, 00:07:17.611 "num_blocks": 262144, 00:07:17.611 "uuid": "5b8c2592-cb6c-11ee-af6b-4feeebbbadda", 00:07:17.611 "assigned_rate_limits": { 00:07:17.611 "rw_ios_per_sec": 0, 00:07:17.611 "rw_mbytes_per_sec": 0, 00:07:17.611 "r_mbytes_per_sec": 0, 00:07:17.611 "w_mbytes_per_sec": 0 00:07:17.611 }, 00:07:17.611 "claimed": false, 00:07:17.611 "zoned": false, 00:07:17.611 "supported_io_types": { 00:07:17.611 "read": true, 00:07:17.611 "write": true, 00:07:17.611 "unmap": true, 00:07:17.611 "write_zeroes": true, 00:07:17.611 "flush": true, 00:07:17.611 "reset": true, 00:07:17.611 "compare": false, 00:07:17.611 "compare_and_write": false, 00:07:17.611 "abort": true, 00:07:17.611 "nvme_admin": false, 00:07:17.611 "nvme_io": false 00:07:17.611 }, 00:07:17.611 "memory_domains": [ 00:07:17.611 { 00:07:17.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.611 "dma_device_type": 2 00:07:17.611 } 00:07:17.611 ], 00:07:17.611 "driver_specific": {} 00:07:17.611 } 00:07:17.611 ] 00:07:17.611 19:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.611 19:07:54 -- common/autotest_common.sh@893 -- # return 0 00:07:17.611 19:07:54 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:07:17.611 19:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.611 19:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.611 true 00:07:17.611 19:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.611 19:07:54 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:07:17.611 19:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.611 19:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.611 Dev_2 00:07:17.611 19:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.611 19:07:54 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:07:17.611 19:07:54 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2 00:07:17.611 19:07:54 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:07:17.611 19:07:54 -- common/autotest_common.sh@887 -- # local i 00:07:17.611 19:07:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:07:17.611 19:07:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:07:17.611 19:07:54 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:07:17.611 19:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.611 19:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.611 19:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.611 19:07:54 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:07:17.611 19:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.611 19:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.611 [ 00:07:17.611 { 00:07:17.611 "name": "Dev_2", 00:07:17.611 "aliases": [ 00:07:17.611 "5b92dba0-cb6c-11ee-af6b-4feeebbbadda" 00:07:17.611 ], 00:07:17.611 "product_name": "Malloc disk", 00:07:17.611 "block_size": 512, 00:07:17.611 "num_blocks": 262144, 00:07:17.611 "uuid": "5b92dba0-cb6c-11ee-af6b-4feeebbbadda", 00:07:17.611 "assigned_rate_limits": { 00:07:17.611 "rw_ios_per_sec": 0, 00:07:17.611 "rw_mbytes_per_sec": 0, 00:07:17.611 "r_mbytes_per_sec": 0, 00:07:17.611 "w_mbytes_per_sec": 0 00:07:17.611 }, 00:07:17.611 "claimed": false, 00:07:17.611 "zoned": false, 00:07:17.611 "supported_io_types": { 00:07:17.611 "read": true, 
00:07:17.611 "write": true, 00:07:17.611 "unmap": true, 00:07:17.611 "write_zeroes": true, 00:07:17.611 "flush": true, 00:07:17.611 "reset": true, 00:07:17.611 "compare": false, 00:07:17.611 "compare_and_write": false, 00:07:17.611 "abort": true, 00:07:17.611 "nvme_admin": false, 00:07:17.611 "nvme_io": false 00:07:17.611 }, 00:07:17.611 "memory_domains": [ 00:07:17.611 { 00:07:17.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.611 "dma_device_type": 2 00:07:17.611 } 00:07:17.611 ], 00:07:17.611 "driver_specific": {} 00:07:17.611 } 00:07:17.611 ] 00:07:17.611 19:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.611 19:07:54 -- common/autotest_common.sh@893 -- # return 0 00:07:17.611 19:07:54 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:07:17.611 19:07:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.611 19:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.611 19:07:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.611 19:07:54 -- bdev/blockdev.sh@482 -- # sleep 1 00:07:17.611 19:07:54 -- bdev/blockdev.sh@481 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:07:17.611 Running I/O for 5 seconds... 00:07:18.547 19:07:55 -- bdev/blockdev.sh@485 -- # kill -0 48573 00:07:18.547 Process is existed as continue on error is set. Pid: 48573 00:07:18.547 19:07:55 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 48573' 00:07:18.547 19:07:55 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:07:18.547 19:07:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.547 19:07:55 -- common/autotest_common.sh@10 -- # set +x 00:07:18.547 19:07:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.547 19:07:55 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:07:18.547 19:07:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.547 19:07:55 -- common/autotest_common.sh@10 -- # set +x 00:07:18.806 19:07:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.806 19:07:55 -- bdev/blockdev.sh@495 -- # sleep 5 00:07:18.806 Timeout while waiting for response: 00:07:18.806 00:07:18.806 00:07:23.001 00:07:23.001 Latency(us) 00:07:23.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.001 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:23.001 EE_Dev_1 : 0.96 388315.08 1516.86 5.21 0.00 41.03 18.90 120.44 00:07:23.001 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:23.001 Dev_2 : 5.00 785818.51 3069.60 0.00 0.00 20.19 5.61 35951.15 00:07:23.001 =================================================================================================================== 00:07:23.001 Total : 1174133.60 4586.46 5.21 0.00 21.99 5.61 35951.15 00:07:23.939 19:08:01 -- bdev/blockdev.sh@497 -- # killprocess 48573 00:07:23.939 19:08:01 -- common/autotest_common.sh@924 -- # '[' -z 48573 ']' 00:07:23.939 19:08:01 -- common/autotest_common.sh@928 -- # kill -0 48573 00:07:23.939 19:08:01 -- common/autotest_common.sh@929 -- # uname 00:07:23.939 19:08:01 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:07:23.939 19:08:01 -- common/autotest_common.sh@932 -- # ps -c -o command 48573 00:07:23.939 19:08:01 -- common/autotest_common.sh@932 -- # tail -1 00:07:23.939 19:08:01 -- common/autotest_common.sh@932 -- # process_name=bdevperf 00:07:23.939 19:08:01 -- common/autotest_common.sh@934 
-- # '[' bdevperf = sudo ']' 00:07:23.939 killing process with pid 48573 00:07:23.939 19:08:01 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 48573' 00:07:23.939 19:08:01 -- common/autotest_common.sh@943 -- # kill 48573 00:07:23.939 Received shutdown signal, test time was about 5.000000 seconds 00:07:23.939 00:07:23.939 Latency(us) 00:07:23.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.939 =================================================================================================================== 00:07:23.939 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:23.939 19:08:01 -- common/autotest_common.sh@948 -- # wait 48573 00:07:24.198 19:08:01 -- bdev/blockdev.sh@501 -- # ERR_PID=48585 00:07:24.198 Process error testing pid: 48585 00:07:24.198 19:08:01 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 48585' 00:07:24.198 19:08:01 -- bdev/blockdev.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:07:24.198 19:08:01 -- bdev/blockdev.sh@503 -- # waitforlisten 48585 00:07:24.198 19:08:01 -- common/autotest_common.sh@817 -- # '[' -z 48585 ']' 00:07:24.198 19:08:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.198 19:08:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:24.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.198 19:08:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.198 19:08:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:24.198 19:08:01 -- common/autotest_common.sh@10 -- # set +x 00:07:24.198 [2024-02-14 19:08:01.600894] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:07:24.198 [2024-02-14 19:08:01.601466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:25.135 EAL: TSC is not safe to use in SMP mode 00:07:25.136 EAL: TSC is not invariant 00:07:25.136 [2024-02-14 19:08:02.339316] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.136 [2024-02-14 19:08:02.455480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.418 19:08:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:25.418 19:08:02 -- common/autotest_common.sh@850 -- # return 0 00:07:25.418 19:08:02 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:07:25.418 19:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.418 19:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.418 Dev_1 00:07:25.418 19:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.418 19:08:02 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:07:25.418 19:08:02 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1 00:07:25.418 19:08:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:07:25.418 19:08:02 -- common/autotest_common.sh@887 -- # local i 00:07:25.418 19:08:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:07:25.418 19:08:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:07:25.418 19:08:02 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:07:25.418 19:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.418 19:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.418 19:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.418 19:08:02 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:07:25.418 19:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.418 19:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.418 [ 00:07:25.418 { 00:07:25.418 "name": "Dev_1", 00:07:25.418 "aliases": [ 00:07:25.418 "6033fcbb-cb6c-11ee-af6b-4feeebbbadda" 00:07:25.418 ], 00:07:25.418 "product_name": "Malloc disk", 00:07:25.418 "block_size": 512, 00:07:25.418 "num_blocks": 262144, 00:07:25.418 "uuid": "6033fcbb-cb6c-11ee-af6b-4feeebbbadda", 00:07:25.418 "assigned_rate_limits": { 00:07:25.418 "rw_ios_per_sec": 0, 00:07:25.418 "rw_mbytes_per_sec": 0, 00:07:25.418 "r_mbytes_per_sec": 0, 00:07:25.418 "w_mbytes_per_sec": 0 00:07:25.418 }, 00:07:25.418 "claimed": false, 00:07:25.418 "zoned": false, 00:07:25.418 "supported_io_types": { 00:07:25.418 "read": true, 00:07:25.418 "write": true, 00:07:25.418 "unmap": true, 00:07:25.418 "write_zeroes": true, 00:07:25.418 "flush": true, 00:07:25.418 "reset": true, 00:07:25.418 "compare": false, 00:07:25.418 "compare_and_write": false, 00:07:25.418 "abort": true, 00:07:25.418 "nvme_admin": false, 00:07:25.418 "nvme_io": false 00:07:25.418 }, 00:07:25.418 "memory_domains": [ 00:07:25.418 { 00:07:25.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.418 "dma_device_type": 2 00:07:25.418 } 00:07:25.418 ], 00:07:25.418 "driver_specific": {} 00:07:25.418 } 00:07:25.418 ] 00:07:25.418 19:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.418 19:08:02 -- common/autotest_common.sh@893 -- # return 0 00:07:25.418 19:08:02 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:07:25.418 19:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.418 19:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.418 true 
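The error suite runs bdevperf twice against the same topology: Dev_1 wrapped by an error-injection bdev (EE_Dev_1) plus a plain Dev_2. The first instance (pid 48573) was started with -f, so the five failures injected on EE_Dev_1 were tolerated and the job kept running on Dev_2 for the full 5 seconds, which is what the EE_Dev_1/Dev_2 rows of the earlier latency table show. This second instance (pid 48585) omits -f, so once the same injection is armed the suite expects perform_tests to abort with the 'Operation not permitted' JSON-RPC error seen further down. A sketch of the RPC wiring both runs use (rpc.py path shortened; the EE_ prefix on the error bdev is SPDK's naming convention):

  rpc.py bdev_malloc_create -b Dev_1 128 512                # 128 MiB backing bdev, 512 B blocks
  rpc.py bdev_error_create Dev_1                            # stacks error bdev EE_Dev_1 on Dev_1
  rpc.py bdev_malloc_create -b Dev_2 128 512                # second target, no error wrapper
  rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5  # next 5 I/Os on EE_Dev_1 fail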
00:07:25.418 19:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.418 19:08:02 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:07:25.418 19:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.418 19:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.418 Dev_2 00:07:25.418 19:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.418 19:08:02 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:07:25.418 19:08:02 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2 00:07:25.418 19:08:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:07:25.418 19:08:02 -- common/autotest_common.sh@887 -- # local i 00:07:25.418 19:08:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:07:25.418 19:08:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:07:25.418 19:08:02 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:07:25.418 19:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.418 19:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.418 19:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.418 19:08:02 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:07:25.418 19:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.418 19:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.418 [ 00:07:25.418 { 00:07:25.418 "name": "Dev_2", 00:07:25.418 "aliases": [ 00:07:25.418 "60397a46-cb6c-11ee-af6b-4feeebbbadda" 00:07:25.418 ], 00:07:25.418 "product_name": "Malloc disk", 00:07:25.418 "block_size": 512, 00:07:25.418 "num_blocks": 262144, 00:07:25.418 "uuid": "60397a46-cb6c-11ee-af6b-4feeebbbadda", 00:07:25.418 "assigned_rate_limits": { 00:07:25.419 "rw_ios_per_sec": 0, 00:07:25.419 "rw_mbytes_per_sec": 0, 00:07:25.419 "r_mbytes_per_sec": 0, 00:07:25.419 "w_mbytes_per_sec": 0 00:07:25.419 }, 00:07:25.419 "claimed": false, 00:07:25.419 "zoned": false, 00:07:25.419 "supported_io_types": { 00:07:25.419 "read": true, 00:07:25.419 "write": true, 00:07:25.419 "unmap": true, 00:07:25.419 "write_zeroes": true, 00:07:25.419 "flush": true, 00:07:25.419 "reset": true, 00:07:25.419 "compare": false, 00:07:25.419 "compare_and_write": false, 00:07:25.419 "abort": true, 00:07:25.419 "nvme_admin": false, 00:07:25.419 "nvme_io": false 00:07:25.419 }, 00:07:25.419 "memory_domains": [ 00:07:25.419 { 00:07:25.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.419 "dma_device_type": 2 00:07:25.419 } 00:07:25.419 ], 00:07:25.419 "driver_specific": {} 00:07:25.419 } 00:07:25.419 ] 00:07:25.419 19:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.419 19:08:02 -- common/autotest_common.sh@893 -- # return 0 00:07:25.419 19:08:02 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:07:25.419 19:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.419 19:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:25.419 19:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.419 19:08:02 -- bdev/blockdev.sh@513 -- # NOT wait 48585 00:07:25.419 19:08:02 -- common/autotest_common.sh@638 -- # local es=0 00:07:25.419 19:08:02 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 48585 00:07:25.419 19:08:02 -- bdev/blockdev.sh@512 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:07:25.419 19:08:02 -- common/autotest_common.sh@626 -- # local arg=wait 00:07:25.419 19:08:02 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:25.419 19:08:02 -- common/autotest_common.sh@630 -- # type -t wait 00:07:25.419 19:08:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:25.419 19:08:02 -- common/autotest_common.sh@641 -- # wait 48585 00:07:25.419 Running I/O for 5 seconds... 00:07:25.419 task offset: 131768 on job bdev=EE_Dev_1 fails 00:07:25.419 00:07:25.419 Latency(us) 00:07:25.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.419 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:25.419 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:07:25.419 EE_Dev_1 : 0.00 165413.53 646.15 37593.98 0.00 63.74 21.21 123.86 00:07:25.419 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:25.419 Dev_2 : 0.00 206451.61 806.45 0.00 0.00 33.74 22.55 44.86 00:07:25.419 =================================================================================================================== 00:07:25.419 Total : 371865.15 1452.60 37593.98 0.00 47.47 21.21 123.86 00:07:25.419 [2024-02-14 19:08:02.801506] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.692 request: 00:07:25.692 { 00:07:25.692 "method": "perform_tests", 00:07:25.692 "req_id": 1 00:07:25.692 } 00:07:25.692 Got JSON-RPC error response 00:07:25.692 response: 00:07:25.692 { 00:07:25.692 "code": -32603, 00:07:25.692 "message": "bdevperf failed with error Operation not permitted" 00:07:25.692 } 00:07:25.951 19:08:03 -- common/autotest_common.sh@641 -- # es=255 00:07:25.951 19:08:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:25.951 19:08:03 -- common/autotest_common.sh@650 -- # es=127 00:07:25.951 19:08:03 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:25.951 19:08:03 -- common/autotest_common.sh@658 -- # es=1 00:07:25.951 19:08:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:25.951 00:07:25.951 real 0m9.418s 00:07:25.951 user 0m8.968s 00:07:25.951 sys 0m1.826s 00:07:25.951 19:08:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.951 19:08:03 -- common/autotest_common.sh@10 -- # set +x 00:07:25.951 ************************************ 00:07:25.951 END TEST bdev_error 00:07:25.951 ************************************ 00:07:25.951 19:08:03 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:07:25.951 19:08:03 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:07:25.951 19:08:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:25.951 19:08:03 -- common/autotest_common.sh@10 -- # set +x 00:07:25.951 ************************************ 00:07:25.951 START TEST bdev_stat 00:07:25.951 ************************************ 00:07:25.951 19:08:03 -- common/autotest_common.sh@1102 -- # stat_test_suite '' 00:07:25.951 19:08:03 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:07:25.951 19:08:03 -- bdev/blockdev.sh@594 -- # STAT_PID=48608 00:07:25.951 19:08:03 -- bdev/blockdev.sh@593 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:07:25.951 Process Bdev IO statistics testing pid: 48608 00:07:25.951 19:08:03 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 48608' 00:07:25.951 19:08:03 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:07:25.951 19:08:03 -- bdev/blockdev.sh@597 -- # waitforlisten 48608 00:07:25.951 19:08:03 -- common/autotest_common.sh@817 -- # 
'[' -z 48608 ']' 00:07:25.951 19:08:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.951 19:08:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:25.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.951 19:08:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.951 19:08:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:25.951 19:08:03 -- common/autotest_common.sh@10 -- # set +x 00:07:25.951 [2024-02-14 19:08:03.161648] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:25.951 [2024-02-14 19:08:03.161840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:26.519 EAL: TSC is not safe to use in SMP mode 00:07:26.519 EAL: TSC is not invariant 00:07:26.519 [2024-02-14 19:08:03.917377] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.778 [2024-02-14 19:08:04.047815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.778 [2024-02-14 19:08:04.047806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.036 19:08:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:27.036 19:08:04 -- common/autotest_common.sh@850 -- # return 0 00:07:27.036 19:08:04 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:07:27.036 19:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.036 19:08:04 -- common/autotest_common.sh@10 -- # set +x 00:07:27.036 Malloc_STAT 00:07:27.036 19:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.036 19:08:04 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:07:27.036 19:08:04 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_STAT 00:07:27.036 19:08:04 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:07:27.036 19:08:04 -- common/autotest_common.sh@887 -- # local i 00:07:27.036 19:08:04 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:07:27.036 19:08:04 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:07:27.036 19:08:04 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:07:27.036 19:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.037 19:08:04 -- common/autotest_common.sh@10 -- # set +x 00:07:27.037 19:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.037 19:08:04 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:07:27.037 19:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.037 19:08:04 -- common/autotest_common.sh@10 -- # set +x 00:07:27.037 [ 00:07:27.037 { 00:07:27.037 "name": "Malloc_STAT", 00:07:27.037 "aliases": [ 00:07:27.037 "612b7cff-cb6c-11ee-af6b-4feeebbbadda" 00:07:27.037 ], 00:07:27.037 "product_name": "Malloc disk", 00:07:27.037 "block_size": 512, 00:07:27.037 "num_blocks": 262144, 00:07:27.037 "uuid": "612b7cff-cb6c-11ee-af6b-4feeebbbadda", 00:07:27.037 "assigned_rate_limits": { 00:07:27.037 "rw_ios_per_sec": 0, 00:07:27.037 "rw_mbytes_per_sec": 0, 00:07:27.037 "r_mbytes_per_sec": 0, 00:07:27.037 "w_mbytes_per_sec": 0 00:07:27.037 }, 00:07:27.037 "claimed": false, 00:07:27.037 "zoned": false, 00:07:27.037 "supported_io_types": { 00:07:27.037 "read": true, 00:07:27.037 "write": true, 00:07:27.037 "unmap": true, 00:07:27.037 "write_zeroes": true, 
00:07:27.037 "flush": true, 00:07:27.037 "reset": true, 00:07:27.037 "compare": false, 00:07:27.037 "compare_and_write": false, 00:07:27.037 "abort": true, 00:07:27.037 "nvme_admin": false, 00:07:27.037 "nvme_io": false 00:07:27.037 }, 00:07:27.037 "memory_domains": [ 00:07:27.037 { 00:07:27.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.037 "dma_device_type": 2 00:07:27.037 } 00:07:27.037 ], 00:07:27.037 "driver_specific": {} 00:07:27.037 } 00:07:27.037 ] 00:07:27.037 19:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.037 19:08:04 -- common/autotest_common.sh@893 -- # return 0 00:07:27.037 19:08:04 -- bdev/blockdev.sh@603 -- # sleep 2 00:07:27.037 19:08:04 -- bdev/blockdev.sh@602 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:27.037 Running I/O for 10 seconds... 00:07:28.940 19:08:06 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:07:28.940 19:08:06 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:07:28.940 19:08:06 -- bdev/blockdev.sh@558 -- # local iostats 00:07:28.940 19:08:06 -- bdev/blockdev.sh@559 -- # local io_count1 00:07:28.940 19:08:06 -- bdev/blockdev.sh@560 -- # local io_count2 00:07:28.940 19:08:06 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:07:28.940 19:08:06 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:07:28.940 19:08:06 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:07:28.940 19:08:06 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:07:28.940 19:08:06 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:07:28.940 19:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.940 19:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:28.940 19:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:28.940 19:08:06 -- bdev/blockdev.sh@566 -- # iostats='{ 00:07:28.940 "tick_rate": 2100001353, 00:07:28.940 "ticks": 777171705380, 00:07:28.940 "bdevs": [ 00:07:28.940 { 00:07:28.940 "name": "Malloc_STAT", 00:07:28.940 "bytes_read": 12758061568, 00:07:28.940 "num_read_ops": 3114755, 00:07:28.940 "bytes_written": 0, 00:07:28.940 "num_write_ops": 0, 00:07:28.940 "bytes_unmapped": 0, 00:07:28.940 "num_unmap_ops": 0, 00:07:28.940 "bytes_copied": 0, 00:07:28.940 "num_copy_ops": 0, 00:07:28.940 "read_latency_ticks": 2095454702964, 00:07:28.940 "max_read_latency_ticks": 1332998, 00:07:28.940 "min_read_latency_ticks": 39352, 00:07:28.940 "write_latency_ticks": 0, 00:07:28.940 "max_write_latency_ticks": 0, 00:07:28.940 "min_write_latency_ticks": 0, 00:07:28.940 "unmap_latency_ticks": 0, 00:07:28.940 "max_unmap_latency_ticks": 0, 00:07:28.940 "min_unmap_latency_ticks": 0, 00:07:28.940 "copy_latency_ticks": 0, 00:07:28.940 "max_copy_latency_ticks": 0, 00:07:28.940 "min_copy_latency_ticks": 0, 00:07:28.940 "io_error": {} 00:07:28.940 } 00:07:28.940 ] 00:07:28.940 }' 00:07:28.940 19:08:06 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:07:28.940 19:08:06 -- bdev/blockdev.sh@567 -- # io_count1=3114755 00:07:28.940 19:08:06 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:07:28.940 19:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:28.940 19:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.198 19:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.198 19:08:06 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:07:29.198 "tick_rate": 2100001353, 00:07:29.198 "ticks": 777232085624, 00:07:29.198 "name": "Malloc_STAT", 
00:07:29.198 "channels": [ 00:07:29.198 { 00:07:29.198 "thread_id": 2, 00:07:29.198 "bytes_read": 6490685440, 00:07:29.198 "num_read_ops": 1584640, 00:07:29.198 "bytes_written": 0, 00:07:29.198 "num_write_ops": 0, 00:07:29.198 "bytes_unmapped": 0, 00:07:29.198 "num_unmap_ops": 0, 00:07:29.198 "bytes_copied": 0, 00:07:29.198 "num_copy_ops": 0, 00:07:29.198 "read_latency_ticks": 1063081825192, 00:07:29.198 "max_read_latency_ticks": 1332998, 00:07:29.198 "min_read_latency_ticks": 591092, 00:07:29.198 "write_latency_ticks": 0, 00:07:29.198 "max_write_latency_ticks": 0, 00:07:29.198 "min_write_latency_ticks": 0, 00:07:29.198 "unmap_latency_ticks": 0, 00:07:29.198 "max_unmap_latency_ticks": 0, 00:07:29.198 "min_unmap_latency_ticks": 0, 00:07:29.198 "copy_latency_ticks": 0, 00:07:29.198 "max_copy_latency_ticks": 0, 00:07:29.198 "min_copy_latency_ticks": 0 00:07:29.198 }, 00:07:29.198 { 00:07:29.198 "thread_id": 3, 00:07:29.198 "bytes_read": 6447693824, 00:07:29.198 "num_read_ops": 1574144, 00:07:29.198 "bytes_written": 0, 00:07:29.198 "num_write_ops": 0, 00:07:29.198 "bytes_unmapped": 0, 00:07:29.198 "num_unmap_ops": 0, 00:07:29.198 "bytes_copied": 0, 00:07:29.198 "num_copy_ops": 0, 00:07:29.198 "read_latency_ticks": 1063134603780, 00:07:29.198 "max_read_latency_ticks": 1217706, 00:07:29.198 "min_read_latency_ticks": 594230, 00:07:29.198 "write_latency_ticks": 0, 00:07:29.198 "max_write_latency_ticks": 0, 00:07:29.198 "min_write_latency_ticks": 0, 00:07:29.198 "unmap_latency_ticks": 0, 00:07:29.198 "max_unmap_latency_ticks": 0, 00:07:29.198 "min_unmap_latency_ticks": 0, 00:07:29.198 "copy_latency_ticks": 0, 00:07:29.198 "max_copy_latency_ticks": 0, 00:07:29.198 "min_copy_latency_ticks": 0 00:07:29.198 } 00:07:29.198 ] 00:07:29.198 }' 00:07:29.198 19:08:06 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:07:29.198 19:08:06 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=1584640 00:07:29.198 19:08:06 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=1584640 00:07:29.198 19:08:06 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:07:29.198 19:08:06 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=1574144 00:07:29.198 19:08:06 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=3158784 00:07:29.198 19:08:06 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:07:29.198 19:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.198 19:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.198 19:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.198 19:08:06 -- bdev/blockdev.sh@575 -- # iostats='{ 00:07:29.198 "tick_rate": 2100001353, 00:07:29.198 "ticks": 777310021404, 00:07:29.198 "bdevs": [ 00:07:29.198 { 00:07:29.198 "name": "Malloc_STAT", 00:07:29.198 "bytes_read": 13167006208, 00:07:29.198 "num_read_ops": 3214595, 00:07:29.198 "bytes_written": 0, 00:07:29.198 "num_write_ops": 0, 00:07:29.198 "bytes_unmapped": 0, 00:07:29.198 "num_unmap_ops": 0, 00:07:29.198 "bytes_copied": 0, 00:07:29.198 "num_copy_ops": 0, 00:07:29.198 "read_latency_ticks": 2166129737016, 00:07:29.198 "max_read_latency_ticks": 1332998, 00:07:29.198 "min_read_latency_ticks": 39352, 00:07:29.198 "write_latency_ticks": 0, 00:07:29.198 "max_write_latency_ticks": 0, 00:07:29.198 "min_write_latency_ticks": 0, 00:07:29.198 "unmap_latency_ticks": 0, 00:07:29.198 "max_unmap_latency_ticks": 0, 00:07:29.198 "min_unmap_latency_ticks": 0, 00:07:29.198 "copy_latency_ticks": 0, 00:07:29.198 "max_copy_latency_ticks": 0, 00:07:29.198 
"min_copy_latency_ticks": 0, 00:07:29.198 "io_error": {} 00:07:29.198 } 00:07:29.198 ] 00:07:29.198 }' 00:07:29.199 19:08:06 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:07:29.199 19:08:06 -- bdev/blockdev.sh@576 -- # io_count2=3214595 00:07:29.199 19:08:06 -- bdev/blockdev.sh@581 -- # '[' 3158784 -lt 3114755 ']' 00:07:29.199 19:08:06 -- bdev/blockdev.sh@581 -- # '[' 3158784 -gt 3214595 ']' 00:07:29.199 19:08:06 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:07:29.199 19:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.199 19:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.199 00:07:29.199 Latency(us) 00:07:29.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.199 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:07:29.199 Malloc_STAT : 2.05 800177.52 3125.69 0.00 0.00 319.69 46.81 635.85 00:07:29.199 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:07:29.199 Malloc_STAT : 2.05 794194.14 3102.32 0.00 0.00 322.09 56.32 581.24 00:07:29.199 =================================================================================================================== 00:07:29.199 Total : 1594371.67 6228.01 0.00 0.00 320.89 46.81 635.85 00:07:29.199 0 00:07:29.199 19:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.199 19:08:06 -- bdev/blockdev.sh@607 -- # killprocess 48608 00:07:29.199 19:08:06 -- common/autotest_common.sh@924 -- # '[' -z 48608 ']' 00:07:29.199 19:08:06 -- common/autotest_common.sh@928 -- # kill -0 48608 00:07:29.199 19:08:06 -- common/autotest_common.sh@929 -- # uname 00:07:29.199 19:08:06 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:07:29.199 19:08:06 -- common/autotest_common.sh@932 -- # ps -c -o command 48608 00:07:29.199 19:08:06 -- common/autotest_common.sh@932 -- # tail -1 00:07:29.199 19:08:06 -- common/autotest_common.sh@932 -- # process_name=bdevperf 00:07:29.199 killing process with pid 48608 00:07:29.199 19:08:06 -- common/autotest_common.sh@934 -- # '[' bdevperf = sudo ']' 00:07:29.199 19:08:06 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 48608' 00:07:29.199 Received shutdown signal, test time was about 2.097875 seconds 00:07:29.199 00:07:29.199 Latency(us) 00:07:29.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.199 =================================================================================================================== 00:07:29.199 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:29.199 19:08:06 -- common/autotest_common.sh@943 -- # kill 48608 00:07:29.199 19:08:06 -- common/autotest_common.sh@948 -- # wait 48608 00:07:29.458 19:08:06 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:07:29.458 00:07:29.458 real 0m3.551s 00:07:29.458 user 0m5.848s 00:07:29.458 sys 0m0.983s 00:07:29.458 19:08:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.458 ************************************ 00:07:29.458 END TEST bdev_stat 00:07:29.458 ************************************ 00:07:29.458 19:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.458 19:08:06 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:07:29.458 19:08:06 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:07:29.458 19:08:06 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:07:29.458 19:08:06 -- bdev/blockdev.sh@809 -- # cleanup 00:07:29.458 19:08:06 -- bdev/blockdev.sh@21 -- # rm -f 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:29.458 19:08:06 -- bdev/blockdev.sh@22 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:29.458 19:08:06 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:07:29.458 19:08:06 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:07:29.458 19:08:06 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:07:29.458 19:08:06 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:07:29.458 00:07:29.458 real 1m36.307s 00:07:29.458 user 4m29.448s 00:07:29.458 sys 0m32.034s 00:07:29.458 19:08:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.458 19:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.458 ************************************ 00:07:29.458 END TEST blockdev_general 00:07:29.458 ************************************ 00:07:29.458 19:08:06 -- spdk/autotest.sh@196 -- # run_test bdev_raid /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:29.458 19:08:06 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:29.458 19:08:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:29.458 19:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.458 ************************************ 00:07:29.458 START TEST bdev_raid 00:07:29.458 ************************************ 00:07:29.458 19:08:06 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:29.717 * Looking for test storage... 00:07:29.717 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:07:29.717 19:08:06 -- bdev/bdev_raid.sh@12 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:29.717 19:08:06 -- bdev/nbd_common.sh@6 -- # set -e 00:07:29.717 19:08:06 -- bdev/bdev_raid.sh@14 -- # rpc_py='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:07:29.717 19:08:06 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:07:29.717 19:08:06 -- bdev/bdev_raid.sh@716 -- # uname -s 00:07:29.717 19:08:06 -- bdev/bdev_raid.sh@716 -- # '[' FreeBSD = Linux ']' 00:07:29.717 19:08:06 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:07:29.717 19:08:06 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:29.717 19:08:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:29.717 19:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.717 ************************************ 00:07:29.717 START TEST raid0_resize_test 00:07:29.717 ************************************ 00:07:29.717 19:08:06 -- common/autotest_common.sh@1102 -- # raid0_resize_test 00:07:29.717 19:08:06 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:07:29.717 19:08:06 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:07:29.717 19:08:06 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:07:29.717 19:08:06 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:07:29.717 19:08:06 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:07:29.717 19:08:06 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:07:29.718 19:08:06 -- bdev/bdev_raid.sh@301 -- # raid_pid=48695 00:07:29.718 Process raid pid: 48695 00:07:29.718 19:08:06 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 48695' 00:07:29.718 19:08:06 -- bdev/bdev_raid.sh@303 -- # waitforlisten 48695 /var/tmp/spdk-raid.sock 00:07:29.718 19:08:06 -- common/autotest_common.sh@817 -- # '[' -z 48695 ']' 00:07:29.718 19:08:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:29.718 19:08:06 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:07:29.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:29.718 19:08:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:29.718 19:08:06 -- bdev/bdev_raid.sh@300 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:29.718 19:08:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:29.718 19:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:29.718 [2024-02-14 19:08:06.984999] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:29.718 [2024-02-14 19:08:06.985344] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:30.656 EAL: TSC is not safe to use in SMP mode 00:07:30.656 EAL: TSC is not invariant 00:07:30.656 [2024-02-14 19:08:07.747845] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.656 [2024-02-14 19:08:07.856633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.656 [2024-02-14 19:08:07.857093] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.656 [2024-02-14 19:08:07.857103] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.656 19:08:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:30.656 19:08:07 -- common/autotest_common.sh@850 -- # return 0 00:07:30.656 19:08:07 -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:07:30.915 Base_1 00:07:30.915 19:08:08 -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:07:31.172 Base_2 00:07:31.172 19:08:08 -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:07:31.429 [2024-02-14 19:08:08.659886] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:31.429 [2024-02-14 19:08:08.660585] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:31.429 [2024-02-14 19:08:08.660620] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a0a3a00 00:07:31.429 [2024-02-14 19:08:08.660625] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:31.429 [2024-02-14 19:08:08.660663] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a106e20 00:07:31.429 [2024-02-14 19:08:08.660736] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a0a3a00 00:07:31.429 [2024-02-14 19:08:08.660743] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x82a0a3a00 00:07:31.429 [2024-02-14 19:08:08.660790] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.429 19:08:08 -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:07:31.687 [2024-02-14 19:08:08.991906] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:31.687 [2024-02-14 19:08:08.991944] bdev_raid.c:2083:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:31.687 true 00:07:31.687 
19:08:09 -- bdev/bdev_raid.sh@314 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:31.687 19:08:09 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:07:31.946 [2024-02-14 19:08:09.227918] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.946 19:08:09 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:07:31.946 19:08:09 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:07:31.946 19:08:09 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:07:31.946 19:08:09 -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:07:32.206 [2024-02-14 19:08:09.439887] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:32.206 [2024-02-14 19:08:09.439911] bdev_raid.c:2083:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:32.206 [2024-02-14 19:08:09.439961] raid0.c: 405:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:07:32.206 [2024-02-14 19:08:09.439972] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:32.206 true 00:07:32.206 19:08:09 -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:32.206 19:08:09 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:07:32.206 [2024-02-14 19:08:09.619905] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.465 19:08:09 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:07:32.465 19:08:09 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:07:32.465 19:08:09 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:07:32.465 19:08:09 -- bdev/bdev_raid.sh@332 -- # killprocess 48695 00:07:32.465 19:08:09 -- common/autotest_common.sh@924 -- # '[' -z 48695 ']' 00:07:32.465 19:08:09 -- common/autotest_common.sh@928 -- # kill -0 48695 00:07:32.465 19:08:09 -- common/autotest_common.sh@929 -- # uname 00:07:32.465 19:08:09 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:07:32.465 19:08:09 -- common/autotest_common.sh@932 -- # ps -c -o command 48695 00:07:32.465 19:08:09 -- common/autotest_common.sh@932 -- # tail -1 00:07:32.465 19:08:09 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:07:32.465 killing process with pid 48695 00:07:32.465 19:08:09 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:07:32.465 19:08:09 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 48695' 00:07:32.465 19:08:09 -- common/autotest_common.sh@943 -- # kill 48695 00:07:32.465 19:08:09 -- common/autotest_common.sh@948 -- # wait 48695 00:07:32.465 [2024-02-14 19:08:09.649199] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:32.466 [2024-02-14 19:08:09.649225] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.466 [2024-02-14 19:08:09.649249] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.466 [2024-02-14 19:08:09.649253] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a0a3a00 name Raid, state offline 00:07:32.466 [2024-02-14 19:08:09.649438] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.466 19:08:09 -- bdev/bdev_raid.sh@334 -- # return 0 00:07:32.466 00:07:32.466 real 0m2.907s 00:07:32.466 user 0m3.913s 00:07:32.466 sys 0m1.098s 
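The real/user/sys block above is the per-test timing for raid0_resize_test, which wraps up just below. Condensed from the traced commands, the test boils down to the following rpc.py sequence against the dedicated raid socket; this is a simplified sketch rather than the verbatim bdev_raid.sh code (the rpc_py shorthand mirrors the rpc_py variable set up earlier in the trace, and only commands that appear in the trace are used):

    rpc_py='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $rpc_py bdev_null_create Base_1 32 512                            # 32 MiB null bdev, 512-byte blocks
    $rpc_py bdev_null_create Base_2 32 512
    $rpc_py bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid    # raid0 over both, 64 KiB strip
    $rpc_py bdev_null_resize Base_1 64                                # grow one leg only
    $rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks'              # still 131072 (64 MiB): bounded by Base_2
    $rpc_py bdev_null_resize Base_2 64                                # grow the second leg
    $rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks'              # now 262144 (128 MiB)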
00:07:32.466 19:08:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.466 ************************************ 00:07:32.466 END TEST raid0_resize_test 00:07:32.466 ************************************ 00:07:32.466 19:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:32.725 19:08:09 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:07:32.725 19:08:09 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:07:32.725 19:08:09 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:32.725 19:08:09 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:07:32.725 19:08:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:32.725 19:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:32.725 ************************************ 00:07:32.725 START TEST raid_state_function_test 00:07:32.725 ************************************ 00:07:32.725 19:08:09 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid0 2 false 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=48733 00:07:32.726 Process raid pid: 48733 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48733' 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:32.726 19:08:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48733 /var/tmp/spdk-raid.sock 00:07:32.726 19:08:09 -- common/autotest_common.sh@817 -- # '[' -z 48733 ']' 00:07:32.726 19:08:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:32.726 19:08:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:32.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
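Each raid test case in this run follows the same harness pattern that is unfolding here: a fresh bdev_svc app is launched with its own RPC socket, the script blocks until that socket accepts connections, and everything afterwards is driven through rpc.py. A simplified outline follows (waitforlisten and killprocess are the autotest_common.sh helpers visible in the trace; the backgrounding and raid_pid bookkeeping are paraphrased, not copied verbatim):

    /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock   # poll until the UNIX-domain RPC socket is listening
    # ... bdev_raid_create / bdev_raid_get_bdevs calls via rpc.py -s /var/tmp/spdk-raid.sock ...
    killprocess $raid_pid                             # terminate bdev_svc once the checks pass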
00:07:32.726 19:08:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:32.726 19:08:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:32.726 19:08:09 -- common/autotest_common.sh@10 -- # set +x 00:07:32.726 [2024-02-14 19:08:09.931536] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:32.726 [2024-02-14 19:08:09.931800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:33.296 EAL: TSC is not safe to use in SMP mode 00:07:33.296 EAL: TSC is not invariant 00:07:33.296 [2024-02-14 19:08:10.655006] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.555 [2024-02-14 19:08:10.764848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.555 [2024-02-14 19:08:10.765316] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.555 [2024-02-14 19:08:10.765321] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.555 19:08:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:33.555 19:08:10 -- common/autotest_common.sh@850 -- # return 0 00:07:33.555 19:08:10 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:33.815 [2024-02-14 19:08:10.992166] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:33.815 [2024-02-14 19:08:10.992223] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:33.815 [2024-02-14 19:08:10.992227] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:33.815 [2024-02-14 19:08:10.992234] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:33.815 "name": "Existed_Raid", 00:07:33.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.815 "strip_size_kb": 64, 00:07:33.815 "state": "configuring", 00:07:33.815 "raid_level": "raid0", 00:07:33.815 "superblock": false, 00:07:33.815 "num_base_bdevs": 2, 00:07:33.815 "num_base_bdevs_discovered": 0, 00:07:33.815 "num_base_bdevs_operational": 2, 00:07:33.815 "base_bdevs_list": [ 00:07:33.815 { 00:07:33.815 "name": 
"BaseBdev1", 00:07:33.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.815 "is_configured": false, 00:07:33.815 "data_offset": 0, 00:07:33.815 "data_size": 0 00:07:33.815 }, 00:07:33.815 { 00:07:33.815 "name": "BaseBdev2", 00:07:33.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.815 "is_configured": false, 00:07:33.815 "data_offset": 0, 00:07:33.815 "data_size": 0 00:07:33.815 } 00:07:33.815 ] 00:07:33.815 }' 00:07:33.815 19:08:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:33.815 19:08:11 -- common/autotest_common.sh@10 -- # set +x 00:07:34.074 19:08:11 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:34.333 [2024-02-14 19:08:11.592204] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:34.333 [2024-02-14 19:08:11.592235] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5dd500 name Existed_Raid, state configuring 00:07:34.334 19:08:11 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:34.595 [2024-02-14 19:08:11.836211] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:34.595 [2024-02-14 19:08:11.836276] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:34.595 [2024-02-14 19:08:11.836280] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:34.595 [2024-02-14 19:08:11.836287] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:34.595 19:08:11 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:34.855 [2024-02-14 19:08:12.017376] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:34.855 BaseBdev1 00:07:34.855 19:08:12 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:34.855 19:08:12 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:07:34.855 19:08:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:07:34.855 19:08:12 -- common/autotest_common.sh@887 -- # local i 00:07:34.855 19:08:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:07:34.855 19:08:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:07:34.855 19:08:12 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:34.855 19:08:12 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:35.114 [ 00:07:35.114 { 00:07:35.114 "name": "BaseBdev1", 00:07:35.114 "aliases": [ 00:07:35.114 "65d0efce-cb6c-11ee-af6b-4feeebbbadda" 00:07:35.114 ], 00:07:35.114 "product_name": "Malloc disk", 00:07:35.114 "block_size": 512, 00:07:35.114 "num_blocks": 65536, 00:07:35.114 "uuid": "65d0efce-cb6c-11ee-af6b-4feeebbbadda", 00:07:35.114 "assigned_rate_limits": { 00:07:35.114 "rw_ios_per_sec": 0, 00:07:35.114 "rw_mbytes_per_sec": 0, 00:07:35.114 "r_mbytes_per_sec": 0, 00:07:35.114 "w_mbytes_per_sec": 0 00:07:35.114 }, 00:07:35.114 "claimed": true, 00:07:35.114 "claim_type": "exclusive_write", 00:07:35.114 "zoned": false, 00:07:35.114 "supported_io_types": { 00:07:35.114 "read": true, 00:07:35.114 "write": true, 
00:07:35.114 "unmap": true, 00:07:35.114 "write_zeroes": true, 00:07:35.114 "flush": true, 00:07:35.114 "reset": true, 00:07:35.114 "compare": false, 00:07:35.114 "compare_and_write": false, 00:07:35.114 "abort": true, 00:07:35.114 "nvme_admin": false, 00:07:35.114 "nvme_io": false 00:07:35.114 }, 00:07:35.114 "memory_domains": [ 00:07:35.114 { 00:07:35.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.114 "dma_device_type": 2 00:07:35.114 } 00:07:35.114 ], 00:07:35.114 "driver_specific": {} 00:07:35.114 } 00:07:35.114 ] 00:07:35.114 19:08:12 -- common/autotest_common.sh@893 -- # return 0 00:07:35.114 19:08:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:35.114 19:08:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:35.114 19:08:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:35.114 19:08:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:35.114 19:08:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:35.114 19:08:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:35.114 19:08:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:35.114 19:08:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:35.114 19:08:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:35.114 19:08:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:35.114 19:08:12 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:35.114 19:08:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.373 19:08:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:35.373 "name": "Existed_Raid", 00:07:35.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.373 "strip_size_kb": 64, 00:07:35.373 "state": "configuring", 00:07:35.373 "raid_level": "raid0", 00:07:35.373 "superblock": false, 00:07:35.373 "num_base_bdevs": 2, 00:07:35.373 "num_base_bdevs_discovered": 1, 00:07:35.373 "num_base_bdevs_operational": 2, 00:07:35.373 "base_bdevs_list": [ 00:07:35.373 { 00:07:35.373 "name": "BaseBdev1", 00:07:35.373 "uuid": "65d0efce-cb6c-11ee-af6b-4feeebbbadda", 00:07:35.373 "is_configured": true, 00:07:35.373 "data_offset": 0, 00:07:35.373 "data_size": 65536 00:07:35.373 }, 00:07:35.373 { 00:07:35.373 "name": "BaseBdev2", 00:07:35.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.373 "is_configured": false, 00:07:35.373 "data_offset": 0, 00:07:35.373 "data_size": 0 00:07:35.373 } 00:07:35.373 ] 00:07:35.373 }' 00:07:35.373 19:08:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:35.373 19:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:35.632 19:08:12 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:35.891 [2024-02-14 19:08:13.152222] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:35.891 [2024-02-14 19:08:13.152255] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5dd500 name Existed_Raid, state configuring 00:07:35.891 19:08:13 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:07:35.891 19:08:13 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:36.150 [2024-02-14 19:08:13.392235] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:07:36.150 [2024-02-14 19:08:13.393225] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.150 [2024-02-14 19:08:13.393268] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:36.150 19:08:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.408 19:08:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:36.408 "name": "Existed_Raid", 00:07:36.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.408 "strip_size_kb": 64, 00:07:36.408 "state": "configuring", 00:07:36.408 "raid_level": "raid0", 00:07:36.408 "superblock": false, 00:07:36.408 "num_base_bdevs": 2, 00:07:36.408 "num_base_bdevs_discovered": 1, 00:07:36.408 "num_base_bdevs_operational": 2, 00:07:36.408 "base_bdevs_list": [ 00:07:36.408 { 00:07:36.408 "name": "BaseBdev1", 00:07:36.408 "uuid": "65d0efce-cb6c-11ee-af6b-4feeebbbadda", 00:07:36.408 "is_configured": true, 00:07:36.408 "data_offset": 0, 00:07:36.408 "data_size": 65536 00:07:36.408 }, 00:07:36.408 { 00:07:36.408 "name": "BaseBdev2", 00:07:36.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.408 "is_configured": false, 00:07:36.408 "data_offset": 0, 00:07:36.408 "data_size": 0 00:07:36.408 } 00:07:36.408 ] 00:07:36.408 }' 00:07:36.408 19:08:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:36.408 19:08:13 -- common/autotest_common.sh@10 -- # set +x 00:07:36.667 19:08:13 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:36.926 [2024-02-14 19:08:14.124380] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:36.926 [2024-02-14 19:08:14.124405] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b5dda00 00:07:36.926 [2024-02-14 19:08:14.124408] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:36.926 [2024-02-14 19:08:14.124426] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b640ec0 00:07:36.926 [2024-02-14 19:08:14.124527] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b5dda00 00:07:36.926 [2024-02-14 19:08:14.124530] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b5dda00 00:07:36.926 [2024-02-14 19:08:14.124558] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.926 BaseBdev2 00:07:36.926 19:08:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:36.926 19:08:14 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:07:36.926 19:08:14 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:07:36.926 19:08:14 -- common/autotest_common.sh@887 -- # local i 00:07:36.926 19:08:14 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:07:36.926 19:08:14 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:07:36.926 19:08:14 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:37.184 19:08:14 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.184 [ 00:07:37.184 { 00:07:37.184 "name": "BaseBdev2", 00:07:37.184 "aliases": [ 00:07:37.184 "6712987a-cb6c-11ee-af6b-4feeebbbadda" 00:07:37.184 ], 00:07:37.184 "product_name": "Malloc disk", 00:07:37.184 "block_size": 512, 00:07:37.184 "num_blocks": 65536, 00:07:37.184 "uuid": "6712987a-cb6c-11ee-af6b-4feeebbbadda", 00:07:37.184 "assigned_rate_limits": { 00:07:37.184 "rw_ios_per_sec": 0, 00:07:37.185 "rw_mbytes_per_sec": 0, 00:07:37.185 "r_mbytes_per_sec": 0, 00:07:37.185 "w_mbytes_per_sec": 0 00:07:37.185 }, 00:07:37.185 "claimed": true, 00:07:37.185 "claim_type": "exclusive_write", 00:07:37.185 "zoned": false, 00:07:37.185 "supported_io_types": { 00:07:37.185 "read": true, 00:07:37.185 "write": true, 00:07:37.185 "unmap": true, 00:07:37.185 "write_zeroes": true, 00:07:37.185 "flush": true, 00:07:37.185 "reset": true, 00:07:37.185 "compare": false, 00:07:37.185 "compare_and_write": false, 00:07:37.185 "abort": true, 00:07:37.185 "nvme_admin": false, 00:07:37.185 "nvme_io": false 00:07:37.185 }, 00:07:37.185 "memory_domains": [ 00:07:37.185 { 00:07:37.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.185 "dma_device_type": 2 00:07:37.185 } 00:07:37.185 ], 00:07:37.185 "driver_specific": {} 00:07:37.185 } 00:07:37.185 ] 00:07:37.185 19:08:14 -- common/autotest_common.sh@893 -- # return 0 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:37.185 19:08:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.443 19:08:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:37.443 "name": "Existed_Raid", 00:07:37.444 "uuid": "67129ee5-cb6c-11ee-af6b-4feeebbbadda", 
00:07:37.444 "strip_size_kb": 64, 00:07:37.444 "state": "online", 00:07:37.444 "raid_level": "raid0", 00:07:37.444 "superblock": false, 00:07:37.444 "num_base_bdevs": 2, 00:07:37.444 "num_base_bdevs_discovered": 2, 00:07:37.444 "num_base_bdevs_operational": 2, 00:07:37.444 "base_bdevs_list": [ 00:07:37.444 { 00:07:37.444 "name": "BaseBdev1", 00:07:37.444 "uuid": "65d0efce-cb6c-11ee-af6b-4feeebbbadda", 00:07:37.444 "is_configured": true, 00:07:37.444 "data_offset": 0, 00:07:37.444 "data_size": 65536 00:07:37.444 }, 00:07:37.444 { 00:07:37.444 "name": "BaseBdev2", 00:07:37.444 "uuid": "6712987a-cb6c-11ee-af6b-4feeebbbadda", 00:07:37.444 "is_configured": true, 00:07:37.444 "data_offset": 0, 00:07:37.444 "data_size": 65536 00:07:37.444 } 00:07:37.444 ] 00:07:37.444 }' 00:07:37.444 19:08:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:37.444 19:08:14 -- common/autotest_common.sh@10 -- # set +x 00:07:37.702 19:08:15 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:37.961 [2024-02-14 19:08:15.360271] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:37.961 [2024-02-14 19:08:15.360292] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.961 [2024-02-14 19:08:15.360305] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.219 19:08:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:38.219 19:08:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:38.220 "name": "Existed_Raid", 00:07:38.220 "uuid": "67129ee5-cb6c-11ee-af6b-4feeebbbadda", 00:07:38.220 "strip_size_kb": 64, 00:07:38.220 "state": "offline", 00:07:38.220 "raid_level": "raid0", 00:07:38.220 "superblock": false, 00:07:38.220 "num_base_bdevs": 2, 00:07:38.220 "num_base_bdevs_discovered": 1, 00:07:38.220 "num_base_bdevs_operational": 1, 00:07:38.220 "base_bdevs_list": [ 00:07:38.220 { 00:07:38.220 "name": null, 00:07:38.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.220 "is_configured": false, 00:07:38.220 "data_offset": 0, 00:07:38.220 "data_size": 65536 00:07:38.220 }, 00:07:38.220 { 00:07:38.220 "name": "BaseBdev2", 
00:07:38.220 "uuid": "6712987a-cb6c-11ee-af6b-4feeebbbadda", 00:07:38.220 "is_configured": true, 00:07:38.220 "data_offset": 0, 00:07:38.220 "data_size": 65536 00:07:38.220 } 00:07:38.220 ] 00:07:38.220 }' 00:07:38.220 19:08:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:38.220 19:08:15 -- common/autotest_common.sh@10 -- # set +x 00:07:38.786 19:08:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:38.786 19:08:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:38.786 19:08:15 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:38.786 19:08:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:38.786 19:08:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:38.786 19:08:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:38.786 19:08:16 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:39.045 [2024-02-14 19:08:16.397189] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:39.045 [2024-02-14 19:08:16.397209] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5dda00 name Existed_Raid, state offline 00:07:39.045 19:08:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:39.045 19:08:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:39.045 19:08:16 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:39.045 19:08:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:39.304 19:08:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:39.304 19:08:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:39.304 19:08:16 -- bdev/bdev_raid.sh@287 -- # killprocess 48733 00:07:39.304 19:08:16 -- common/autotest_common.sh@924 -- # '[' -z 48733 ']' 00:07:39.304 19:08:16 -- common/autotest_common.sh@928 -- # kill -0 48733 00:07:39.304 19:08:16 -- common/autotest_common.sh@929 -- # uname 00:07:39.304 19:08:16 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:07:39.304 19:08:16 -- common/autotest_common.sh@932 -- # ps -c -o command 48733 00:07:39.304 19:08:16 -- common/autotest_common.sh@932 -- # tail -1 00:07:39.304 19:08:16 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:07:39.304 19:08:16 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:07:39.304 killing process with pid 48733 00:07:39.304 19:08:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 48733' 00:07:39.304 19:08:16 -- common/autotest_common.sh@943 -- # kill 48733 00:07:39.304 [2024-02-14 19:08:16.648940] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.304 19:08:16 -- common/autotest_common.sh@948 -- # wait 48733 00:07:39.304 [2024-02-14 19:08:16.648987] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:39.563 00:07:39.563 real 0m6.958s 00:07:39.563 user 0m11.398s 00:07:39.563 sys 0m1.724s 00:07:39.563 19:08:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.563 19:08:16 -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 ************************************ 00:07:39.563 END TEST raid_state_function_test 00:07:39.563 ************************************ 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:39.563 
19:08:16 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:07:39.563 19:08:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:39.563 19:08:16 -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 ************************************ 00:07:39.563 START TEST raid_state_function_test_sb 00:07:39.563 ************************************ 00:07:39.563 19:08:16 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid0 2 true 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=48929 00:07:39.563 Process raid pid: 48929 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48929' 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48929 /var/tmp/spdk-raid.sock 00:07:39.563 19:08:16 -- common/autotest_common.sh@817 -- # '[' -z 48929 ']' 00:07:39.563 19:08:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:39.563 19:08:16 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:39.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:39.563 19:08:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:39.563 19:08:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:39.563 19:08:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:39.563 19:08:16 -- common/autotest_common.sh@10 -- # set +x 00:07:39.563 [2024-02-14 19:08:16.939678] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
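raid_state_function_test_sb repeats the same state-machine checks as the previous test, the only difference being superblock=true, so bdev_raid_create is invoked with -s. A condensed illustration (the command matches the traced call; the data_offset/data_size figures are the ones reported later in this trace by bdev_raid_get_bdevs):

    # superblock variant: RAID metadata is written onto the base bdevs themselves
    /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # a 65536-block (32 MiB) malloc base bdev then reports data_offset 2048 and
    # data_size 63488, since the leading blocks are reserved for the superblock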
00:07:39.563 [2024-02-14 19:08:16.939997] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:40.501 EAL: TSC is not safe to use in SMP mode 00:07:40.501 EAL: TSC is not invariant 00:07:40.501 [2024-02-14 19:08:17.708355] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.501 [2024-02-14 19:08:17.826681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.501 [2024-02-14 19:08:17.827198] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.501 [2024-02-14 19:08:17.827203] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.760 19:08:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:40.760 19:08:17 -- common/autotest_common.sh@850 -- # return 0 00:07:40.760 19:08:17 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:41.018 [2024-02-14 19:08:18.230621] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.018 [2024-02-14 19:08:18.230696] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.018 [2024-02-14 19:08:18.230702] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.018 [2024-02-14 19:08:18.230710] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.018 19:08:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:41.018 19:08:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:41.018 19:08:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:41.018 19:08:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:41.018 19:08:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:41.018 19:08:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:41.018 19:08:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:41.018 19:08:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:41.018 19:08:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:41.018 19:08:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:41.018 19:08:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:41.018 19:08:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.287 19:08:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:41.287 "name": "Existed_Raid", 00:07:41.287 "uuid": "69852d4b-cb6c-11ee-af6b-4feeebbbadda", 00:07:41.287 "strip_size_kb": 64, 00:07:41.287 "state": "configuring", 00:07:41.287 "raid_level": "raid0", 00:07:41.287 "superblock": true, 00:07:41.287 "num_base_bdevs": 2, 00:07:41.287 "num_base_bdevs_discovered": 0, 00:07:41.287 "num_base_bdevs_operational": 2, 00:07:41.287 "base_bdevs_list": [ 00:07:41.287 { 00:07:41.287 "name": "BaseBdev1", 00:07:41.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.287 "is_configured": false, 00:07:41.287 "data_offset": 0, 00:07:41.287 "data_size": 0 00:07:41.287 }, 00:07:41.287 { 00:07:41.287 "name": "BaseBdev2", 00:07:41.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.288 "is_configured": false, 00:07:41.288 "data_offset": 0, 00:07:41.288 "data_size": 0 00:07:41.288 } 00:07:41.288 ] 
00:07:41.288 }' 00:07:41.288 19:08:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:41.288 19:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:41.550 19:08:18 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:41.809 [2024-02-14 19:08:19.106657] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.809 [2024-02-14 19:08:19.106691] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cba0500 name Existed_Raid, state configuring 00:07:41.809 19:08:19 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:42.067 [2024-02-14 19:08:19.386683] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.067 [2024-02-14 19:08:19.386734] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.067 [2024-02-14 19:08:19.386738] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.067 [2024-02-14 19:08:19.386747] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.067 19:08:19 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:42.326 [2024-02-14 19:08:19.663871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.326 BaseBdev1 00:07:42.326 19:08:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:42.326 19:08:19 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:07:42.326 19:08:19 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:07:42.326 19:08:19 -- common/autotest_common.sh@887 -- # local i 00:07:42.326 19:08:19 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:07:42.326 19:08:19 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:07:42.326 19:08:19 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:42.584 19:08:19 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:42.843 [ 00:07:42.843 { 00:07:42.843 "name": "BaseBdev1", 00:07:42.843 "aliases": [ 00:07:42.843 "6a5fb205-cb6c-11ee-af6b-4feeebbbadda" 00:07:42.843 ], 00:07:42.843 "product_name": "Malloc disk", 00:07:42.843 "block_size": 512, 00:07:42.843 "num_blocks": 65536, 00:07:42.843 "uuid": "6a5fb205-cb6c-11ee-af6b-4feeebbbadda", 00:07:42.843 "assigned_rate_limits": { 00:07:42.843 "rw_ios_per_sec": 0, 00:07:42.843 "rw_mbytes_per_sec": 0, 00:07:42.843 "r_mbytes_per_sec": 0, 00:07:42.843 "w_mbytes_per_sec": 0 00:07:42.843 }, 00:07:42.843 "claimed": true, 00:07:42.843 "claim_type": "exclusive_write", 00:07:42.843 "zoned": false, 00:07:42.843 "supported_io_types": { 00:07:42.843 "read": true, 00:07:42.843 "write": true, 00:07:42.843 "unmap": true, 00:07:42.843 "write_zeroes": true, 00:07:42.843 "flush": true, 00:07:42.843 "reset": true, 00:07:42.843 "compare": false, 00:07:42.843 "compare_and_write": false, 00:07:42.843 "abort": true, 00:07:42.843 "nvme_admin": false, 00:07:42.843 "nvme_io": false 00:07:42.843 }, 00:07:42.843 "memory_domains": [ 00:07:42.843 { 00:07:42.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.843 
"dma_device_type": 2 00:07:42.843 } 00:07:42.843 ], 00:07:42.843 "driver_specific": {} 00:07:42.843 } 00:07:42.843 ] 00:07:42.843 19:08:20 -- common/autotest_common.sh@893 -- # return 0 00:07:42.843 19:08:20 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:42.843 19:08:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:42.843 19:08:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:42.843 19:08:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:42.843 19:08:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:42.843 19:08:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:42.843 19:08:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:42.843 19:08:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:42.843 19:08:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:42.843 19:08:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:42.843 19:08:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.843 19:08:20 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:43.102 19:08:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:43.102 "name": "Existed_Raid", 00:07:43.102 "uuid": "6a359419-cb6c-11ee-af6b-4feeebbbadda", 00:07:43.102 "strip_size_kb": 64, 00:07:43.102 "state": "configuring", 00:07:43.102 "raid_level": "raid0", 00:07:43.102 "superblock": true, 00:07:43.102 "num_base_bdevs": 2, 00:07:43.102 "num_base_bdevs_discovered": 1, 00:07:43.102 "num_base_bdevs_operational": 2, 00:07:43.102 "base_bdevs_list": [ 00:07:43.102 { 00:07:43.102 "name": "BaseBdev1", 00:07:43.102 "uuid": "6a5fb205-cb6c-11ee-af6b-4feeebbbadda", 00:07:43.102 "is_configured": true, 00:07:43.102 "data_offset": 2048, 00:07:43.102 "data_size": 63488 00:07:43.102 }, 00:07:43.102 { 00:07:43.102 "name": "BaseBdev2", 00:07:43.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.102 "is_configured": false, 00:07:43.102 "data_offset": 0, 00:07:43.102 "data_size": 0 00:07:43.102 } 00:07:43.102 ] 00:07:43.102 }' 00:07:43.102 19:08:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:43.102 19:08:20 -- common/autotest_common.sh@10 -- # set +x 00:07:43.670 19:08:20 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:43.670 [2024-02-14 19:08:21.070845] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.670 [2024-02-14 19:08:21.070894] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cba0500 name Existed_Raid, state configuring 00:07:43.929 19:08:21 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:07:43.929 19:08:21 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:43.929 19:08:21 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.187 BaseBdev1 00:07:44.187 19:08:21 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:07:44.187 19:08:21 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:07:44.187 19:08:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:07:44.187 19:08:21 -- common/autotest_common.sh@887 -- # local i 00:07:44.187 19:08:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 
00:07:44.187 19:08:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:07:44.187 19:08:21 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:44.446 19:08:21 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.704 [ 00:07:44.704 { 00:07:44.704 "name": "BaseBdev1", 00:07:44.704 "aliases": [ 00:07:44.704 "6b7633db-cb6c-11ee-af6b-4feeebbbadda" 00:07:44.704 ], 00:07:44.704 "product_name": "Malloc disk", 00:07:44.704 "block_size": 512, 00:07:44.704 "num_blocks": 65536, 00:07:44.704 "uuid": "6b7633db-cb6c-11ee-af6b-4feeebbbadda", 00:07:44.704 "assigned_rate_limits": { 00:07:44.704 "rw_ios_per_sec": 0, 00:07:44.704 "rw_mbytes_per_sec": 0, 00:07:44.704 "r_mbytes_per_sec": 0, 00:07:44.704 "w_mbytes_per_sec": 0 00:07:44.704 }, 00:07:44.704 "claimed": false, 00:07:44.704 "zoned": false, 00:07:44.704 "supported_io_types": { 00:07:44.704 "read": true, 00:07:44.704 "write": true, 00:07:44.704 "unmap": true, 00:07:44.704 "write_zeroes": true, 00:07:44.704 "flush": true, 00:07:44.704 "reset": true, 00:07:44.704 "compare": false, 00:07:44.704 "compare_and_write": false, 00:07:44.704 "abort": true, 00:07:44.704 "nvme_admin": false, 00:07:44.704 "nvme_io": false 00:07:44.704 }, 00:07:44.704 "memory_domains": [ 00:07:44.704 { 00:07:44.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.704 "dma_device_type": 2 00:07:44.704 } 00:07:44.704 ], 00:07:44.704 "driver_specific": {} 00:07:44.704 } 00:07:44.704 ] 00:07:44.962 19:08:22 -- common/autotest_common.sh@893 -- # return 0 00:07:44.962 19:08:22 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:45.221 [2024-02-14 19:08:22.480088] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.221 [2024-02-14 19:08:22.480867] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.221 [2024-02-14 19:08:22.480929] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:45.221 19:08:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.479 19:08:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:07:45.479 "name": "Existed_Raid", 00:07:45.479 "uuid": "6c0d97bf-cb6c-11ee-af6b-4feeebbbadda", 00:07:45.479 "strip_size_kb": 64, 00:07:45.479 "state": "configuring", 00:07:45.479 "raid_level": "raid0", 00:07:45.479 "superblock": true, 00:07:45.479 "num_base_bdevs": 2, 00:07:45.479 "num_base_bdevs_discovered": 1, 00:07:45.479 "num_base_bdevs_operational": 2, 00:07:45.479 "base_bdevs_list": [ 00:07:45.479 { 00:07:45.479 "name": "BaseBdev1", 00:07:45.479 "uuid": "6b7633db-cb6c-11ee-af6b-4feeebbbadda", 00:07:45.479 "is_configured": true, 00:07:45.479 "data_offset": 2048, 00:07:45.479 "data_size": 63488 00:07:45.479 }, 00:07:45.479 { 00:07:45.479 "name": "BaseBdev2", 00:07:45.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.480 "is_configured": false, 00:07:45.480 "data_offset": 0, 00:07:45.480 "data_size": 0 00:07:45.480 } 00:07:45.480 ] 00:07:45.480 }' 00:07:45.480 19:08:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:45.480 19:08:22 -- common/autotest_common.sh@10 -- # set +x 00:07:46.046 19:08:23 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:46.342 [2024-02-14 19:08:23.608379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.342 [2024-02-14 19:08:23.608508] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82cba0a00 00:07:46.342 [2024-02-14 19:08:23.608519] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:46.342 [2024-02-14 19:08:23.608554] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82cc03ec0 00:07:46.342 [2024-02-14 19:08:23.608608] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82cba0a00 00:07:46.342 [2024-02-14 19:08:23.608617] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82cba0a00 00:07:46.342 [2024-02-14 19:08:23.608646] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.342 BaseBdev2 00:07:46.342 19:08:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:46.342 19:08:23 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:07:46.342 19:08:23 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:07:46.342 19:08:23 -- common/autotest_common.sh@887 -- # local i 00:07:46.342 19:08:23 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:07:46.342 19:08:23 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:07:46.342 19:08:23 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:46.606 19:08:23 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.171 [ 00:07:47.171 { 00:07:47.171 "name": "BaseBdev2", 00:07:47.171 "aliases": [ 00:07:47.171 "6cb9bb19-cb6c-11ee-af6b-4feeebbbadda" 00:07:47.171 ], 00:07:47.171 "product_name": "Malloc disk", 00:07:47.171 "block_size": 512, 00:07:47.171 "num_blocks": 65536, 00:07:47.171 "uuid": "6cb9bb19-cb6c-11ee-af6b-4feeebbbadda", 00:07:47.171 "assigned_rate_limits": { 00:07:47.171 "rw_ios_per_sec": 0, 00:07:47.171 "rw_mbytes_per_sec": 0, 00:07:47.171 "r_mbytes_per_sec": 0, 00:07:47.171 "w_mbytes_per_sec": 0 00:07:47.171 }, 00:07:47.171 "claimed": true, 00:07:47.171 "claim_type": "exclusive_write", 00:07:47.171 "zoned": false, 00:07:47.171 "supported_io_types": { 
00:07:47.171 "read": true, 00:07:47.171 "write": true, 00:07:47.171 "unmap": true, 00:07:47.171 "write_zeroes": true, 00:07:47.171 "flush": true, 00:07:47.171 "reset": true, 00:07:47.171 "compare": false, 00:07:47.171 "compare_and_write": false, 00:07:47.171 "abort": true, 00:07:47.171 "nvme_admin": false, 00:07:47.171 "nvme_io": false 00:07:47.171 }, 00:07:47.171 "memory_domains": [ 00:07:47.171 { 00:07:47.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.171 "dma_device_type": 2 00:07:47.171 } 00:07:47.171 ], 00:07:47.171 "driver_specific": {} 00:07:47.171 } 00:07:47.171 ] 00:07:47.171 19:08:24 -- common/autotest_common.sh@893 -- # return 0 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:47.171 19:08:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.429 19:08:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:47.429 "name": "Existed_Raid", 00:07:47.429 "uuid": "6c0d97bf-cb6c-11ee-af6b-4feeebbbadda", 00:07:47.429 "strip_size_kb": 64, 00:07:47.429 "state": "online", 00:07:47.429 "raid_level": "raid0", 00:07:47.429 "superblock": true, 00:07:47.429 "num_base_bdevs": 2, 00:07:47.429 "num_base_bdevs_discovered": 2, 00:07:47.429 "num_base_bdevs_operational": 2, 00:07:47.429 "base_bdevs_list": [ 00:07:47.429 { 00:07:47.429 "name": "BaseBdev1", 00:07:47.429 "uuid": "6b7633db-cb6c-11ee-af6b-4feeebbbadda", 00:07:47.429 "is_configured": true, 00:07:47.429 "data_offset": 2048, 00:07:47.429 "data_size": 63488 00:07:47.429 }, 00:07:47.429 { 00:07:47.430 "name": "BaseBdev2", 00:07:47.430 "uuid": "6cb9bb19-cb6c-11ee-af6b-4feeebbbadda", 00:07:47.430 "is_configured": true, 00:07:47.430 "data_offset": 2048, 00:07:47.430 "data_size": 63488 00:07:47.430 } 00:07:47.430 ] 00:07:47.430 }' 00:07:47.430 19:08:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:47.430 19:08:24 -- common/autotest_common.sh@10 -- # set +x 00:07:47.688 19:08:25 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:47.948 [2024-02-14 19:08:25.240346] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:47.948 [2024-02-14 19:08:25.240379] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.948 [2024-02-14 19:08:25.240395] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@264 
-- # has_redundancy raid0 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.948 19:08:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.207 19:08:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:48.207 "name": "Existed_Raid", 00:07:48.207 "uuid": "6c0d97bf-cb6c-11ee-af6b-4feeebbbadda", 00:07:48.207 "strip_size_kb": 64, 00:07:48.207 "state": "offline", 00:07:48.207 "raid_level": "raid0", 00:07:48.207 "superblock": true, 00:07:48.207 "num_base_bdevs": 2, 00:07:48.207 "num_base_bdevs_discovered": 1, 00:07:48.207 "num_base_bdevs_operational": 1, 00:07:48.207 "base_bdevs_list": [ 00:07:48.207 { 00:07:48.207 "name": null, 00:07:48.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.207 "is_configured": false, 00:07:48.207 "data_offset": 2048, 00:07:48.207 "data_size": 63488 00:07:48.207 }, 00:07:48.207 { 00:07:48.207 "name": "BaseBdev2", 00:07:48.207 "uuid": "6cb9bb19-cb6c-11ee-af6b-4feeebbbadda", 00:07:48.207 "is_configured": true, 00:07:48.207 "data_offset": 2048, 00:07:48.207 "data_size": 63488 00:07:48.207 } 00:07:48.207 ] 00:07:48.207 }' 00:07:48.208 19:08:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:48.208 19:08:25 -- common/autotest_common.sh@10 -- # set +x 00:07:48.466 19:08:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:48.466 19:08:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:48.466 19:08:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:48.466 19:08:25 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.725 19:08:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:48.725 19:08:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.725 19:08:25 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:48.725 [2024-02-14 19:08:26.133707] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.725 [2024-02-14 19:08:26.133746] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82cba0a00 name Existed_Raid, state offline 00:07:48.984 19:08:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:48.984 19:08:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:48.984 19:08:26 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.984 19:08:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:48.984 19:08:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:48.984 19:08:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:48.984 19:08:26 -- bdev/bdev_raid.sh@287 -- # killprocess 48929 00:07:48.984 19:08:26 -- common/autotest_common.sh@924 -- # '[' -z 48929 ']' 00:07:48.984 19:08:26 -- common/autotest_common.sh@928 -- # kill -0 48929 00:07:48.984 19:08:26 -- common/autotest_common.sh@929 -- # uname 00:07:48.984 19:08:26 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:07:48.984 19:08:26 -- common/autotest_common.sh@932 -- # ps -c -o command 48929 00:07:48.985 19:08:26 -- common/autotest_common.sh@932 -- # tail -1 00:07:48.985 19:08:26 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:07:48.985 killing process with pid 48929 00:07:48.985 19:08:26 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:07:48.985 19:08:26 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 48929' 00:07:48.985 19:08:26 -- common/autotest_common.sh@943 -- # kill 48929 00:07:48.985 [2024-02-14 19:08:26.369088] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.985 [2024-02-14 19:08:26.369135] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.985 19:08:26 -- common/autotest_common.sh@948 -- # wait 48929 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:49.244 00:07:49.244 real 0m9.677s 00:07:49.244 user 0m16.584s 00:07:49.244 sys 0m1.952s 00:07:49.244 ************************************ 00:07:49.244 END TEST raid_state_function_test_sb 00:07:49.244 ************************************ 00:07:49.244 19:08:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.244 19:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:49.244 19:08:26 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:49.244 19:08:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:49.244 19:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:49.244 ************************************ 00:07:49.244 START TEST raid_superblock_test 00:07:49.244 ************************************ 00:07:49.244 19:08:26 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid0 2 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@350 -- # 
strip_size=64 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@357 -- # raid_pid=49128 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@358 -- # waitforlisten 49128 /var/tmp/spdk-raid.sock 00:07:49.244 19:08:26 -- common/autotest_common.sh@817 -- # '[' -z 49128 ']' 00:07:49.244 19:08:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:49.244 19:08:26 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:49.244 19:08:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:49.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:49.244 19:08:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:49.244 19:08:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:49.244 19:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:49.244 [2024-02-14 19:08:26.656393] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:49.244 [2024-02-14 19:08:26.656573] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:50.182 EAL: TSC is not safe to use in SMP mode 00:07:50.182 EAL: TSC is not invariant 00:07:50.182 [2024-02-14 19:08:27.573661] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.442 [2024-02-14 19:08:27.685357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.442 [2024-02-14 19:08:27.685823] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.442 [2024-02-14 19:08:27.685827] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.010 19:08:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:51.010 19:08:28 -- common/autotest_common.sh@850 -- # return 0 00:07:51.010 19:08:28 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:07:51.010 19:08:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:51.010 19:08:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:07:51.010 19:08:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:07:51.010 19:08:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:51.010 19:08:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:51.010 19:08:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:51.010 19:08:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:51.010 19:08:28 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:51.268 malloc1 00:07:51.268 19:08:28 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:51.527 [2024-02-14 19:08:28.732681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:51.527 [2024-02-14 19:08:28.732744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.527 [2024-02-14 19:08:28.733342] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bb8d780 00:07:51.527 [2024-02-14 19:08:28.733368] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.527 [2024-02-14 19:08:28.734412] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.527 [2024-02-14 19:08:28.734439] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:51.527 pt1 00:07:51.527 19:08:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:51.527 19:08:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:51.527 19:08:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:07:51.527 19:08:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:07:51.527 19:08:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:51.527 19:08:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:51.527 19:08:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:51.527 19:08:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:51.527 19:08:28 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:51.786 malloc2 00:07:51.786 19:08:29 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:52.045 [2024-02-14 19:08:29.252718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:52.045 [2024-02-14 19:08:29.252782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.045 [2024-02-14 19:08:29.252814] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bb8dc80 00:07:52.045 [2024-02-14 19:08:29.252822] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.045 [2024-02-14 19:08:29.253516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.045 [2024-02-14 19:08:29.253544] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:52.045 pt2 00:07:52.045 19:08:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:52.045 19:08:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:52.045 19:08:29 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:07:52.305 [2024-02-14 19:08:29.524759] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:52.305 [2024-02-14 19:08:29.525396] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:52.305 [2024-02-14 19:08:29.525450] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bb8df00 00:07:52.305 [2024-02-14 19:08:29.525455] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:52.305 [2024-02-14 19:08:29.525492] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bbf0e20 00:07:52.305 [2024-02-14 19:08:29.525558] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bb8df00 00:07:52.305 [2024-02-14 19:08:29.525561] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bb8df00 00:07:52.305 [2024-02-14 19:08:29.525580] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.305 19:08:29 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:52.305 19:08:29 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:52.305 19:08:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:52.305 19:08:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:52.305 19:08:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:52.305 19:08:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:52.305 19:08:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:52.305 19:08:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:52.305 19:08:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:52.305 19:08:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:52.305 19:08:29 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:52.305 19:08:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.564 19:08:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:52.564 "name": "raid_bdev1", 00:07:52.564 "uuid": "704086aa-cb6c-11ee-af6b-4feeebbbadda", 00:07:52.564 "strip_size_kb": 64, 00:07:52.564 "state": "online", 00:07:52.564 "raid_level": "raid0", 00:07:52.564 "superblock": true, 00:07:52.564 "num_base_bdevs": 2, 00:07:52.564 "num_base_bdevs_discovered": 2, 00:07:52.564 "num_base_bdevs_operational": 2, 00:07:52.564 "base_bdevs_list": [ 00:07:52.564 { 00:07:52.564 "name": "pt1", 00:07:52.564 "uuid": "e3cb8b57-3f41-eb52-bd2e-65203f070119", 00:07:52.564 "is_configured": true, 00:07:52.564 "data_offset": 2048, 00:07:52.564 "data_size": 63488 00:07:52.564 }, 00:07:52.564 { 00:07:52.564 "name": "pt2", 00:07:52.564 "uuid": "b6a07f6d-f46b-565a-9968-efe4df2b7155", 00:07:52.564 "is_configured": true, 00:07:52.564 "data_offset": 2048, 00:07:52.564 "data_size": 63488 00:07:52.564 } 00:07:52.564 ] 00:07:52.564 }' 00:07:52.564 19:08:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:52.564 19:08:29 -- common/autotest_common.sh@10 -- # set +x 00:07:52.824 19:08:30 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:52.824 19:08:30 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:07:53.083 [2024-02-14 19:08:30.356821] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.083 19:08:30 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=704086aa-cb6c-11ee-af6b-4feeebbbadda 00:07:53.083 19:08:30 -- bdev/bdev_raid.sh@380 -- # '[' -z 704086aa-cb6c-11ee-af6b-4feeebbbadda ']' 00:07:53.083 19:08:30 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:53.343 [2024-02-14 19:08:30.548805] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:53.343 [2024-02-14 19:08:30.548827] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.343 [2024-02-14 19:08:30.548841] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.343 [2024-02-14 19:08:30.548852] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.343 [2024-02-14 19:08:30.548856] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bb8df00 name raid_bdev1, state offline 00:07:53.343 19:08:30 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:53.343 19:08:30 -- 
bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:07:53.602 19:08:30 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:07:53.602 19:08:30 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:07:53.602 19:08:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:53.602 19:08:30 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:53.602 19:08:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:53.602 19:08:31 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:53.862 19:08:31 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:53.862 19:08:31 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:54.122 19:08:31 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:07:54.122 19:08:31 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:54.122 19:08:31 -- common/autotest_common.sh@638 -- # local es=0 00:07:54.122 19:08:31 -- common/autotest_common.sh@640 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:54.122 19:08:31 -- common/autotest_common.sh@626 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.122 19:08:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:54.122 19:08:31 -- common/autotest_common.sh@630 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.122 19:08:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:54.122 19:08:31 -- common/autotest_common.sh@632 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.122 19:08:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:54.122 19:08:31 -- common/autotest_common.sh@632 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:54.122 19:08:31 -- common/autotest_common.sh@632 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:54.122 19:08:31 -- common/autotest_common.sh@641 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:54.381 [2024-02-14 19:08:31.688889] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:54.381 [2024-02-14 19:08:31.689600] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:54.381 [2024-02-14 19:08:31.689622] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:07:54.381 [2024-02-14 19:08:31.689657] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:07:54.381 [2024-02-14 19:08:31.689666] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.381 [2024-02-14 19:08:31.689669] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bb8dc80 name raid_bdev1, state configuring 00:07:54.381 request: 00:07:54.381 { 00:07:54.381 "name": "raid_bdev1", 00:07:54.381 "raid_level": "raid0", 00:07:54.381 "base_bdevs": [ 00:07:54.381 "malloc1", 00:07:54.381 "malloc2" 00:07:54.381 ], 00:07:54.381 "superblock": 
false, 00:07:54.381 "strip_size_kb": 64, 00:07:54.381 "method": "bdev_raid_create", 00:07:54.381 "req_id": 1 00:07:54.381 } 00:07:54.381 Got JSON-RPC error response 00:07:54.381 response: 00:07:54.381 { 00:07:54.381 "code": -17, 00:07:54.382 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:54.382 } 00:07:54.382 19:08:31 -- common/autotest_common.sh@641 -- # es=1 00:07:54.382 19:08:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:54.382 19:08:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:54.382 19:08:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:54.382 19:08:31 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.382 19:08:31 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:07:54.640 19:08:31 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:07:54.640 19:08:31 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:07:54.640 19:08:31 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:54.918 [2024-02-14 19:08:32.168922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:54.918 [2024-02-14 19:08:32.168975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.918 [2024-02-14 19:08:32.169007] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bb8d780 00:07:54.918 [2024-02-14 19:08:32.169014] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.918 [2024-02-14 19:08:32.169771] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.918 [2024-02-14 19:08:32.169800] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:54.918 [2024-02-14 19:08:32.169819] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:07:54.918 [2024-02-14 19:08:32.169831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.918 pt1 00:07:54.918 19:08:32 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:54.918 19:08:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:54.918 19:08:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:54.918 19:08:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:54.918 19:08:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:54.918 19:08:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:54.918 19:08:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:54.918 19:08:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:54.918 19:08:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:54.918 19:08:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:54.918 19:08:32 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.918 19:08:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.176 19:08:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:55.176 "name": "raid_bdev1", 00:07:55.176 "uuid": "704086aa-cb6c-11ee-af6b-4feeebbbadda", 00:07:55.176 "strip_size_kb": 64, 00:07:55.176 "state": "configuring", 00:07:55.176 "raid_level": "raid0", 00:07:55.176 "superblock": true, 00:07:55.176 "num_base_bdevs": 2, 00:07:55.176 
"num_base_bdevs_discovered": 1, 00:07:55.176 "num_base_bdevs_operational": 2, 00:07:55.176 "base_bdevs_list": [ 00:07:55.176 { 00:07:55.176 "name": "pt1", 00:07:55.176 "uuid": "e3cb8b57-3f41-eb52-bd2e-65203f070119", 00:07:55.176 "is_configured": true, 00:07:55.176 "data_offset": 2048, 00:07:55.176 "data_size": 63488 00:07:55.176 }, 00:07:55.176 { 00:07:55.176 "name": null, 00:07:55.176 "uuid": "b6a07f6d-f46b-565a-9968-efe4df2b7155", 00:07:55.176 "is_configured": false, 00:07:55.176 "data_offset": 2048, 00:07:55.176 "data_size": 63488 00:07:55.176 } 00:07:55.176 ] 00:07:55.176 }' 00:07:55.176 19:08:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:55.176 19:08:32 -- common/autotest_common.sh@10 -- # set +x 00:07:55.435 19:08:32 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:07:55.435 19:08:32 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:07:55.435 19:08:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:55.435 19:08:32 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:55.693 [2024-02-14 19:08:32.880969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:55.693 [2024-02-14 19:08:32.881020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.693 [2024-02-14 19:08:32.881051] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bb8df00 00:07:55.693 [2024-02-14 19:08:32.881059] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.693 [2024-02-14 19:08:32.881162] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.693 [2024-02-14 19:08:32.881170] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:55.693 [2024-02-14 19:08:32.881185] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:07:55.693 [2024-02-14 19:08:32.881192] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:55.693 [2024-02-14 19:08:32.881212] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bb8e180 00:07:55.693 [2024-02-14 19:08:32.881215] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:55.693 [2024-02-14 19:08:32.881230] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bbf0e20 00:07:55.693 [2024-02-14 19:08:32.881272] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bb8e180 00:07:55.693 [2024-02-14 19:08:32.881275] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bb8e180 00:07:55.693 [2024-02-14 19:08:32.881291] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.693 pt2 00:07:55.693 19:08:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:07:55.693 19:08:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:55.693 19:08:32 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:55.693 19:08:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:55.693 19:08:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:55.693 19:08:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:55.693 19:08:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:55.693 19:08:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:55.693 19:08:32 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:55.693 19:08:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:55.693 19:08:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:55.693 19:08:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:55.693 19:08:32 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:55.693 19:08:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.693 19:08:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:55.693 "name": "raid_bdev1", 00:07:55.693 "uuid": "704086aa-cb6c-11ee-af6b-4feeebbbadda", 00:07:55.693 "strip_size_kb": 64, 00:07:55.693 "state": "online", 00:07:55.693 "raid_level": "raid0", 00:07:55.693 "superblock": true, 00:07:55.693 "num_base_bdevs": 2, 00:07:55.693 "num_base_bdevs_discovered": 2, 00:07:55.693 "num_base_bdevs_operational": 2, 00:07:55.693 "base_bdevs_list": [ 00:07:55.693 { 00:07:55.693 "name": "pt1", 00:07:55.693 "uuid": "e3cb8b57-3f41-eb52-bd2e-65203f070119", 00:07:55.693 "is_configured": true, 00:07:55.693 "data_offset": 2048, 00:07:55.693 "data_size": 63488 00:07:55.693 }, 00:07:55.693 { 00:07:55.693 "name": "pt2", 00:07:55.693 "uuid": "b6a07f6d-f46b-565a-9968-efe4df2b7155", 00:07:55.693 "is_configured": true, 00:07:55.693 "data_offset": 2048, 00:07:55.693 "data_size": 63488 00:07:55.693 } 00:07:55.693 ] 00:07:55.693 }' 00:07:55.693 19:08:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:55.693 19:08:33 -- common/autotest_common.sh@10 -- # set +x 00:07:55.954 19:08:33 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:55.954 19:08:33 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:07:56.211 [2024-02-14 19:08:33.617036] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.469 19:08:33 -- bdev/bdev_raid.sh@430 -- # '[' 704086aa-cb6c-11ee-af6b-4feeebbbadda '!=' 704086aa-cb6c-11ee-af6b-4feeebbbadda ']' 00:07:56.469 19:08:33 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:07:56.469 19:08:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:56.469 19:08:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:56.469 19:08:33 -- bdev/bdev_raid.sh@511 -- # killprocess 49128 00:07:56.469 19:08:33 -- common/autotest_common.sh@924 -- # '[' -z 49128 ']' 00:07:56.469 19:08:33 -- common/autotest_common.sh@928 -- # kill -0 49128 00:07:56.469 19:08:33 -- common/autotest_common.sh@929 -- # uname 00:07:56.469 19:08:33 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:07:56.469 19:08:33 -- common/autotest_common.sh@932 -- # ps -c -o command 49128 00:07:56.469 19:08:33 -- common/autotest_common.sh@932 -- # tail -1 00:07:56.469 19:08:33 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:07:56.469 killing process with pid 49128 00:07:56.469 19:08:33 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:07:56.469 19:08:33 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 49128' 00:07:56.469 19:08:33 -- common/autotest_common.sh@943 -- # kill 49128 00:07:56.469 [2024-02-14 19:08:33.651557] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.469 19:08:33 -- common/autotest_common.sh@948 -- # wait 49128 00:07:56.470 [2024-02-14 19:08:33.651586] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.470 [2024-02-14 19:08:33.651597] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.470 [2024-02-14 19:08:33.651601] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bb8e180 name raid_bdev1, state offline 00:07:56.470 [2024-02-14 19:08:33.670350] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:56.728 19:08:33 -- bdev/bdev_raid.sh@513 -- # return 0 00:07:56.728 00:07:56.728 real 0m7.258s 00:07:56.728 user 0m11.461s 00:07:56.728 sys 0m1.535s 00:07:56.728 19:08:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.728 19:08:33 -- common/autotest_common.sh@10 -- # set +x 00:07:56.728 ************************************ 00:07:56.728 END TEST raid_superblock_test 00:07:56.728 ************************************ 00:07:56.728 19:08:33 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:07:56.728 19:08:33 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:56.728 19:08:33 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:07:56.728 19:08:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:56.728 19:08:33 -- common/autotest_common.sh@10 -- # set +x 00:07:56.728 ************************************ 00:07:56.728 START TEST raid_state_function_test 00:07:56.728 ************************************ 00:07:56.728 19:08:33 -- common/autotest_common.sh@1102 -- # raid_state_function_test concat 2 false 00:07:56.728 19:08:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:07:56.728 19:08:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:07:56.728 19:08:33 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:07:56.728 19:08:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:56.728 19:08:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:56.728 19:08:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:56.728 19:08:33 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:56.728 19:08:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:56.728 19:08:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:56.728 19:08:33 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=49275 00:07:56.729 Process raid pid: 49275 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49275' 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 49275 /var/tmp/spdk-raid.sock 00:07:56.729 19:08:33 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 
0 -L bdev_raid 00:07:56.729 19:08:33 -- common/autotest_common.sh@817 -- # '[' -z 49275 ']' 00:07:56.729 19:08:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:56.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:56.729 19:08:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:56.729 19:08:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:56.729 19:08:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:56.729 19:08:33 -- common/autotest_common.sh@10 -- # set +x 00:07:56.729 [2024-02-14 19:08:33.964619] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:07:56.729 [2024-02-14 19:08:33.964932] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:57.664 EAL: TSC is not safe to use in SMP mode 00:07:57.664 EAL: TSC is not invariant 00:07:57.664 [2024-02-14 19:08:34.720600] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.664 [2024-02-14 19:08:34.834296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.664 [2024-02-14 19:08:34.834757] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.664 [2024-02-14 19:08:34.834765] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.664 19:08:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:57.664 19:08:34 -- common/autotest_common.sh@850 -- # return 0 00:07:57.664 19:08:34 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:57.922 [2024-02-14 19:08:35.213914] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:57.922 [2024-02-14 19:08:35.213978] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:57.922 [2024-02-14 19:08:35.213982] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.922 [2024-02-14 19:08:35.213990] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.922 19:08:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:57.922 19:08:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:57.922 19:08:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:57.922 19:08:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:57.922 19:08:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:57.922 19:08:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:57.922 19:08:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:57.922 19:08:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:57.922 19:08:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:57.922 19:08:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:57.922 19:08:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:57.922 19:08:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.179 19:08:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:58.180 "name": "Existed_Raid", 00:07:58.180 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:58.180 "strip_size_kb": 64, 00:07:58.180 "state": "configuring", 00:07:58.180 "raid_level": "concat", 00:07:58.180 "superblock": false, 00:07:58.180 "num_base_bdevs": 2, 00:07:58.180 "num_base_bdevs_discovered": 0, 00:07:58.180 "num_base_bdevs_operational": 2, 00:07:58.180 "base_bdevs_list": [ 00:07:58.180 { 00:07:58.180 "name": "BaseBdev1", 00:07:58.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.180 "is_configured": false, 00:07:58.180 "data_offset": 0, 00:07:58.180 "data_size": 0 00:07:58.180 }, 00:07:58.180 { 00:07:58.180 "name": "BaseBdev2", 00:07:58.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.180 "is_configured": false, 00:07:58.180 "data_offset": 0, 00:07:58.180 "data_size": 0 00:07:58.180 } 00:07:58.180 ] 00:07:58.180 }' 00:07:58.180 19:08:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:58.180 19:08:35 -- common/autotest_common.sh@10 -- # set +x 00:07:58.438 19:08:35 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:58.695 [2024-02-14 19:08:35.977929] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:58.695 [2024-02-14 19:08:35.977954] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c0c7500 name Existed_Raid, state configuring 00:07:58.695 19:08:35 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:58.954 [2024-02-14 19:08:36.153945] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:58.954 [2024-02-14 19:08:36.153990] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:58.954 [2024-02-14 19:08:36.153994] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:58.954 [2024-02-14 19:08:36.154000] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:58.954 19:08:36 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:58.954 [2024-02-14 19:08:36.363307] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.954 BaseBdev1 00:07:59.213 19:08:36 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:59.213 19:08:36 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:07:59.213 19:08:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:07:59.213 19:08:36 -- common/autotest_common.sh@887 -- # local i 00:07:59.213 19:08:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:07:59.213 19:08:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:07:59.213 19:08:36 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:59.213 19:08:36 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:59.471 [ 00:07:59.471 { 00:07:59.471 "name": "BaseBdev1", 00:07:59.471 "aliases": [ 00:07:59.471 "7453ceed-cb6c-11ee-af6b-4feeebbbadda" 00:07:59.471 ], 00:07:59.471 "product_name": "Malloc disk", 00:07:59.471 "block_size": 512, 00:07:59.471 "num_blocks": 65536, 00:07:59.471 "uuid": "7453ceed-cb6c-11ee-af6b-4feeebbbadda", 00:07:59.471 
"assigned_rate_limits": { 00:07:59.471 "rw_ios_per_sec": 0, 00:07:59.471 "rw_mbytes_per_sec": 0, 00:07:59.471 "r_mbytes_per_sec": 0, 00:07:59.471 "w_mbytes_per_sec": 0 00:07:59.471 }, 00:07:59.471 "claimed": true, 00:07:59.471 "claim_type": "exclusive_write", 00:07:59.471 "zoned": false, 00:07:59.471 "supported_io_types": { 00:07:59.471 "read": true, 00:07:59.471 "write": true, 00:07:59.471 "unmap": true, 00:07:59.471 "write_zeroes": true, 00:07:59.471 "flush": true, 00:07:59.471 "reset": true, 00:07:59.471 "compare": false, 00:07:59.471 "compare_and_write": false, 00:07:59.471 "abort": true, 00:07:59.471 "nvme_admin": false, 00:07:59.471 "nvme_io": false 00:07:59.471 }, 00:07:59.471 "memory_domains": [ 00:07:59.471 { 00:07:59.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.471 "dma_device_type": 2 00:07:59.471 } 00:07:59.471 ], 00:07:59.471 "driver_specific": {} 00:07:59.471 } 00:07:59.471 ] 00:07:59.471 19:08:36 -- common/autotest_common.sh@893 -- # return 0 00:07:59.471 19:08:36 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:59.471 19:08:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:59.471 19:08:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:59.471 19:08:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:59.471 19:08:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:59.471 19:08:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:59.471 19:08:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:59.471 19:08:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:59.471 19:08:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:59.471 19:08:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:59.471 19:08:36 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:59.471 19:08:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.040 19:08:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:00.040 "name": "Existed_Raid", 00:08:00.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.040 "strip_size_kb": 64, 00:08:00.040 "state": "configuring", 00:08:00.040 "raid_level": "concat", 00:08:00.040 "superblock": false, 00:08:00.040 "num_base_bdevs": 2, 00:08:00.040 "num_base_bdevs_discovered": 1, 00:08:00.040 "num_base_bdevs_operational": 2, 00:08:00.040 "base_bdevs_list": [ 00:08:00.040 { 00:08:00.040 "name": "BaseBdev1", 00:08:00.040 "uuid": "7453ceed-cb6c-11ee-af6b-4feeebbbadda", 00:08:00.040 "is_configured": true, 00:08:00.040 "data_offset": 0, 00:08:00.040 "data_size": 65536 00:08:00.040 }, 00:08:00.040 { 00:08:00.040 "name": "BaseBdev2", 00:08:00.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.040 "is_configured": false, 00:08:00.040 "data_offset": 0, 00:08:00.040 "data_size": 0 00:08:00.040 } 00:08:00.040 ] 00:08:00.040 }' 00:08:00.040 19:08:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:00.040 19:08:37 -- common/autotest_common.sh@10 -- # set +x 00:08:00.298 19:08:37 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:00.555 [2024-02-14 19:08:37.730066] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:00.555 [2024-02-14 19:08:37.730103] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c0c7500 name Existed_Raid, state configuring 
00:08:00.555 19:08:37 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:08:00.555 19:08:37 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:00.814 [2024-02-14 19:08:37.990100] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.814 [2024-02-14 19:08:37.991124] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.814 [2024-02-14 19:08:37.991172] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:00.814 19:08:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.072 19:08:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:01.072 "name": "Existed_Raid", 00:08:01.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.072 "strip_size_kb": 64, 00:08:01.072 "state": "configuring", 00:08:01.072 "raid_level": "concat", 00:08:01.072 "superblock": false, 00:08:01.072 "num_base_bdevs": 2, 00:08:01.072 "num_base_bdevs_discovered": 1, 00:08:01.072 "num_base_bdevs_operational": 2, 00:08:01.072 "base_bdevs_list": [ 00:08:01.072 { 00:08:01.072 "name": "BaseBdev1", 00:08:01.072 "uuid": "7453ceed-cb6c-11ee-af6b-4feeebbbadda", 00:08:01.072 "is_configured": true, 00:08:01.072 "data_offset": 0, 00:08:01.072 "data_size": 65536 00:08:01.072 }, 00:08:01.072 { 00:08:01.072 "name": "BaseBdev2", 00:08:01.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.072 "is_configured": false, 00:08:01.072 "data_offset": 0, 00:08:01.072 "data_size": 0 00:08:01.072 } 00:08:01.072 ] 00:08:01.072 }' 00:08:01.072 19:08:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:01.072 19:08:38 -- common/autotest_common.sh@10 -- # set +x 00:08:01.330 19:08:38 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:01.588 [2024-02-14 19:08:38.922343] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.588 [2024-02-14 19:08:38.922370] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c0c7a00 00:08:01.588 [2024-02-14 19:08:38.922374] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:01.589 [2024-02-14 19:08:38.922396] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c12aec0 00:08:01.589 [2024-02-14 19:08:38.922508] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c0c7a00 00:08:01.589 [2024-02-14 19:08:38.922511] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c0c7a00 00:08:01.589 [2024-02-14 19:08:38.922545] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.589 BaseBdev2 00:08:01.589 19:08:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:01.589 19:08:38 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:08:01.589 19:08:38 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:01.589 19:08:38 -- common/autotest_common.sh@887 -- # local i 00:08:01.589 19:08:38 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:01.589 19:08:38 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:01.589 19:08:38 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:01.847 19:08:39 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:02.415 [ 00:08:02.415 { 00:08:02.415 "name": "BaseBdev2", 00:08:02.415 "aliases": [ 00:08:02.415 "75da75ae-cb6c-11ee-af6b-4feeebbbadda" 00:08:02.415 ], 00:08:02.415 "product_name": "Malloc disk", 00:08:02.415 "block_size": 512, 00:08:02.415 "num_blocks": 65536, 00:08:02.415 "uuid": "75da75ae-cb6c-11ee-af6b-4feeebbbadda", 00:08:02.415 "assigned_rate_limits": { 00:08:02.415 "rw_ios_per_sec": 0, 00:08:02.415 "rw_mbytes_per_sec": 0, 00:08:02.415 "r_mbytes_per_sec": 0, 00:08:02.415 "w_mbytes_per_sec": 0 00:08:02.415 }, 00:08:02.415 "claimed": true, 00:08:02.415 "claim_type": "exclusive_write", 00:08:02.415 "zoned": false, 00:08:02.415 "supported_io_types": { 00:08:02.415 "read": true, 00:08:02.415 "write": true, 00:08:02.415 "unmap": true, 00:08:02.415 "write_zeroes": true, 00:08:02.415 "flush": true, 00:08:02.415 "reset": true, 00:08:02.415 "compare": false, 00:08:02.415 "compare_and_write": false, 00:08:02.415 "abort": true, 00:08:02.415 "nvme_admin": false, 00:08:02.415 "nvme_io": false 00:08:02.415 }, 00:08:02.415 "memory_domains": [ 00:08:02.415 { 00:08:02.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.415 "dma_device_type": 2 00:08:02.415 } 00:08:02.415 ], 00:08:02.415 "driver_specific": {} 00:08:02.415 } 00:08:02.415 ] 00:08:02.415 19:08:39 -- common/autotest_common.sh@893 -- # return 0 00:08:02.415 19:08:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:02.415 19:08:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:02.415 19:08:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:02.415 19:08:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:02.415 19:08:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:02.415 19:08:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:02.415 19:08:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:02.415 19:08:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:02.415 19:08:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:02.415 19:08:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:02.415 19:08:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:02.415 19:08:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:02.415 
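The verify_raid_bdev_state helper whose locals were just declared is, in essence, the bdev_raid_get_bdevs call piped through a jq filter followed by field comparisons. A rough, illustrative equivalent of the "online with 2 base bdevs" check it runs next (variable names here are made up; the field names come from the JSON in the trace):

    tmp=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r '.state' <<< "$tmp")                             # expect "online"
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$tmp")    # expect 2
    [ "$state" = online ] && [ "$discovered" -eq 2 ] || echo "unexpected raid state: $tmp" >&2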
19:08:39 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:02.415 19:08:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.674 19:08:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:02.674 "name": "Existed_Raid", 00:08:02.674 "uuid": "75da7d77-cb6c-11ee-af6b-4feeebbbadda", 00:08:02.674 "strip_size_kb": 64, 00:08:02.674 "state": "online", 00:08:02.674 "raid_level": "concat", 00:08:02.674 "superblock": false, 00:08:02.674 "num_base_bdevs": 2, 00:08:02.674 "num_base_bdevs_discovered": 2, 00:08:02.674 "num_base_bdevs_operational": 2, 00:08:02.674 "base_bdevs_list": [ 00:08:02.674 { 00:08:02.674 "name": "BaseBdev1", 00:08:02.674 "uuid": "7453ceed-cb6c-11ee-af6b-4feeebbbadda", 00:08:02.674 "is_configured": true, 00:08:02.674 "data_offset": 0, 00:08:02.674 "data_size": 65536 00:08:02.674 }, 00:08:02.674 { 00:08:02.674 "name": "BaseBdev2", 00:08:02.674 "uuid": "75da75ae-cb6c-11ee-af6b-4feeebbbadda", 00:08:02.674 "is_configured": true, 00:08:02.674 "data_offset": 0, 00:08:02.674 "data_size": 65536 00:08:02.674 } 00:08:02.674 ] 00:08:02.674 }' 00:08:02.674 19:08:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:02.674 19:08:39 -- common/autotest_common.sh@10 -- # set +x 00:08:02.933 19:08:40 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:03.191 [2024-02-14 19:08:40.438297] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:03.191 [2024-02-14 19:08:40.438330] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.191 [2024-02-14 19:08:40.438346] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:03.191 19:08:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:03.192 19:08:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:03.192 19:08:40 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:03.192 19:08:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.474 19:08:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:03.474 "name": "Existed_Raid", 00:08:03.474 "uuid": "75da7d77-cb6c-11ee-af6b-4feeebbbadda", 00:08:03.474 "strip_size_kb": 64, 00:08:03.474 "state": "offline", 00:08:03.474 "raid_level": "concat", 00:08:03.474 "superblock": false, 00:08:03.474 
"num_base_bdevs": 2, 00:08:03.474 "num_base_bdevs_discovered": 1, 00:08:03.474 "num_base_bdevs_operational": 1, 00:08:03.474 "base_bdevs_list": [ 00:08:03.474 { 00:08:03.474 "name": null, 00:08:03.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.474 "is_configured": false, 00:08:03.474 "data_offset": 0, 00:08:03.474 "data_size": 65536 00:08:03.474 }, 00:08:03.474 { 00:08:03.474 "name": "BaseBdev2", 00:08:03.474 "uuid": "75da75ae-cb6c-11ee-af6b-4feeebbbadda", 00:08:03.474 "is_configured": true, 00:08:03.474 "data_offset": 0, 00:08:03.474 "data_size": 65536 00:08:03.474 } 00:08:03.474 ] 00:08:03.474 }' 00:08:03.474 19:08:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:03.474 19:08:40 -- common/autotest_common.sh@10 -- # set +x 00:08:03.749 19:08:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:03.749 19:08:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:03.749 19:08:41 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:03.749 19:08:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:04.007 19:08:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:04.007 19:08:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:04.007 19:08:41 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:04.266 [2024-02-14 19:08:41.631614] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:04.266 [2024-02-14 19:08:41.631647] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c0c7a00 name Existed_Raid, state offline 00:08:04.266 19:08:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:04.266 19:08:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:04.266 19:08:41 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:04.266 19:08:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:04.525 19:08:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:04.525 19:08:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:04.525 19:08:41 -- bdev/bdev_raid.sh@287 -- # killprocess 49275 00:08:04.525 19:08:41 -- common/autotest_common.sh@924 -- # '[' -z 49275 ']' 00:08:04.525 19:08:41 -- common/autotest_common.sh@928 -- # kill -0 49275 00:08:04.525 19:08:41 -- common/autotest_common.sh@929 -- # uname 00:08:04.783 19:08:41 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:08:04.783 19:08:41 -- common/autotest_common.sh@932 -- # tail -1 00:08:04.783 19:08:41 -- common/autotest_common.sh@932 -- # ps -c -o command 49275 00:08:04.783 19:08:41 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:08:04.783 19:08:41 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:08:04.783 killing process with pid 49275 00:08:04.783 19:08:41 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 49275' 00:08:04.783 19:08:41 -- common/autotest_common.sh@943 -- # kill 49275 00:08:04.783 [2024-02-14 19:08:41.949897] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.783 [2024-02-14 19:08:41.949950] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.783 19:08:41 -- common/autotest_common.sh@948 -- # wait 49275 00:08:04.783 19:08:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:04.783 00:08:04.783 real 0m8.238s 00:08:04.783 user 0m13.922s 00:08:04.783 sys 0m1.758s 00:08:04.783 
19:08:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.783 19:08:42 -- common/autotest_common.sh@10 -- # set +x 00:08:04.783 ************************************ 00:08:04.783 END TEST raid_state_function_test 00:08:04.783 ************************************ 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:05.042 19:08:42 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:08:05.042 19:08:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:05.042 19:08:42 -- common/autotest_common.sh@10 -- # set +x 00:08:05.042 ************************************ 00:08:05.042 START TEST raid_state_function_test_sb 00:08:05.042 ************************************ 00:08:05.042 19:08:42 -- common/autotest_common.sh@1102 -- # raid_state_function_test concat 2 true 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@226 -- # raid_pid=49471 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49471' 00:08:05.042 Process raid pid: 49471 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@228 -- # waitforlisten 49471 /var/tmp/spdk-raid.sock 00:08:05.042 19:08:42 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:05.042 19:08:42 -- common/autotest_common.sh@817 -- # '[' -z 49471 ']' 00:08:05.042 19:08:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:05.042 19:08:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:05.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:05.042 19:08:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
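The superblock variant starting here differs from the previous run only in the -s flag, which makes bdev_raid_create persist RAID metadata on each member; the raid_superblock_test portion earlier in this log showed that this is what lets a re-registered base bdev be claimed again during examine. A sketch of the create call being prepared, issued once bdev_svc pid 49471 is listening on the RPC socket:

    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid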
00:08:05.042 19:08:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:05.042 19:08:42 -- common/autotest_common.sh@10 -- # set +x 00:08:05.042 [2024-02-14 19:08:42.248886] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:08:05.042 [2024-02-14 19:08:42.249157] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:05.980 EAL: TSC is not safe to use in SMP mode 00:08:05.980 EAL: TSC is not invariant 00:08:05.980 [2024-02-14 19:08:43.042775] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.980 [2024-02-14 19:08:43.174681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.980 [2024-02-14 19:08:43.175391] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.980 [2024-02-14 19:08:43.175410] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.980 19:08:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:05.980 19:08:43 -- common/autotest_common.sh@850 -- # return 0 00:08:05.980 19:08:43 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:06.239 [2024-02-14 19:08:43.440135] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:06.239 [2024-02-14 19:08:43.440206] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:06.239 [2024-02-14 19:08:43.440211] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.239 [2024-02-14 19:08:43.440219] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.239 19:08:43 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:06.239 19:08:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:06.239 19:08:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:06.239 19:08:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:06.239 19:08:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:06.239 19:08:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:06.239 19:08:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:06.239 19:08:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:06.239 19:08:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:06.239 19:08:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:06.239 19:08:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.239 19:08:43 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:06.498 19:08:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:06.498 "name": "Existed_Raid", 00:08:06.498 "uuid": "788bd7f6-cb6c-11ee-af6b-4feeebbbadda", 00:08:06.498 "strip_size_kb": 64, 00:08:06.498 "state": "configuring", 00:08:06.498 "raid_level": "concat", 00:08:06.498 "superblock": true, 00:08:06.498 "num_base_bdevs": 2, 00:08:06.498 "num_base_bdevs_discovered": 0, 00:08:06.498 "num_base_bdevs_operational": 2, 00:08:06.498 "base_bdevs_list": [ 00:08:06.498 { 00:08:06.498 "name": "BaseBdev1", 00:08:06.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.498 "is_configured": false, 00:08:06.498 "data_offset": 0, 00:08:06.498 
"data_size": 0 00:08:06.498 }, 00:08:06.498 { 00:08:06.498 "name": "BaseBdev2", 00:08:06.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.498 "is_configured": false, 00:08:06.498 "data_offset": 0, 00:08:06.498 "data_size": 0 00:08:06.498 } 00:08:06.498 ] 00:08:06.498 }' 00:08:06.498 19:08:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:06.498 19:08:43 -- common/autotest_common.sh@10 -- # set +x 00:08:07.066 19:08:44 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:07.066 [2024-02-14 19:08:44.428160] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:07.066 [2024-02-14 19:08:44.428192] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b615500 name Existed_Raid, state configuring 00:08:07.066 19:08:44 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:07.324 [2024-02-14 19:08:44.700210] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.324 [2024-02-14 19:08:44.700289] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.324 [2024-02-14 19:08:44.700294] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.324 [2024-02-14 19:08:44.700303] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.324 19:08:44 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:07.582 [2024-02-14 19:08:44.981447] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.582 BaseBdev1 00:08:07.839 19:08:45 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:07.839 19:08:45 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:08:07.839 19:08:45 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:07.839 19:08:45 -- common/autotest_common.sh@887 -- # local i 00:08:07.839 19:08:45 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:07.839 19:08:45 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:07.839 19:08:45 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:08.097 19:08:45 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.355 [ 00:08:08.355 { 00:08:08.355 "name": "BaseBdev1", 00:08:08.355 "aliases": [ 00:08:08.355 "7976d7a5-cb6c-11ee-af6b-4feeebbbadda" 00:08:08.355 ], 00:08:08.355 "product_name": "Malloc disk", 00:08:08.355 "block_size": 512, 00:08:08.355 "num_blocks": 65536, 00:08:08.355 "uuid": "7976d7a5-cb6c-11ee-af6b-4feeebbbadda", 00:08:08.355 "assigned_rate_limits": { 00:08:08.355 "rw_ios_per_sec": 0, 00:08:08.355 "rw_mbytes_per_sec": 0, 00:08:08.355 "r_mbytes_per_sec": 0, 00:08:08.355 "w_mbytes_per_sec": 0 00:08:08.355 }, 00:08:08.355 "claimed": true, 00:08:08.355 "claim_type": "exclusive_write", 00:08:08.355 "zoned": false, 00:08:08.355 "supported_io_types": { 00:08:08.355 "read": true, 00:08:08.355 "write": true, 00:08:08.355 "unmap": true, 00:08:08.355 "write_zeroes": true, 00:08:08.355 "flush": true, 00:08:08.355 "reset": true, 00:08:08.355 "compare": false, 
00:08:08.355 "compare_and_write": false, 00:08:08.355 "abort": true, 00:08:08.355 "nvme_admin": false, 00:08:08.355 "nvme_io": false 00:08:08.355 }, 00:08:08.355 "memory_domains": [ 00:08:08.355 { 00:08:08.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.355 "dma_device_type": 2 00:08:08.355 } 00:08:08.355 ], 00:08:08.355 "driver_specific": {} 00:08:08.355 } 00:08:08.355 ] 00:08:08.355 19:08:45 -- common/autotest_common.sh@893 -- # return 0 00:08:08.355 19:08:45 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:08.355 19:08:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:08.355 19:08:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:08.355 19:08:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:08.355 19:08:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:08.355 19:08:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:08.355 19:08:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:08.355 19:08:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:08.355 19:08:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:08.355 19:08:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:08.355 19:08:45 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:08.355 19:08:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.614 19:08:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:08.614 "name": "Existed_Raid", 00:08:08.614 "uuid": "794c1da9-cb6c-11ee-af6b-4feeebbbadda", 00:08:08.614 "strip_size_kb": 64, 00:08:08.614 "state": "configuring", 00:08:08.614 "raid_level": "concat", 00:08:08.614 "superblock": true, 00:08:08.614 "num_base_bdevs": 2, 00:08:08.614 "num_base_bdevs_discovered": 1, 00:08:08.614 "num_base_bdevs_operational": 2, 00:08:08.614 "base_bdevs_list": [ 00:08:08.614 { 00:08:08.614 "name": "BaseBdev1", 00:08:08.614 "uuid": "7976d7a5-cb6c-11ee-af6b-4feeebbbadda", 00:08:08.614 "is_configured": true, 00:08:08.614 "data_offset": 2048, 00:08:08.614 "data_size": 63488 00:08:08.614 }, 00:08:08.614 { 00:08:08.614 "name": "BaseBdev2", 00:08:08.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.614 "is_configured": false, 00:08:08.614 "data_offset": 0, 00:08:08.614 "data_size": 0 00:08:08.614 } 00:08:08.614 ] 00:08:08.614 }' 00:08:08.614 19:08:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:08.614 19:08:45 -- common/autotest_common.sh@10 -- # set +x 00:08:08.872 19:08:46 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:09.131 [2024-02-14 19:08:46.396288] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:09.131 [2024-02-14 19:08:46.396329] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b615500 name Existed_Raid, state configuring 00:08:09.131 19:08:46 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:08:09.132 19:08:46 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:09.390 19:08:46 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:09.649 BaseBdev1 00:08:09.649 19:08:46 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:08:09.649 19:08:46 -- 
common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:08:09.649 19:08:46 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:09.649 19:08:46 -- common/autotest_common.sh@887 -- # local i 00:08:09.649 19:08:46 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:09.649 19:08:46 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:09.649 19:08:46 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:09.907 19:08:47 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:10.166 [ 00:08:10.166 { 00:08:10.166 "name": "BaseBdev1", 00:08:10.166 "aliases": [ 00:08:10.166 "7aa71b91-cb6c-11ee-af6b-4feeebbbadda" 00:08:10.166 ], 00:08:10.166 "product_name": "Malloc disk", 00:08:10.166 "block_size": 512, 00:08:10.166 "num_blocks": 65536, 00:08:10.166 "uuid": "7aa71b91-cb6c-11ee-af6b-4feeebbbadda", 00:08:10.166 "assigned_rate_limits": { 00:08:10.166 "rw_ios_per_sec": 0, 00:08:10.166 "rw_mbytes_per_sec": 0, 00:08:10.166 "r_mbytes_per_sec": 0, 00:08:10.166 "w_mbytes_per_sec": 0 00:08:10.166 }, 00:08:10.166 "claimed": false, 00:08:10.166 "zoned": false, 00:08:10.166 "supported_io_types": { 00:08:10.166 "read": true, 00:08:10.166 "write": true, 00:08:10.166 "unmap": true, 00:08:10.166 "write_zeroes": true, 00:08:10.166 "flush": true, 00:08:10.166 "reset": true, 00:08:10.166 "compare": false, 00:08:10.166 "compare_and_write": false, 00:08:10.166 "abort": true, 00:08:10.166 "nvme_admin": false, 00:08:10.166 "nvme_io": false 00:08:10.166 }, 00:08:10.166 "memory_domains": [ 00:08:10.166 { 00:08:10.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.166 "dma_device_type": 2 00:08:10.166 } 00:08:10.166 ], 00:08:10.166 "driver_specific": {} 00:08:10.166 } 00:08:10.166 ] 00:08:10.166 19:08:47 -- common/autotest_common.sh@893 -- # return 0 00:08:10.166 19:08:47 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:10.425 [2024-02-14 19:08:47.734303] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:10.425 [2024-02-14 19:08:47.735025] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.425 [2024-02-14 19:08:47.735069] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@127 
-- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:10.425 19:08:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.693 19:08:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:10.693 "name": "Existed_Raid", 00:08:10.693 "uuid": "7b1b14d2-cb6c-11ee-af6b-4feeebbbadda", 00:08:10.693 "strip_size_kb": 64, 00:08:10.693 "state": "configuring", 00:08:10.693 "raid_level": "concat", 00:08:10.693 "superblock": true, 00:08:10.693 "num_base_bdevs": 2, 00:08:10.693 "num_base_bdevs_discovered": 1, 00:08:10.693 "num_base_bdevs_operational": 2, 00:08:10.693 "base_bdevs_list": [ 00:08:10.693 { 00:08:10.693 "name": "BaseBdev1", 00:08:10.693 "uuid": "7aa71b91-cb6c-11ee-af6b-4feeebbbadda", 00:08:10.693 "is_configured": true, 00:08:10.693 "data_offset": 2048, 00:08:10.693 "data_size": 63488 00:08:10.693 }, 00:08:10.693 { 00:08:10.693 "name": "BaseBdev2", 00:08:10.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.693 "is_configured": false, 00:08:10.693 "data_offset": 0, 00:08:10.693 "data_size": 0 00:08:10.693 } 00:08:10.693 ] 00:08:10.693 }' 00:08:10.693 19:08:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:10.693 19:08:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.963 19:08:48 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:11.222 [2024-02-14 19:08:48.562491] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.222 [2024-02-14 19:08:48.562566] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b615a00 00:08:11.222 [2024-02-14 19:08:48.562571] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:11.222 [2024-02-14 19:08:48.562589] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b678ec0 00:08:11.222 [2024-02-14 19:08:48.562625] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b615a00 00:08:11.222 [2024-02-14 19:08:48.562629] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b615a00 00:08:11.222 [2024-02-14 19:08:48.562644] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.222 BaseBdev2 00:08:11.222 19:08:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:11.222 19:08:48 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:08:11.222 19:08:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:11.222 19:08:48 -- common/autotest_common.sh@887 -- # local i 00:08:11.222 19:08:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:11.222 19:08:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:11.222 19:08:48 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:11.481 19:08:48 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:11.740 [ 00:08:11.740 { 00:08:11.740 "name": "BaseBdev2", 00:08:11.740 "aliases": [ 00:08:11.740 "7b996e96-cb6c-11ee-af6b-4feeebbbadda" 00:08:11.740 ], 00:08:11.740 "product_name": "Malloc disk", 00:08:11.740 "block_size": 512, 00:08:11.740 "num_blocks": 65536, 00:08:11.740 "uuid": "7b996e96-cb6c-11ee-af6b-4feeebbbadda", 00:08:11.740 "assigned_rate_limits": { 00:08:11.740 "rw_ios_per_sec": 0, 
00:08:11.740 "rw_mbytes_per_sec": 0, 00:08:11.740 "r_mbytes_per_sec": 0, 00:08:11.740 "w_mbytes_per_sec": 0 00:08:11.740 }, 00:08:11.740 "claimed": true, 00:08:11.740 "claim_type": "exclusive_write", 00:08:11.740 "zoned": false, 00:08:11.740 "supported_io_types": { 00:08:11.740 "read": true, 00:08:11.740 "write": true, 00:08:11.740 "unmap": true, 00:08:11.740 "write_zeroes": true, 00:08:11.740 "flush": true, 00:08:11.740 "reset": true, 00:08:11.740 "compare": false, 00:08:11.740 "compare_and_write": false, 00:08:11.740 "abort": true, 00:08:11.740 "nvme_admin": false, 00:08:11.740 "nvme_io": false 00:08:11.740 }, 00:08:11.740 "memory_domains": [ 00:08:11.740 { 00:08:11.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.740 "dma_device_type": 2 00:08:11.740 } 00:08:11.740 ], 00:08:11.740 "driver_specific": {} 00:08:11.740 } 00:08:11.740 ] 00:08:11.740 19:08:49 -- common/autotest_common.sh@893 -- # return 0 00:08:11.740 19:08:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:11.740 19:08:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:11.740 19:08:49 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:11.999 19:08:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:11.999 19:08:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:11.999 19:08:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:11.999 19:08:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:11.999 19:08:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:11.999 19:08:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:11.999 19:08:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:11.999 19:08:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:11.999 19:08:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:11.999 19:08:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.999 19:08:49 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:12.259 19:08:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:12.259 "name": "Existed_Raid", 00:08:12.259 "uuid": "7b1b14d2-cb6c-11ee-af6b-4feeebbbadda", 00:08:12.259 "strip_size_kb": 64, 00:08:12.259 "state": "online", 00:08:12.259 "raid_level": "concat", 00:08:12.259 "superblock": true, 00:08:12.259 "num_base_bdevs": 2, 00:08:12.259 "num_base_bdevs_discovered": 2, 00:08:12.259 "num_base_bdevs_operational": 2, 00:08:12.259 "base_bdevs_list": [ 00:08:12.259 { 00:08:12.259 "name": "BaseBdev1", 00:08:12.259 "uuid": "7aa71b91-cb6c-11ee-af6b-4feeebbbadda", 00:08:12.259 "is_configured": true, 00:08:12.259 "data_offset": 2048, 00:08:12.259 "data_size": 63488 00:08:12.259 }, 00:08:12.259 { 00:08:12.259 "name": "BaseBdev2", 00:08:12.259 "uuid": "7b996e96-cb6c-11ee-af6b-4feeebbbadda", 00:08:12.259 "is_configured": true, 00:08:12.259 "data_offset": 2048, 00:08:12.259 "data_size": 63488 00:08:12.259 } 00:08:12.259 ] 00:08:12.259 }' 00:08:12.259 19:08:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:12.259 19:08:49 -- common/autotest_common.sh@10 -- # set +x 00:08:12.518 19:08:49 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:12.776 [2024-02-14 19:08:49.966421] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:12.776 [2024-02-14 19:08:49.966461] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:08:12.776 [2024-02-14 19:08:49.966479] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:12.776 19:08:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:12.777 19:08:49 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:12.777 19:08:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.035 19:08:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:13.035 "name": "Existed_Raid", 00:08:13.035 "uuid": "7b1b14d2-cb6c-11ee-af6b-4feeebbbadda", 00:08:13.035 "strip_size_kb": 64, 00:08:13.035 "state": "offline", 00:08:13.035 "raid_level": "concat", 00:08:13.035 "superblock": true, 00:08:13.035 "num_base_bdevs": 2, 00:08:13.035 "num_base_bdevs_discovered": 1, 00:08:13.035 "num_base_bdevs_operational": 1, 00:08:13.035 "base_bdevs_list": [ 00:08:13.035 { 00:08:13.035 "name": null, 00:08:13.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.035 "is_configured": false, 00:08:13.035 "data_offset": 2048, 00:08:13.035 "data_size": 63488 00:08:13.035 }, 00:08:13.035 { 00:08:13.035 "name": "BaseBdev2", 00:08:13.035 "uuid": "7b996e96-cb6c-11ee-af6b-4feeebbbadda", 00:08:13.035 "is_configured": true, 00:08:13.035 "data_offset": 2048, 00:08:13.035 "data_size": 63488 00:08:13.035 } 00:08:13.035 ] 00:08:13.035 }' 00:08:13.035 19:08:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:13.036 19:08:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.294 19:08:50 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:13.294 19:08:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:13.294 19:08:50 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:13.294 19:08:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:13.553 19:08:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:13.553 19:08:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.553 19:08:50 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:13.811 [2024-02-14 19:08:51.007859] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.811 [2024-02-14 19:08:51.007897] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b615a00 name 
Existed_Raid, state offline 00:08:13.811 19:08:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:13.811 19:08:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:13.811 19:08:51 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:13.811 19:08:51 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:14.070 19:08:51 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:14.070 19:08:51 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:14.070 19:08:51 -- bdev/bdev_raid.sh@287 -- # killprocess 49471 00:08:14.070 19:08:51 -- common/autotest_common.sh@924 -- # '[' -z 49471 ']' 00:08:14.070 19:08:51 -- common/autotest_common.sh@928 -- # kill -0 49471 00:08:14.070 19:08:51 -- common/autotest_common.sh@929 -- # uname 00:08:14.070 19:08:51 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:08:14.070 19:08:51 -- common/autotest_common.sh@932 -- # ps -c -o command 49471 00:08:14.070 19:08:51 -- common/autotest_common.sh@932 -- # tail -1 00:08:14.070 19:08:51 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:08:14.070 19:08:51 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:08:14.070 killing process with pid 49471 00:08:14.070 19:08:51 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 49471' 00:08:14.070 19:08:51 -- common/autotest_common.sh@943 -- # kill 49471 00:08:14.070 [2024-02-14 19:08:51.303829] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.070 [2024-02-14 19:08:51.303892] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:14.070 19:08:51 -- common/autotest_common.sh@948 -- # wait 49471 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:14.329 00:08:14.329 real 0m9.313s 00:08:14.329 user 0m15.866s 00:08:14.329 sys 0m1.927s 00:08:14.329 19:08:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:14.329 19:08:51 -- common/autotest_common.sh@10 -- # set +x 00:08:14.329 ************************************ 00:08:14.329 END TEST raid_state_function_test_sb 00:08:14.329 ************************************ 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:14.329 19:08:51 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:08:14.329 19:08:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:14.329 19:08:51 -- common/autotest_common.sh@10 -- # set +x 00:08:14.329 ************************************ 00:08:14.329 START TEST raid_superblock_test 00:08:14.329 ************************************ 00:08:14.329 19:08:51 -- common/autotest_common.sh@1102 -- # raid_superblock_test concat 2 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 
00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@357 -- # raid_pid=49670 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@358 -- # waitforlisten 49670 /var/tmp/spdk-raid.sock 00:08:14.329 19:08:51 -- common/autotest_common.sh@817 -- # '[' -z 49670 ']' 00:08:14.329 19:08:51 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:14.329 19:08:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:14.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:14.329 19:08:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:14.329 19:08:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:14.329 19:08:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:14.329 19:08:51 -- common/autotest_common.sh@10 -- # set +x 00:08:14.329 [2024-02-14 19:08:51.603725] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:08:14.329 [2024-02-14 19:08:51.604070] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:15.266 EAL: TSC is not safe to use in SMP mode 00:08:15.266 EAL: TSC is not invariant 00:08:15.266 [2024-02-14 19:08:52.345468] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.266 [2024-02-14 19:08:52.462567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.266 [2024-02-14 19:08:52.463101] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.266 [2024-02-14 19:08:52.463107] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.266 19:08:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:15.266 19:08:52 -- common/autotest_common.sh@850 -- # return 0 00:08:15.266 19:08:52 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:08:15.266 19:08:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:15.266 19:08:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:08:15.266 19:08:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:08:15.266 19:08:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:15.266 19:08:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:15.266 19:08:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:15.266 19:08:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:15.266 19:08:52 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:15.525 malloc1 00:08:15.525 19:08:52 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:15.783 [2024-02-14 19:08:53.090854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:15.783 [2024-02-14 19:08:53.090927] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.783 [2024-02-14 19:08:53.091607] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b2d4780 00:08:15.783 [2024-02-14 19:08:53.091645] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.783 [2024-02-14 19:08:53.092876] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.783 [2024-02-14 19:08:53.092906] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:15.783 pt1 00:08:15.783 19:08:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:15.783 19:08:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:15.783 19:08:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:08:15.783 19:08:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:08:15.783 19:08:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:15.783 19:08:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:15.783 19:08:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:15.783 19:08:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:15.783 19:08:53 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:16.062 malloc2 00:08:16.062 19:08:53 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:16.325 [2024-02-14 19:08:53.646866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:16.325 [2024-02-14 19:08:53.646935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.325 [2024-02-14 19:08:53.646971] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b2d4c80 00:08:16.325 [2024-02-14 19:08:53.646980] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.325 [2024-02-14 19:08:53.647822] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.325 [2024-02-14 19:08:53.647852] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:16.325 pt2 00:08:16.325 19:08:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:16.325 19:08:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:16.325 19:08:53 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:08:16.583 [2024-02-14 19:08:53.934899] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:16.583 [2024-02-14 19:08:53.935654] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:16.583 [2024-02-14 19:08:53.935716] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b2d4f00 00:08:16.584 [2024-02-14 19:08:53.935721] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:16.584 [2024-02-14 19:08:53.935761] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b337e20 00:08:16.584 [2024-02-14 19:08:53.935840] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b2d4f00 00:08:16.584 [2024-02-14 19:08:53.935844] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x82b2d4f00 00:08:16.584 [2024-02-14 19:08:53.935878] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.584 19:08:53 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:16.584 19:08:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:16.584 19:08:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:16.584 19:08:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:16.584 19:08:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:16.584 19:08:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:16.584 19:08:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:16.584 19:08:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:16.584 19:08:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:16.584 19:08:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:16.584 19:08:53 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:16.584 19:08:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.842 19:08:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:16.842 "name": "raid_bdev1", 00:08:16.842 "uuid": "7ecd3790-cb6c-11ee-af6b-4feeebbbadda", 00:08:16.842 "strip_size_kb": 64, 00:08:16.842 "state": "online", 00:08:16.842 "raid_level": "concat", 00:08:16.842 "superblock": true, 00:08:16.842 "num_base_bdevs": 2, 00:08:16.842 "num_base_bdevs_discovered": 2, 00:08:16.842 "num_base_bdevs_operational": 2, 00:08:16.842 "base_bdevs_list": [ 00:08:16.842 { 00:08:16.842 "name": "pt1", 00:08:16.842 "uuid": "8763e766-f2bc-b951-99af-cc566d1c9ce6", 00:08:16.842 "is_configured": true, 00:08:16.842 "data_offset": 2048, 00:08:16.842 "data_size": 63488 00:08:16.842 }, 00:08:16.842 { 00:08:16.842 "name": "pt2", 00:08:16.842 "uuid": "09be0075-e14c-e159-a70c-14ad2df662ca", 00:08:16.842 "is_configured": true, 00:08:16.842 "data_offset": 2048, 00:08:16.842 "data_size": 63488 00:08:16.842 } 00:08:16.842 ] 00:08:16.842 }' 00:08:16.842 19:08:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:16.842 19:08:54 -- common/autotest_common.sh@10 -- # set +x 00:08:17.472 19:08:54 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:17.472 19:08:54 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:08:17.472 [2024-02-14 19:08:54.842989] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.472 19:08:54 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=7ecd3790-cb6c-11ee-af6b-4feeebbbadda 00:08:17.472 19:08:54 -- bdev/bdev_raid.sh@380 -- # '[' -z 7ecd3790-cb6c-11ee-af6b-4feeebbbadda ']' 00:08:17.472 19:08:54 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:17.732 [2024-02-14 19:08:55.046925] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:17.732 [2024-02-14 19:08:55.046956] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.732 [2024-02-14 19:08:55.046978] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.732 [2024-02-14 19:08:55.046992] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.732 [2024-02-14 19:08:55.046996] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x82b2d4f00 name raid_bdev1, state offline 00:08:17.732 19:08:55 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:08:17.732 19:08:55 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:17.990 19:08:55 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:08:17.990 19:08:55 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:08:17.990 19:08:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:17.990 19:08:55 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:18.248 19:08:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:18.248 19:08:55 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:18.506 19:08:55 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:18.506 19:08:55 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:18.765 19:08:56 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:08:18.765 19:08:56 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:18.765 19:08:56 -- common/autotest_common.sh@638 -- # local es=0 00:08:18.765 19:08:56 -- common/autotest_common.sh@640 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:18.765 19:08:56 -- common/autotest_common.sh@626 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.765 19:08:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:18.765 19:08:56 -- common/autotest_common.sh@630 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.765 19:08:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:18.765 19:08:56 -- common/autotest_common.sh@632 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.765 19:08:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:18.765 19:08:56 -- common/autotest_common.sh@632 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:18.765 19:08:56 -- common/autotest_common.sh@632 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:18.765 19:08:56 -- common/autotest_common.sh@641 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:19.025 [2024-02-14 19:08:56.250994] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:19.025 [2024-02-14 19:08:56.251736] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:19.025 [2024-02-14 19:08:56.251759] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:08:19.025 [2024-02-14 19:08:56.251797] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:08:19.025 [2024-02-14 19:08:56.251806] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.025 [2024-02-14 19:08:56.251810] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b2d4c80 name raid_bdev1, state 
configuring 00:08:19.025 request: 00:08:19.025 { 00:08:19.025 "name": "raid_bdev1", 00:08:19.025 "raid_level": "concat", 00:08:19.025 "base_bdevs": [ 00:08:19.025 "malloc1", 00:08:19.025 "malloc2" 00:08:19.025 ], 00:08:19.025 "superblock": false, 00:08:19.025 "strip_size_kb": 64, 00:08:19.025 "method": "bdev_raid_create", 00:08:19.025 "req_id": 1 00:08:19.025 } 00:08:19.025 Got JSON-RPC error response 00:08:19.025 response: 00:08:19.025 { 00:08:19.025 "code": -17, 00:08:19.025 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:19.025 } 00:08:19.025 19:08:56 -- common/autotest_common.sh@641 -- # es=1 00:08:19.025 19:08:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:19.025 19:08:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:19.025 19:08:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:19.025 19:08:56 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:19.025 19:08:56 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:19.284 [2024-02-14 19:08:56.638996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:19.284 [2024-02-14 19:08:56.639060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.284 [2024-02-14 19:08:56.639096] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b2d4780 00:08:19.284 [2024-02-14 19:08:56.639103] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.284 [2024-02-14 19:08:56.639906] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.284 [2024-02-14 19:08:56.639939] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:19.284 [2024-02-14 19:08:56.639961] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:08:19.284 [2024-02-14 19:08:56.639972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:19.284 pt1 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:19.284 19:08:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.543 19:08:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:19.543 "name": "raid_bdev1", 
00:08:19.543 "uuid": "7ecd3790-cb6c-11ee-af6b-4feeebbbadda", 00:08:19.543 "strip_size_kb": 64, 00:08:19.543 "state": "configuring", 00:08:19.543 "raid_level": "concat", 00:08:19.543 "superblock": true, 00:08:19.543 "num_base_bdevs": 2, 00:08:19.543 "num_base_bdevs_discovered": 1, 00:08:19.543 "num_base_bdevs_operational": 2, 00:08:19.543 "base_bdevs_list": [ 00:08:19.543 { 00:08:19.543 "name": "pt1", 00:08:19.543 "uuid": "8763e766-f2bc-b951-99af-cc566d1c9ce6", 00:08:19.543 "is_configured": true, 00:08:19.543 "data_offset": 2048, 00:08:19.543 "data_size": 63488 00:08:19.543 }, 00:08:19.543 { 00:08:19.543 "name": null, 00:08:19.543 "uuid": "09be0075-e14c-e159-a70c-14ad2df662ca", 00:08:19.543 "is_configured": false, 00:08:19.543 "data_offset": 2048, 00:08:19.543 "data_size": 63488 00:08:19.543 } 00:08:19.543 ] 00:08:19.543 }' 00:08:19.543 19:08:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:19.543 19:08:56 -- common/autotest_common.sh@10 -- # set +x 00:08:19.801 19:08:57 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:08:19.801 19:08:57 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:08:19.801 19:08:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:19.801 19:08:57 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.059 [2024-02-14 19:08:57.359033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.059 [2024-02-14 19:08:57.359096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.059 [2024-02-14 19:08:57.359130] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b2d4f00 00:08:20.059 [2024-02-14 19:08:57.359138] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.059 [2024-02-14 19:08:57.359264] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.059 [2024-02-14 19:08:57.359272] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.059 [2024-02-14 19:08:57.359292] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:20.059 [2024-02-14 19:08:57.359300] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:20.059 [2024-02-14 19:08:57.359325] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b2d5180 00:08:20.059 [2024-02-14 19:08:57.359328] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:20.059 [2024-02-14 19:08:57.359344] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b337e20 00:08:20.059 [2024-02-14 19:08:57.359389] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b2d5180 00:08:20.059 [2024-02-14 19:08:57.359392] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b2d5180 00:08:20.059 [2024-02-14 19:08:57.359408] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.059 pt2 00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:20.059 19:08:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.317 19:08:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:20.317 "name": "raid_bdev1", 00:08:20.317 "uuid": "7ecd3790-cb6c-11ee-af6b-4feeebbbadda", 00:08:20.317 "strip_size_kb": 64, 00:08:20.317 "state": "online", 00:08:20.317 "raid_level": "concat", 00:08:20.317 "superblock": true, 00:08:20.317 "num_base_bdevs": 2, 00:08:20.317 "num_base_bdevs_discovered": 2, 00:08:20.317 "num_base_bdevs_operational": 2, 00:08:20.317 "base_bdevs_list": [ 00:08:20.317 { 00:08:20.317 "name": "pt1", 00:08:20.317 "uuid": "8763e766-f2bc-b951-99af-cc566d1c9ce6", 00:08:20.317 "is_configured": true, 00:08:20.317 "data_offset": 2048, 00:08:20.317 "data_size": 63488 00:08:20.317 }, 00:08:20.317 { 00:08:20.317 "name": "pt2", 00:08:20.317 "uuid": "09be0075-e14c-e159-a70c-14ad2df662ca", 00:08:20.317 "is_configured": true, 00:08:20.317 "data_offset": 2048, 00:08:20.317 "data_size": 63488 00:08:20.317 } 00:08:20.317 ] 00:08:20.317 }' 00:08:20.317 19:08:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:20.317 19:08:57 -- common/autotest_common.sh@10 -- # set +x 00:08:20.576 19:08:57 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:20.576 19:08:57 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:08:20.835 [2024-02-14 19:08:58.139099] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:20.835 19:08:58 -- bdev/bdev_raid.sh@430 -- # '[' 7ecd3790-cb6c-11ee-af6b-4feeebbbadda '!=' 7ecd3790-cb6c-11ee-af6b-4feeebbbadda ']' 00:08:20.835 19:08:58 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:08:20.835 19:08:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:20.835 19:08:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:20.835 19:08:58 -- bdev/bdev_raid.sh@511 -- # killprocess 49670 00:08:20.835 19:08:58 -- common/autotest_common.sh@924 -- # '[' -z 49670 ']' 00:08:20.835 19:08:58 -- common/autotest_common.sh@928 -- # kill -0 49670 00:08:20.835 19:08:58 -- common/autotest_common.sh@929 -- # uname 00:08:20.835 19:08:58 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:08:20.835 19:08:58 -- common/autotest_common.sh@932 -- # ps -c -o command 49670 00:08:20.835 19:08:58 -- common/autotest_common.sh@932 -- # tail -1 00:08:20.835 19:08:58 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:08:20.835 19:08:58 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:08:20.835 killing process with pid 49670 00:08:20.835 19:08:58 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 49670' 00:08:20.835 19:08:58 -- common/autotest_common.sh@943 -- # kill 49670 00:08:20.835 [2024-02-14 19:08:58.175280] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.835 
19:08:58 -- common/autotest_common.sh@948 -- # wait 49670 00:08:20.835 [2024-02-14 19:08:58.175319] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.835 [2024-02-14 19:08:58.175335] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.835 [2024-02-14 19:08:58.175340] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b2d5180 name raid_bdev1, state offline 00:08:20.835 [2024-02-14 19:08:58.193956] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@513 -- # return 0 00:08:21.094 00:08:21.094 real 0m6.837s 00:08:21.094 user 0m11.168s 00:08:21.094 sys 0m1.719s 00:08:21.094 19:08:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.094 19:08:58 -- common/autotest_common.sh@10 -- # set +x 00:08:21.094 ************************************ 00:08:21.094 END TEST raid_superblock_test 00:08:21.094 ************************************ 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:21.094 19:08:58 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:08:21.094 19:08:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:21.094 19:08:58 -- common/autotest_common.sh@10 -- # set +x 00:08:21.094 ************************************ 00:08:21.094 START TEST raid_state_function_test 00:08:21.094 ************************************ 00:08:21.094 19:08:58 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid1 2 false 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@226 -- # raid_pid=49815 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49815' 00:08:21.094 Process raid pid: 49815 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@228 -- # 
waitforlisten 49815 /var/tmp/spdk-raid.sock 00:08:21.094 19:08:58 -- common/autotest_common.sh@817 -- # '[' -z 49815 ']' 00:08:21.094 19:08:58 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:21.094 19:08:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:21.094 19:08:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:21.094 19:08:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:21.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:21.094 19:08:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:21.094 19:08:58 -- common/autotest_common.sh@10 -- # set +x 00:08:21.094 [2024-02-14 19:08:58.489874] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:08:21.094 [2024-02-14 19:08:58.490220] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:22.030 EAL: TSC is not safe to use in SMP mode 00:08:22.030 EAL: TSC is not invariant 00:08:22.030 [2024-02-14 19:08:59.254234] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.030 [2024-02-14 19:08:59.380798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.030 [2024-02-14 19:08:59.381265] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.030 [2024-02-14 19:08:59.381269] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.288 19:08:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:22.288 19:08:59 -- common/autotest_common.sh@850 -- # return 0 00:08:22.288 19:08:59 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:22.546 [2024-02-14 19:08:59.744677] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.546 [2024-02-14 19:08:59.744753] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.546 [2024-02-14 19:08:59.744758] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.546 [2024-02-14 19:08:59.744766] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.546 19:08:59 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:22.546 19:08:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:22.546 19:08:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:22.546 19:08:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:22.546 19:08:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:22.546 19:08:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:22.546 19:08:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:22.546 19:08:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:22.546 19:08:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:22.546 19:08:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:22.546 19:08:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.546 19:08:59 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:22.804 19:09:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:22.804 "name": "Existed_Raid", 00:08:22.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.804 "strip_size_kb": 0, 00:08:22.804 "state": "configuring", 00:08:22.804 "raid_level": "raid1", 00:08:22.804 "superblock": false, 00:08:22.804 "num_base_bdevs": 2, 00:08:22.804 "num_base_bdevs_discovered": 0, 00:08:22.804 "num_base_bdevs_operational": 2, 00:08:22.804 "base_bdevs_list": [ 00:08:22.804 { 00:08:22.804 "name": "BaseBdev1", 00:08:22.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.804 "is_configured": false, 00:08:22.804 "data_offset": 0, 00:08:22.804 "data_size": 0 00:08:22.804 }, 00:08:22.804 { 00:08:22.805 "name": "BaseBdev2", 00:08:22.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.805 "is_configured": false, 00:08:22.805 "data_offset": 0, 00:08:22.805 "data_size": 0 00:08:22.805 } 00:08:22.805 ] 00:08:22.805 }' 00:08:22.805 19:09:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:22.805 19:09:00 -- common/autotest_common.sh@10 -- # set +x 00:08:23.063 19:09:00 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:23.063 [2024-02-14 19:09:00.448693] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.063 [2024-02-14 19:09:00.448730] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d461500 name Existed_Raid, state configuring 00:08:23.063 19:09:00 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:23.321 [2024-02-14 19:09:00.644719] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.321 [2024-02-14 19:09:00.644785] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.321 [2024-02-14 19:09:00.644790] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.321 [2024-02-14 19:09:00.644798] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.321 19:09:00 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:23.579 [2024-02-14 19:09:00.894001] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.579 BaseBdev1 00:08:23.579 19:09:00 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:23.579 19:09:00 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:08:23.579 19:09:00 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:23.579 19:09:00 -- common/autotest_common.sh@887 -- # local i 00:08:23.579 19:09:00 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:23.579 19:09:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:23.579 19:09:00 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:23.837 19:09:01 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:24.095 [ 00:08:24.095 { 00:08:24.095 "name": "BaseBdev1", 00:08:24.095 "aliases": [ 00:08:24.095 "82f2e6ab-cb6c-11ee-af6b-4feeebbbadda" 00:08:24.095 ], 00:08:24.095 
"product_name": "Malloc disk", 00:08:24.095 "block_size": 512, 00:08:24.095 "num_blocks": 65536, 00:08:24.095 "uuid": "82f2e6ab-cb6c-11ee-af6b-4feeebbbadda", 00:08:24.095 "assigned_rate_limits": { 00:08:24.095 "rw_ios_per_sec": 0, 00:08:24.095 "rw_mbytes_per_sec": 0, 00:08:24.095 "r_mbytes_per_sec": 0, 00:08:24.095 "w_mbytes_per_sec": 0 00:08:24.095 }, 00:08:24.095 "claimed": true, 00:08:24.095 "claim_type": "exclusive_write", 00:08:24.095 "zoned": false, 00:08:24.095 "supported_io_types": { 00:08:24.095 "read": true, 00:08:24.095 "write": true, 00:08:24.095 "unmap": true, 00:08:24.095 "write_zeroes": true, 00:08:24.095 "flush": true, 00:08:24.095 "reset": true, 00:08:24.095 "compare": false, 00:08:24.095 "compare_and_write": false, 00:08:24.095 "abort": true, 00:08:24.095 "nvme_admin": false, 00:08:24.095 "nvme_io": false 00:08:24.095 }, 00:08:24.095 "memory_domains": [ 00:08:24.095 { 00:08:24.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.095 "dma_device_type": 2 00:08:24.095 } 00:08:24.095 ], 00:08:24.095 "driver_specific": {} 00:08:24.095 } 00:08:24.095 ] 00:08:24.095 19:09:01 -- common/autotest_common.sh@893 -- # return 0 00:08:24.095 19:09:01 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:24.095 19:09:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:24.095 19:09:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:24.095 19:09:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:24.095 19:09:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:24.095 19:09:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:24.095 19:09:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:24.095 19:09:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:24.095 19:09:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:24.095 19:09:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:24.095 19:09:01 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.095 19:09:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.361 19:09:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:24.361 "name": "Existed_Raid", 00:08:24.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.361 "strip_size_kb": 0, 00:08:24.361 "state": "configuring", 00:08:24.361 "raid_level": "raid1", 00:08:24.361 "superblock": false, 00:08:24.361 "num_base_bdevs": 2, 00:08:24.361 "num_base_bdevs_discovered": 1, 00:08:24.361 "num_base_bdevs_operational": 2, 00:08:24.361 "base_bdevs_list": [ 00:08:24.361 { 00:08:24.361 "name": "BaseBdev1", 00:08:24.361 "uuid": "82f2e6ab-cb6c-11ee-af6b-4feeebbbadda", 00:08:24.361 "is_configured": true, 00:08:24.361 "data_offset": 0, 00:08:24.361 "data_size": 65536 00:08:24.361 }, 00:08:24.361 { 00:08:24.361 "name": "BaseBdev2", 00:08:24.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.361 "is_configured": false, 00:08:24.361 "data_offset": 0, 00:08:24.361 "data_size": 0 00:08:24.361 } 00:08:24.361 ] 00:08:24.361 }' 00:08:24.361 19:09:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:24.361 19:09:01 -- common/autotest_common.sh@10 -- # set +x 00:08:24.661 19:09:01 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:24.918 [2024-02-14 19:09:02.124779] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:08:24.918 [2024-02-14 19:09:02.124837] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d461500 name Existed_Raid, state configuring 00:08:24.918 19:09:02 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:08:24.918 19:09:02 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:25.177 [2024-02-14 19:09:02.400794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.177 [2024-02-14 19:09:02.401899] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.177 [2024-02-14 19:09:02.401952] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:25.177 19:09:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.434 19:09:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:25.434 "name": "Existed_Raid", 00:08:25.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.434 "strip_size_kb": 0, 00:08:25.434 "state": "configuring", 00:08:25.434 "raid_level": "raid1", 00:08:25.434 "superblock": false, 00:08:25.434 "num_base_bdevs": 2, 00:08:25.434 "num_base_bdevs_discovered": 1, 00:08:25.434 "num_base_bdevs_operational": 2, 00:08:25.434 "base_bdevs_list": [ 00:08:25.434 { 00:08:25.434 "name": "BaseBdev1", 00:08:25.434 "uuid": "82f2e6ab-cb6c-11ee-af6b-4feeebbbadda", 00:08:25.434 "is_configured": true, 00:08:25.434 "data_offset": 0, 00:08:25.434 "data_size": 65536 00:08:25.434 }, 00:08:25.434 { 00:08:25.434 "name": "BaseBdev2", 00:08:25.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.434 "is_configured": false, 00:08:25.434 "data_offset": 0, 00:08:25.435 "data_size": 0 00:08:25.435 } 00:08:25.435 ] 00:08:25.435 }' 00:08:25.435 19:09:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:25.435 19:09:02 -- common/autotest_common.sh@10 -- # set +x 00:08:26.001 19:09:03 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:26.001 [2024-02-14 19:09:03.329005] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.001 [2024-02-14 19:09:03.329037] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d461a00 00:08:26.001 [2024-02-14 19:09:03.329042] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:26.001 [2024-02-14 19:09:03.329063] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d4c4ec0 00:08:26.001 [2024-02-14 19:09:03.329177] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d461a00 00:08:26.001 [2024-02-14 19:09:03.329182] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d461a00 00:08:26.001 [2024-02-14 19:09:03.329217] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.001 BaseBdev2 00:08:26.001 19:09:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:26.001 19:09:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:08:26.001 19:09:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:26.001 19:09:03 -- common/autotest_common.sh@887 -- # local i 00:08:26.001 19:09:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:26.001 19:09:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:26.001 19:09:03 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:26.258 19:09:03 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:26.517 [ 00:08:26.517 { 00:08:26.517 "name": "BaseBdev2", 00:08:26.517 "aliases": [ 00:08:26.517 "84669f0c-cb6c-11ee-af6b-4feeebbbadda" 00:08:26.517 ], 00:08:26.517 "product_name": "Malloc disk", 00:08:26.517 "block_size": 512, 00:08:26.517 "num_blocks": 65536, 00:08:26.517 "uuid": "84669f0c-cb6c-11ee-af6b-4feeebbbadda", 00:08:26.517 "assigned_rate_limits": { 00:08:26.517 "rw_ios_per_sec": 0, 00:08:26.517 "rw_mbytes_per_sec": 0, 00:08:26.517 "r_mbytes_per_sec": 0, 00:08:26.517 "w_mbytes_per_sec": 0 00:08:26.517 }, 00:08:26.517 "claimed": true, 00:08:26.517 "claim_type": "exclusive_write", 00:08:26.517 "zoned": false, 00:08:26.517 "supported_io_types": { 00:08:26.517 "read": true, 00:08:26.517 "write": true, 00:08:26.517 "unmap": true, 00:08:26.517 "write_zeroes": true, 00:08:26.517 "flush": true, 00:08:26.517 "reset": true, 00:08:26.517 "compare": false, 00:08:26.517 "compare_and_write": false, 00:08:26.517 "abort": true, 00:08:26.517 "nvme_admin": false, 00:08:26.517 "nvme_io": false 00:08:26.517 }, 00:08:26.517 "memory_domains": [ 00:08:26.517 { 00:08:26.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.517 "dma_device_type": 2 00:08:26.517 } 00:08:26.517 ], 00:08:26.517 "driver_specific": {} 00:08:26.517 } 00:08:26.517 ] 00:08:26.517 19:09:03 -- common/autotest_common.sh@893 -- # return 0 00:08:26.517 19:09:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:26.517 19:09:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:26.517 19:09:03 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:26.517 19:09:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:26.517 19:09:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:26.517 19:09:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:26.517 19:09:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:26.517 19:09:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:26.517 19:09:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:26.517 19:09:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:26.517 19:09:03 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:26.517 19:09:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:26.517 19:09:03 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:26.517 19:09:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.775 19:09:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:26.775 "name": "Existed_Raid", 00:08:26.775 "uuid": "8466a6f0-cb6c-11ee-af6b-4feeebbbadda", 00:08:26.775 "strip_size_kb": 0, 00:08:26.775 "state": "online", 00:08:26.775 "raid_level": "raid1", 00:08:26.775 "superblock": false, 00:08:26.775 "num_base_bdevs": 2, 00:08:26.775 "num_base_bdevs_discovered": 2, 00:08:26.775 "num_base_bdevs_operational": 2, 00:08:26.775 "base_bdevs_list": [ 00:08:26.775 { 00:08:26.775 "name": "BaseBdev1", 00:08:26.775 "uuid": "82f2e6ab-cb6c-11ee-af6b-4feeebbbadda", 00:08:26.775 "is_configured": true, 00:08:26.775 "data_offset": 0, 00:08:26.775 "data_size": 65536 00:08:26.775 }, 00:08:26.775 { 00:08:26.775 "name": "BaseBdev2", 00:08:26.775 "uuid": "84669f0c-cb6c-11ee-af6b-4feeebbbadda", 00:08:26.775 "is_configured": true, 00:08:26.775 "data_offset": 0, 00:08:26.775 "data_size": 65536 00:08:26.775 } 00:08:26.775 ] 00:08:26.775 }' 00:08:26.775 19:09:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:26.775 19:09:04 -- common/autotest_common.sh@10 -- # set +x 00:08:27.033 19:09:04 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:27.291 [2024-02-14 19:09:04.608933] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@196 -- # return 0 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:27.291 19:09:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.549 19:09:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:27.549 "name": "Existed_Raid", 00:08:27.549 "uuid": "8466a6f0-cb6c-11ee-af6b-4feeebbbadda", 00:08:27.549 "strip_size_kb": 0, 00:08:27.549 "state": "online", 00:08:27.549 "raid_level": "raid1", 00:08:27.549 "superblock": false, 00:08:27.549 "num_base_bdevs": 2, 00:08:27.549 "num_base_bdevs_discovered": 1, 00:08:27.549 "num_base_bdevs_operational": 1, 00:08:27.549 
"base_bdevs_list": [ 00:08:27.549 { 00:08:27.549 "name": null, 00:08:27.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.549 "is_configured": false, 00:08:27.549 "data_offset": 0, 00:08:27.549 "data_size": 65536 00:08:27.549 }, 00:08:27.549 { 00:08:27.549 "name": "BaseBdev2", 00:08:27.549 "uuid": "84669f0c-cb6c-11ee-af6b-4feeebbbadda", 00:08:27.549 "is_configured": true, 00:08:27.549 "data_offset": 0, 00:08:27.549 "data_size": 65536 00:08:27.549 } 00:08:27.549 ] 00:08:27.549 }' 00:08:27.549 19:09:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:27.549 19:09:04 -- common/autotest_common.sh@10 -- # set +x 00:08:28.114 19:09:05 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:28.114 19:09:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:28.114 19:09:05 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.114 19:09:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:28.372 19:09:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:28.372 19:09:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:28.372 19:09:05 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:28.938 [2024-02-14 19:09:06.082333] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:28.938 [2024-02-14 19:09:06.082372] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.938 [2024-02-14 19:09:06.082389] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.938 [2024-02-14 19:09:06.091913] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.938 [2024-02-14 19:09:06.091945] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d461a00 name Existed_Raid, state offline 00:08:28.938 19:09:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:28.938 19:09:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:28.938 19:09:06 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.938 19:09:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:29.196 19:09:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:29.196 19:09:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:29.196 19:09:06 -- bdev/bdev_raid.sh@287 -- # killprocess 49815 00:08:29.196 19:09:06 -- common/autotest_common.sh@924 -- # '[' -z 49815 ']' 00:08:29.196 19:09:06 -- common/autotest_common.sh@928 -- # kill -0 49815 00:08:29.196 19:09:06 -- common/autotest_common.sh@929 -- # uname 00:08:29.196 19:09:06 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:08:29.196 19:09:06 -- common/autotest_common.sh@932 -- # ps -c -o command 49815 00:08:29.196 19:09:06 -- common/autotest_common.sh@932 -- # tail -1 00:08:29.196 19:09:06 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:08:29.196 19:09:06 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:08:29.196 killing process with pid 49815 00:08:29.196 19:09:06 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 49815' 00:08:29.196 19:09:06 -- common/autotest_common.sh@943 -- # kill 49815 00:08:29.196 [2024-02-14 19:09:06.514300] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.196 19:09:06 -- common/autotest_common.sh@948 -- 
# wait 49815 00:08:29.196 [2024-02-14 19:09:06.514366] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:29.455 00:08:29.455 real 0m8.282s 00:08:29.455 user 0m13.973s 00:08:29.455 sys 0m1.737s 00:08:29.455 19:09:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.455 19:09:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.455 ************************************ 00:08:29.455 END TEST raid_state_function_test 00:08:29.455 ************************************ 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:29.455 19:09:06 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:08:29.455 19:09:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:29.455 19:09:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.455 ************************************ 00:08:29.455 START TEST raid_state_function_test_sb 00:08:29.455 ************************************ 00:08:29.455 19:09:06 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid1 2 true 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=50011 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50011' 00:08:29.455 Process raid pid: 50011 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:29.455 19:09:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50011 /var/tmp/spdk-raid.sock 00:08:29.455 19:09:06 -- common/autotest_common.sh@817 -- # '[' -z 50011 ']' 00:08:29.455 19:09:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:29.455 19:09:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:29.455 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock... 00:08:29.455 19:09:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:29.455 19:09:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:29.455 19:09:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.455 [2024-02-14 19:09:06.816285] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:08:29.455 [2024-02-14 19:09:06.816547] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:30.392 EAL: TSC is not safe to use in SMP mode 00:08:30.392 EAL: TSC is not invariant 00:08:30.392 [2024-02-14 19:09:07.598907] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.392 [2024-02-14 19:09:07.713121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.392 [2024-02-14 19:09:07.713659] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.392 [2024-02-14 19:09:07.713663] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.651 19:09:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:30.651 19:09:07 -- common/autotest_common.sh@850 -- # return 0 00:08:30.651 19:09:07 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:30.651 [2024-02-14 19:09:08.040836] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.651 [2024-02-14 19:09:08.040893] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.651 [2024-02-14 19:09:08.040898] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.651 [2024-02-14 19:09:08.040905] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.651 19:09:08 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:30.651 19:09:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:30.651 19:09:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:30.651 19:09:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:30.651 19:09:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:30.651 19:09:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:30.651 19:09:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:30.651 19:09:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:30.651 19:09:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:30.651 19:09:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:30.651 19:09:08 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:30.651 19:09:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.911 19:09:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:30.911 "name": "Existed_Raid", 00:08:30.911 "uuid": "87359ce2-cb6c-11ee-af6b-4feeebbbadda", 00:08:30.911 "strip_size_kb": 0, 00:08:30.911 "state": "configuring", 00:08:30.911 "raid_level": "raid1", 00:08:30.911 "superblock": true, 00:08:30.911 "num_base_bdevs": 2, 00:08:30.911 "num_base_bdevs_discovered": 0, 00:08:30.911 "num_base_bdevs_operational": 2, 00:08:30.911 "base_bdevs_list": [ 00:08:30.911 { 
00:08:30.911 "name": "BaseBdev1", 00:08:30.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.911 "is_configured": false, 00:08:30.911 "data_offset": 0, 00:08:30.911 "data_size": 0 00:08:30.911 }, 00:08:30.911 { 00:08:30.911 "name": "BaseBdev2", 00:08:30.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.911 "is_configured": false, 00:08:30.911 "data_offset": 0, 00:08:30.911 "data_size": 0 00:08:30.911 } 00:08:30.911 ] 00:08:30.911 }' 00:08:30.911 19:09:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:30.911 19:09:08 -- common/autotest_common.sh@10 -- # set +x 00:08:31.169 19:09:08 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:31.428 [2024-02-14 19:09:08.796850] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:31.428 [2024-02-14 19:09:08.796876] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d851500 name Existed_Raid, state configuring 00:08:31.428 19:09:08 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:31.687 [2024-02-14 19:09:09.056864] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.687 [2024-02-14 19:09:09.056906] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.687 [2024-02-14 19:09:09.056909] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:31.687 [2024-02-14 19:09:09.056916] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:31.687 19:09:09 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:31.946 [2024-02-14 19:09:09.254092] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:31.946 BaseBdev1 00:08:31.946 19:09:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:31.946 19:09:09 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:08:31.946 19:09:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:31.946 19:09:09 -- common/autotest_common.sh@887 -- # local i 00:08:31.946 19:09:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:31.946 19:09:09 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:31.946 19:09:09 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:32.205 19:09:09 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.464 [ 00:08:32.464 { 00:08:32.464 "name": "BaseBdev1", 00:08:32.464 "aliases": [ 00:08:32.464 "87ee8e9c-cb6c-11ee-af6b-4feeebbbadda" 00:08:32.464 ], 00:08:32.464 "product_name": "Malloc disk", 00:08:32.464 "block_size": 512, 00:08:32.464 "num_blocks": 65536, 00:08:32.464 "uuid": "87ee8e9c-cb6c-11ee-af6b-4feeebbbadda", 00:08:32.464 "assigned_rate_limits": { 00:08:32.464 "rw_ios_per_sec": 0, 00:08:32.464 "rw_mbytes_per_sec": 0, 00:08:32.464 "r_mbytes_per_sec": 0, 00:08:32.464 "w_mbytes_per_sec": 0 00:08:32.464 }, 00:08:32.464 "claimed": true, 00:08:32.464 "claim_type": "exclusive_write", 00:08:32.464 "zoned": false, 00:08:32.464 "supported_io_types": { 00:08:32.464 "read": true, 00:08:32.464 
"write": true, 00:08:32.464 "unmap": true, 00:08:32.464 "write_zeroes": true, 00:08:32.464 "flush": true, 00:08:32.464 "reset": true, 00:08:32.464 "compare": false, 00:08:32.464 "compare_and_write": false, 00:08:32.464 "abort": true, 00:08:32.464 "nvme_admin": false, 00:08:32.464 "nvme_io": false 00:08:32.464 }, 00:08:32.464 "memory_domains": [ 00:08:32.464 { 00:08:32.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.464 "dma_device_type": 2 00:08:32.464 } 00:08:32.464 ], 00:08:32.464 "driver_specific": {} 00:08:32.464 } 00:08:32.464 ] 00:08:32.464 19:09:09 -- common/autotest_common.sh@893 -- # return 0 00:08:32.464 19:09:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:32.464 19:09:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:32.464 19:09:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:32.464 19:09:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:32.464 19:09:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:32.464 19:09:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:32.464 19:09:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:32.464 19:09:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:32.464 19:09:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:32.464 19:09:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:32.464 19:09:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.464 19:09:09 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.723 19:09:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:32.723 "name": "Existed_Raid", 00:08:32.723 "uuid": "87d0a5a3-cb6c-11ee-af6b-4feeebbbadda", 00:08:32.723 "strip_size_kb": 0, 00:08:32.723 "state": "configuring", 00:08:32.723 "raid_level": "raid1", 00:08:32.723 "superblock": true, 00:08:32.723 "num_base_bdevs": 2, 00:08:32.723 "num_base_bdevs_discovered": 1, 00:08:32.723 "num_base_bdevs_operational": 2, 00:08:32.723 "base_bdevs_list": [ 00:08:32.723 { 00:08:32.723 "name": "BaseBdev1", 00:08:32.723 "uuid": "87ee8e9c-cb6c-11ee-af6b-4feeebbbadda", 00:08:32.723 "is_configured": true, 00:08:32.723 "data_offset": 2048, 00:08:32.723 "data_size": 63488 00:08:32.723 }, 00:08:32.723 { 00:08:32.723 "name": "BaseBdev2", 00:08:32.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.723 "is_configured": false, 00:08:32.723 "data_offset": 0, 00:08:32.723 "data_size": 0 00:08:32.723 } 00:08:32.723 ] 00:08:32.723 }' 00:08:32.723 19:09:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:32.723 19:09:09 -- common/autotest_common.sh@10 -- # set +x 00:08:32.982 19:09:10 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:33.241 [2024-02-14 19:09:10.400913] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.241 [2024-02-14 19:09:10.400942] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d851500 name Existed_Raid, state configuring 00:08:33.241 19:09:10 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:08:33.241 19:09:10 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:33.501 19:09:10 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 
32 512 -b BaseBdev1 00:08:33.759 BaseBdev1 00:08:33.759 19:09:10 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:08:33.759 19:09:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:08:33.759 19:09:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:33.759 19:09:10 -- common/autotest_common.sh@887 -- # local i 00:08:33.759 19:09:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:33.759 19:09:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:33.759 19:09:10 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:34.018 19:09:11 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:34.018 [ 00:08:34.018 { 00:08:34.018 "name": "BaseBdev1", 00:08:34.018 "aliases": [ 00:08:34.018 "88f04986-cb6c-11ee-af6b-4feeebbbadda" 00:08:34.018 ], 00:08:34.018 "product_name": "Malloc disk", 00:08:34.018 "block_size": 512, 00:08:34.018 "num_blocks": 65536, 00:08:34.018 "uuid": "88f04986-cb6c-11ee-af6b-4feeebbbadda", 00:08:34.018 "assigned_rate_limits": { 00:08:34.018 "rw_ios_per_sec": 0, 00:08:34.018 "rw_mbytes_per_sec": 0, 00:08:34.018 "r_mbytes_per_sec": 0, 00:08:34.018 "w_mbytes_per_sec": 0 00:08:34.018 }, 00:08:34.018 "claimed": false, 00:08:34.018 "zoned": false, 00:08:34.018 "supported_io_types": { 00:08:34.018 "read": true, 00:08:34.018 "write": true, 00:08:34.018 "unmap": true, 00:08:34.018 "write_zeroes": true, 00:08:34.018 "flush": true, 00:08:34.018 "reset": true, 00:08:34.018 "compare": false, 00:08:34.018 "compare_and_write": false, 00:08:34.018 "abort": true, 00:08:34.018 "nvme_admin": false, 00:08:34.018 "nvme_io": false 00:08:34.018 }, 00:08:34.018 "memory_domains": [ 00:08:34.018 { 00:08:34.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.018 "dma_device_type": 2 00:08:34.018 } 00:08:34.018 ], 00:08:34.018 "driver_specific": {} 00:08:34.018 } 00:08:34.018 ] 00:08:34.018 19:09:11 -- common/autotest_common.sh@893 -- # return 0 00:08:34.018 19:09:11 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:34.276 [2024-02-14 19:09:11.561970] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.276 [2024-02-14 19:09:11.562641] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.276 [2024-02-14 19:09:11.562684] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.276 19:09:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:34.276 19:09:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:34.277 19:09:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:34.277 19:09:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:34.277 19:09:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:34.277 19:09:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:34.277 19:09:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:34.277 19:09:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:34.277 19:09:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:34.277 19:09:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:34.277 19:09:11 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:08:34.277 19:09:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:34.277 19:09:11 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:34.277 19:09:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.535 19:09:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:34.535 "name": "Existed_Raid", 00:08:34.535 "uuid": "894ee528-cb6c-11ee-af6b-4feeebbbadda", 00:08:34.535 "strip_size_kb": 0, 00:08:34.535 "state": "configuring", 00:08:34.535 "raid_level": "raid1", 00:08:34.535 "superblock": true, 00:08:34.535 "num_base_bdevs": 2, 00:08:34.535 "num_base_bdevs_discovered": 1, 00:08:34.535 "num_base_bdevs_operational": 2, 00:08:34.535 "base_bdevs_list": [ 00:08:34.535 { 00:08:34.535 "name": "BaseBdev1", 00:08:34.535 "uuid": "88f04986-cb6c-11ee-af6b-4feeebbbadda", 00:08:34.535 "is_configured": true, 00:08:34.535 "data_offset": 2048, 00:08:34.535 "data_size": 63488 00:08:34.535 }, 00:08:34.535 { 00:08:34.535 "name": "BaseBdev2", 00:08:34.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.535 "is_configured": false, 00:08:34.535 "data_offset": 0, 00:08:34.535 "data_size": 0 00:08:34.535 } 00:08:34.535 ] 00:08:34.535 }' 00:08:34.535 19:09:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:34.535 19:09:11 -- common/autotest_common.sh@10 -- # set +x 00:08:34.794 19:09:12 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:35.053 [2024-02-14 19:09:12.350124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.053 [2024-02-14 19:09:12.350187] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d851a00 00:08:35.053 [2024-02-14 19:09:12.350192] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:35.053 [2024-02-14 19:09:12.350209] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d8b4ec0 00:08:35.053 [2024-02-14 19:09:12.350247] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d851a00 00:08:35.053 [2024-02-14 19:09:12.350250] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d851a00 00:08:35.053 [2024-02-14 19:09:12.350264] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.053 BaseBdev2 00:08:35.053 19:09:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:35.053 19:09:12 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:08:35.053 19:09:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:35.053 19:09:12 -- common/autotest_common.sh@887 -- # local i 00:08:35.053 19:09:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:35.053 19:09:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:35.053 19:09:12 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:35.311 19:09:12 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.641 [ 00:08:35.641 { 00:08:35.641 "name": "BaseBdev2", 00:08:35.641 "aliases": [ 00:08:35.641 "89c72393-cb6c-11ee-af6b-4feeebbbadda" 00:08:35.641 ], 00:08:35.641 "product_name": "Malloc disk", 00:08:35.641 "block_size": 512, 00:08:35.641 "num_blocks": 65536, 
00:08:35.641 "uuid": "89c72393-cb6c-11ee-af6b-4feeebbbadda", 00:08:35.641 "assigned_rate_limits": { 00:08:35.641 "rw_ios_per_sec": 0, 00:08:35.641 "rw_mbytes_per_sec": 0, 00:08:35.641 "r_mbytes_per_sec": 0, 00:08:35.641 "w_mbytes_per_sec": 0 00:08:35.641 }, 00:08:35.641 "claimed": true, 00:08:35.641 "claim_type": "exclusive_write", 00:08:35.641 "zoned": false, 00:08:35.641 "supported_io_types": { 00:08:35.641 "read": true, 00:08:35.641 "write": true, 00:08:35.641 "unmap": true, 00:08:35.641 "write_zeroes": true, 00:08:35.641 "flush": true, 00:08:35.641 "reset": true, 00:08:35.641 "compare": false, 00:08:35.641 "compare_and_write": false, 00:08:35.641 "abort": true, 00:08:35.641 "nvme_admin": false, 00:08:35.641 "nvme_io": false 00:08:35.641 }, 00:08:35.641 "memory_domains": [ 00:08:35.641 { 00:08:35.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.641 "dma_device_type": 2 00:08:35.641 } 00:08:35.641 ], 00:08:35.641 "driver_specific": {} 00:08:35.641 } 00:08:35.641 ] 00:08:35.641 19:09:12 -- common/autotest_common.sh@893 -- # return 0 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:35.641 "name": "Existed_Raid", 00:08:35.641 "uuid": "894ee528-cb6c-11ee-af6b-4feeebbbadda", 00:08:35.641 "strip_size_kb": 0, 00:08:35.641 "state": "online", 00:08:35.641 "raid_level": "raid1", 00:08:35.641 "superblock": true, 00:08:35.641 "num_base_bdevs": 2, 00:08:35.641 "num_base_bdevs_discovered": 2, 00:08:35.641 "num_base_bdevs_operational": 2, 00:08:35.641 "base_bdevs_list": [ 00:08:35.641 { 00:08:35.641 "name": "BaseBdev1", 00:08:35.641 "uuid": "88f04986-cb6c-11ee-af6b-4feeebbbadda", 00:08:35.641 "is_configured": true, 00:08:35.641 "data_offset": 2048, 00:08:35.641 "data_size": 63488 00:08:35.641 }, 00:08:35.641 { 00:08:35.641 "name": "BaseBdev2", 00:08:35.641 "uuid": "89c72393-cb6c-11ee-af6b-4feeebbbadda", 00:08:35.641 "is_configured": true, 00:08:35.641 "data_offset": 2048, 00:08:35.641 "data_size": 63488 00:08:35.641 } 00:08:35.641 ] 00:08:35.641 }' 00:08:35.641 19:09:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:35.641 19:09:13 -- common/autotest_common.sh@10 -- # set +x 00:08:35.926 19:09:13 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:36.194 [2024-02-14 19:09:13.498036] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@196 -- # return 0 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:36.194 19:09:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.453 19:09:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:36.453 "name": "Existed_Raid", 00:08:36.453 "uuid": "894ee528-cb6c-11ee-af6b-4feeebbbadda", 00:08:36.453 "strip_size_kb": 0, 00:08:36.453 "state": "online", 00:08:36.453 "raid_level": "raid1", 00:08:36.453 "superblock": true, 00:08:36.453 "num_base_bdevs": 2, 00:08:36.453 "num_base_bdevs_discovered": 1, 00:08:36.453 "num_base_bdevs_operational": 1, 00:08:36.453 "base_bdevs_list": [ 00:08:36.453 { 00:08:36.453 "name": null, 00:08:36.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.453 "is_configured": false, 00:08:36.453 "data_offset": 2048, 00:08:36.453 "data_size": 63488 00:08:36.453 }, 00:08:36.453 { 00:08:36.453 "name": "BaseBdev2", 00:08:36.453 "uuid": "89c72393-cb6c-11ee-af6b-4feeebbbadda", 00:08:36.453 "is_configured": true, 00:08:36.453 "data_offset": 2048, 00:08:36.453 "data_size": 63488 00:08:36.453 } 00:08:36.453 ] 00:08:36.453 }' 00:08:36.453 19:09:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:36.453 19:09:13 -- common/autotest_common.sh@10 -- # set +x 00:08:36.712 19:09:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:36.712 19:09:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:36.712 19:09:14 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:36.712 19:09:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:36.970 19:09:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:36.970 19:09:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.970 19:09:14 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:37.228 [2024-02-14 19:09:14.527071] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:37.228 [2024-02-14 19:09:14.527091] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.228 [2024-02-14 19:09:14.527101] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:08:37.228 [2024-02-14 19:09:14.536106] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.228 [2024-02-14 19:09:14.536119] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d851a00 name Existed_Raid, state offline 00:08:37.228 19:09:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:37.228 19:09:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:37.228 19:09:14 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:37.228 19:09:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:37.486 19:09:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:37.486 19:09:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:37.486 19:09:14 -- bdev/bdev_raid.sh@287 -- # killprocess 50011 00:08:37.486 19:09:14 -- common/autotest_common.sh@924 -- # '[' -z 50011 ']' 00:08:37.486 19:09:14 -- common/autotest_common.sh@928 -- # kill -0 50011 00:08:37.486 19:09:14 -- common/autotest_common.sh@929 -- # uname 00:08:37.486 19:09:14 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:08:37.486 19:09:14 -- common/autotest_common.sh@932 -- # ps -c -o command 50011 00:08:37.486 19:09:14 -- common/autotest_common.sh@932 -- # tail -1 00:08:37.486 19:09:14 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:08:37.486 killing process with pid 50011 00:08:37.486 19:09:14 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:08:37.486 19:09:14 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 50011' 00:08:37.486 19:09:14 -- common/autotest_common.sh@943 -- # kill 50011 00:08:37.486 [2024-02-14 19:09:14.794367] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.486 19:09:14 -- common/autotest_common.sh@948 -- # wait 50011 00:08:37.486 [2024-02-14 19:09:14.794411] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.744 19:09:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:37.744 00:08:37.744 real 0m8.225s 00:08:37.744 user 0m13.715s 00:08:37.744 sys 0m1.909s 00:08:37.744 19:09:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:37.744 ************************************ 00:08:37.744 END TEST raid_state_function_test_sb 00:08:37.745 ************************************ 00:08:37.745 19:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:37.745 19:09:15 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:08:37.745 19:09:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:37.745 19:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:37.745 ************************************ 00:08:37.745 START TEST raid_superblock_test 00:08:37.745 ************************************ 00:08:37.745 19:09:15 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid1 2 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:08:37.745 
19:09:15 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@357 -- # raid_pid=50210 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@358 -- # waitforlisten 50210 /var/tmp/spdk-raid.sock 00:08:37.745 19:09:15 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:37.745 19:09:15 -- common/autotest_common.sh@817 -- # '[' -z 50210 ']' 00:08:37.745 19:09:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:37.745 19:09:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:37.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:37.745 19:09:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:37.745 19:09:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:37.745 19:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:37.745 [2024-02-14 19:09:15.080427] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:08:37.745 [2024-02-14 19:09:15.080732] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:38.677 EAL: TSC is not safe to use in SMP mode 00:08:38.677 EAL: TSC is not invariant 00:08:38.677 [2024-02-14 19:09:15.820811] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.677 [2024-02-14 19:09:15.932884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.677 [2024-02-14 19:09:15.933355] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.677 [2024-02-14 19:09:15.933364] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.677 19:09:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:38.677 19:09:16 -- common/autotest_common.sh@850 -- # return 0 00:08:38.677 19:09:16 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:08:38.677 19:09:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:38.677 19:09:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:08:38.677 19:09:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:08:38.677 19:09:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:38.677 19:09:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:38.677 19:09:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:38.677 19:09:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:38.677 19:09:16 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:38.935 malloc1 00:08:38.935 19:09:16 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:08:39.193 [2024-02-14 19:09:16.452139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:39.193 [2024-02-14 19:09:16.452207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.193 [2024-02-14 19:09:16.452870] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82acae780 00:08:39.193 [2024-02-14 19:09:16.452899] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.193 [2024-02-14 19:09:16.453945] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.193 [2024-02-14 19:09:16.453975] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:39.193 pt1 00:08:39.193 19:09:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:39.193 19:09:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:39.193 19:09:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:08:39.193 19:09:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:08:39.193 19:09:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:39.193 19:09:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:39.193 19:09:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:39.193 19:09:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:39.193 19:09:16 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:39.450 malloc2 00:08:39.451 19:09:16 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:39.708 [2024-02-14 19:09:16.944159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:39.708 [2024-02-14 19:09:16.944218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.708 [2024-02-14 19:09:16.944248] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82acaec80 00:08:39.708 [2024-02-14 19:09:16.944255] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.708 [2024-02-14 19:09:16.944972] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.708 [2024-02-14 19:09:16.944999] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:39.708 pt2 00:08:39.708 19:09:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:39.708 19:09:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:39.708 19:09:16 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:08:39.965 [2024-02-14 19:09:17.132169] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:39.965 [2024-02-14 19:09:17.132806] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:39.965 [2024-02-14 19:09:17.132860] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82acaef00 00:08:39.965 [2024-02-14 19:09:17.132865] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:39.965 [2024-02-14 19:09:17.132907] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ad11e20 00:08:39.965 [2024-02-14 19:09:17.132971] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82acaef00 00:08:39.965 [2024-02-14 19:09:17.132974] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82acaef00 00:08:39.965 [2024-02-14 19:09:17.132993] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.965 19:09:17 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:39.965 19:09:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:39.965 19:09:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:39.965 19:09:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:39.965 19:09:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:39.965 19:09:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:39.965 19:09:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:39.965 19:09:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:39.965 19:09:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:39.965 19:09:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:39.965 19:09:17 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:39.965 19:09:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.965 19:09:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:39.965 "name": "raid_bdev1", 00:08:39.965 "uuid": "8ca0d72e-cb6c-11ee-af6b-4feeebbbadda", 00:08:39.965 "strip_size_kb": 0, 00:08:39.965 "state": "online", 00:08:39.965 "raid_level": "raid1", 00:08:39.966 "superblock": true, 00:08:39.966 "num_base_bdevs": 2, 00:08:39.966 "num_base_bdevs_discovered": 2, 00:08:39.966 "num_base_bdevs_operational": 2, 00:08:39.966 "base_bdevs_list": [ 00:08:39.966 { 00:08:39.966 "name": "pt1", 00:08:39.966 "uuid": "822a97c9-49a7-4f58-ace5-d6f0f9d3bfef", 00:08:39.966 "is_configured": true, 00:08:39.966 "data_offset": 2048, 00:08:39.966 "data_size": 63488 00:08:39.966 }, 00:08:39.966 { 00:08:39.966 "name": "pt2", 00:08:39.966 "uuid": "ee6670a3-c3fd-a056-9cb5-8f73eace05ea", 00:08:39.966 "is_configured": true, 00:08:39.966 "data_offset": 2048, 00:08:39.966 "data_size": 63488 00:08:39.966 } 00:08:39.966 ] 00:08:39.966 }' 00:08:39.966 19:09:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:39.966 19:09:17 -- common/autotest_common.sh@10 -- # set +x 00:08:40.224 19:09:17 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:40.224 19:09:17 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:08:40.483 [2024-02-14 19:09:17.776201] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.483 19:09:17 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=8ca0d72e-cb6c-11ee-af6b-4feeebbbadda 00:08:40.483 19:09:17 -- bdev/bdev_raid.sh@380 -- # '[' -z 8ca0d72e-cb6c-11ee-af6b-4feeebbbadda ']' 00:08:40.483 19:09:17 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:40.741 [2024-02-14 19:09:18.028170] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:40.741 [2024-02-14 19:09:18.028191] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.741 [2024-02-14 19:09:18.028207] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.741 [2024-02-14 
19:09:18.028219] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.741 [2024-02-14 19:09:18.028238] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82acaef00 name raid_bdev1, state offline 00:08:40.741 19:09:18 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:40.741 19:09:18 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:08:40.999 19:09:18 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:08:40.999 19:09:18 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:08:40.999 19:09:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:40.999 19:09:18 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:41.257 19:09:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:41.257 19:09:18 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:41.515 19:09:18 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:41.515 19:09:18 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:41.774 19:09:18 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:08:41.774 19:09:18 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:41.774 19:09:18 -- common/autotest_common.sh@638 -- # local es=0 00:08:41.774 19:09:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:41.774 19:09:18 -- common/autotest_common.sh@626 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.774 19:09:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:41.774 19:09:18 -- common/autotest_common.sh@630 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.774 19:09:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:41.774 19:09:18 -- common/autotest_common.sh@632 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.774 19:09:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:41.774 19:09:18 -- common/autotest_common.sh@632 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.774 19:09:18 -- common/autotest_common.sh@632 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:41.774 19:09:18 -- common/autotest_common.sh@641 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:41.774 [2024-02-14 19:09:19.172216] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:41.774 [2024-02-14 19:09:19.172944] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:41.774 [2024-02-14 19:09:19.172960] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:08:41.774 [2024-02-14 19:09:19.172999] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:08:41.774 [2024-02-14 19:09:19.173008] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.774 [2024-02-14 19:09:19.173012] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82acaec80 name raid_bdev1, state configuring 00:08:41.774 request: 00:08:41.774 { 00:08:41.774 "name": "raid_bdev1", 00:08:41.774 "raid_level": "raid1", 00:08:41.774 "base_bdevs": [ 00:08:41.774 "malloc1", 00:08:41.774 "malloc2" 00:08:41.774 ], 00:08:41.774 "superblock": false, 00:08:41.774 "method": "bdev_raid_create", 00:08:41.774 "req_id": 1 00:08:41.774 } 00:08:41.774 Got JSON-RPC error response 00:08:41.774 response: 00:08:41.774 { 00:08:41.774 "code": -17, 00:08:41.774 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:41.774 } 00:08:41.774 19:09:19 -- common/autotest_common.sh@641 -- # es=1 00:08:41.774 19:09:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:41.774 19:09:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:42.033 19:09:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:42.033 19:09:19 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:42.033 19:09:19 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:08:42.033 19:09:19 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:08:42.033 19:09:19 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:08:42.033 19:09:19 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:42.291 [2024-02-14 19:09:19.540228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.291 [2024-02-14 19:09:19.540279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.291 [2024-02-14 19:09:19.540311] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82acae780 00:08:42.291 [2024-02-14 19:09:19.540318] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.291 [2024-02-14 19:09:19.541085] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.291 [2024-02-14 19:09:19.541109] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.291 [2024-02-14 19:09:19.541127] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:08:42.291 [2024-02-14 19:09:19.541137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:42.292 pt1 00:08:42.292 19:09:19 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:42.292 19:09:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:42.292 19:09:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:42.292 19:09:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:42.292 19:09:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:42.292 19:09:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:42.292 19:09:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:42.292 19:09:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:42.292 19:09:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:42.292 19:09:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:42.292 19:09:19 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:42.292 19:09:19 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.551 19:09:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:42.551 "name": "raid_bdev1", 00:08:42.551 "uuid": "8ca0d72e-cb6c-11ee-af6b-4feeebbbadda", 00:08:42.551 "strip_size_kb": 0, 00:08:42.551 "state": "configuring", 00:08:42.551 "raid_level": "raid1", 00:08:42.551 "superblock": true, 00:08:42.551 "num_base_bdevs": 2, 00:08:42.551 "num_base_bdevs_discovered": 1, 00:08:42.551 "num_base_bdevs_operational": 2, 00:08:42.551 "base_bdevs_list": [ 00:08:42.551 { 00:08:42.551 "name": "pt1", 00:08:42.551 "uuid": "822a97c9-49a7-4f58-ace5-d6f0f9d3bfef", 00:08:42.551 "is_configured": true, 00:08:42.551 "data_offset": 2048, 00:08:42.551 "data_size": 63488 00:08:42.551 }, 00:08:42.551 { 00:08:42.551 "name": null, 00:08:42.551 "uuid": "ee6670a3-c3fd-a056-9cb5-8f73eace05ea", 00:08:42.551 "is_configured": false, 00:08:42.551 "data_offset": 2048, 00:08:42.551 "data_size": 63488 00:08:42.551 } 00:08:42.551 ] 00:08:42.551 }' 00:08:42.551 19:09:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:42.551 19:09:19 -- common/autotest_common.sh@10 -- # set +x 00:08:42.810 19:09:20 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:08:42.810 19:09:20 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:08:42.810 19:09:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:42.810 19:09:20 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.069 [2024-02-14 19:09:20.304259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.069 [2024-02-14 19:09:20.304312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.069 [2024-02-14 19:09:20.304348] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82acaef00 00:08:43.069 [2024-02-14 19:09:20.304355] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.069 [2024-02-14 19:09:20.304453] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.069 [2024-02-14 19:09:20.304460] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.069 [2024-02-14 19:09:20.304475] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:43.069 [2024-02-14 19:09:20.304481] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.069 [2024-02-14 19:09:20.304502] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82acaf180 00:08:43.069 [2024-02-14 19:09:20.304505] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:43.069 [2024-02-14 19:09:20.304521] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ad11e20 00:08:43.069 [2024-02-14 19:09:20.304584] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82acaf180 00:08:43.069 [2024-02-14 19:09:20.304588] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82acaf180 00:08:43.069 [2024-02-14 19:09:20.304605] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.069 pt2 00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
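Aside: the raid_superblock_test flow recorded above reduces to a short sequence of JSON-RPC calls against the bdev_svc app. A minimal recap, condensed strictly from the commands already shown in this run — the rpc.py path, socket, sizes, UUIDs and bdev names are the ones used here, not a general-purpose recipe:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # two 32 MiB malloc bdevs with 512-byte blocks (65536 blocks each, per the bdev JSON above)
    $rpc -s $sock bdev_malloc_create 32 512 -b malloc1
    $rpc -s $sock bdev_malloc_create 32 512 -b malloc2
    # passthru bdevs layered on top, with fixed UUIDs
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # raid1 bdev over the passthru bdevs; -s enables the raid superblock
    $rpc -s $sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
    # query the assembled raid bdev and inspect its state
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The same bdev_raid_get_bdevs / jq pair is what verify_raid_bdev_state uses in the trace that follows to check the expected state, raid level, strip size and base-bdev counts.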
00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:43.069 19:09:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.329 19:09:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:43.329 "name": "raid_bdev1", 00:08:43.329 "uuid": "8ca0d72e-cb6c-11ee-af6b-4feeebbbadda", 00:08:43.329 "strip_size_kb": 0, 00:08:43.329 "state": "online", 00:08:43.329 "raid_level": "raid1", 00:08:43.329 "superblock": true, 00:08:43.329 "num_base_bdevs": 2, 00:08:43.329 "num_base_bdevs_discovered": 2, 00:08:43.329 "num_base_bdevs_operational": 2, 00:08:43.329 "base_bdevs_list": [ 00:08:43.329 { 00:08:43.329 "name": "pt1", 00:08:43.329 "uuid": "822a97c9-49a7-4f58-ace5-d6f0f9d3bfef", 00:08:43.329 "is_configured": true, 00:08:43.329 "data_offset": 2048, 00:08:43.329 "data_size": 63488 00:08:43.329 }, 00:08:43.329 { 00:08:43.329 "name": "pt2", 00:08:43.329 "uuid": "ee6670a3-c3fd-a056-9cb5-8f73eace05ea", 00:08:43.329 "is_configured": true, 00:08:43.329 "data_offset": 2048, 00:08:43.329 "data_size": 63488 00:08:43.329 } 00:08:43.329 ] 00:08:43.329 }' 00:08:43.329 19:09:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:43.329 19:09:20 -- common/autotest_common.sh@10 -- # set +x 00:08:43.587 19:09:20 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:43.588 19:09:20 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:08:43.847 [2024-02-14 19:09:21.052301] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@430 -- # '[' 8ca0d72e-cb6c-11ee-af6b-4feeebbbadda '!=' 8ca0d72e-cb6c-11ee-af6b-4feeebbbadda ']' 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@196 -- # return 0 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@436 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:43.847 [2024-02-14 19:09:21.240286] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:43.847 19:09:21 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:43.847 19:09:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.106 19:09:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:44.106 "name": "raid_bdev1", 00:08:44.106 "uuid": "8ca0d72e-cb6c-11ee-af6b-4feeebbbadda", 00:08:44.106 "strip_size_kb": 0, 00:08:44.106 "state": "online", 00:08:44.106 "raid_level": "raid1", 00:08:44.106 "superblock": true, 00:08:44.106 "num_base_bdevs": 2, 00:08:44.106 "num_base_bdevs_discovered": 1, 00:08:44.106 "num_base_bdevs_operational": 1, 00:08:44.106 "base_bdevs_list": [ 00:08:44.106 { 00:08:44.106 "name": null, 00:08:44.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.106 "is_configured": false, 00:08:44.106 "data_offset": 2048, 00:08:44.106 "data_size": 63488 00:08:44.106 }, 00:08:44.106 { 00:08:44.106 "name": "pt2", 00:08:44.106 "uuid": "ee6670a3-c3fd-a056-9cb5-8f73eace05ea", 00:08:44.106 "is_configured": true, 00:08:44.106 "data_offset": 2048, 00:08:44.106 "data_size": 63488 00:08:44.106 } 00:08:44.106 ] 00:08:44.106 }' 00:08:44.106 19:09:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:44.106 19:09:21 -- common/autotest_common.sh@10 -- # set +x 00:08:44.365 19:09:21 -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:44.623 [2024-02-14 19:09:21.912322] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.623 [2024-02-14 19:09:21.912345] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.623 [2024-02-14 19:09:21.912365] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.623 [2024-02-14 19:09:21.912374] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.623 [2024-02-14 19:09:21.912378] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82acaf180 name raid_bdev1, state offline 00:08:44.623 19:09:21 -- bdev/bdev_raid.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:44.623 19:09:21 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:08:44.882 19:09:22 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:08:44.882 19:09:22 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:08:44.882 19:09:22 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:08:44.882 19:09:22 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:08:44.882 19:09:22 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@462 -- # i=1 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@463 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:45.245 [2024-02-14 
19:09:22.564354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:45.245 [2024-02-14 19:09:22.564434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.245 [2024-02-14 19:09:22.564465] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82acaef00 00:08:45.245 [2024-02-14 19:09:22.564474] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.245 [2024-02-14 19:09:22.565267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.245 [2024-02-14 19:09:22.565295] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:45.245 [2024-02-14 19:09:22.565323] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:45.245 [2024-02-14 19:09:22.565334] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.245 [2024-02-14 19:09:22.565366] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82acaf180 00:08:45.245 [2024-02-14 19:09:22.565370] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:45.245 [2024-02-14 19:09:22.565389] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ad11e20 00:08:45.245 [2024-02-14 19:09:22.565430] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82acaf180 00:08:45.245 [2024-02-14 19:09:22.565433] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82acaf180 00:08:45.245 [2024-02-14 19:09:22.565452] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.245 pt2 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:45.245 19:09:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.517 19:09:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:45.517 "name": "raid_bdev1", 00:08:45.517 "uuid": "8ca0d72e-cb6c-11ee-af6b-4feeebbbadda", 00:08:45.517 "strip_size_kb": 0, 00:08:45.517 "state": "online", 00:08:45.517 "raid_level": "raid1", 00:08:45.517 "superblock": true, 00:08:45.517 "num_base_bdevs": 2, 00:08:45.517 "num_base_bdevs_discovered": 1, 00:08:45.517 "num_base_bdevs_operational": 1, 00:08:45.517 "base_bdevs_list": [ 00:08:45.517 { 00:08:45.517 "name": null, 00:08:45.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.517 "is_configured": false, 00:08:45.517 "data_offset": 2048, 00:08:45.517 "data_size": 63488 00:08:45.517 }, 00:08:45.517 { 00:08:45.517 "name": "pt2", 00:08:45.517 "uuid": "ee6670a3-c3fd-a056-9cb5-8f73eace05ea", 
00:08:45.517 "is_configured": true, 00:08:45.517 "data_offset": 2048, 00:08:45.517 "data_size": 63488 00:08:45.517 } 00:08:45.517 ] 00:08:45.517 }' 00:08:45.517 19:09:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:45.517 19:09:22 -- common/autotest_common.sh@10 -- # set +x 00:08:45.776 19:09:23 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:08:45.776 19:09:23 -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:45.776 19:09:23 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:08:46.115 [2024-02-14 19:09:23.348415] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.115 19:09:23 -- bdev/bdev_raid.sh@506 -- # '[' 8ca0d72e-cb6c-11ee-af6b-4feeebbbadda '!=' 8ca0d72e-cb6c-11ee-af6b-4feeebbbadda ']' 00:08:46.115 19:09:23 -- bdev/bdev_raid.sh@511 -- # killprocess 50210 00:08:46.115 19:09:23 -- common/autotest_common.sh@924 -- # '[' -z 50210 ']' 00:08:46.115 19:09:23 -- common/autotest_common.sh@928 -- # kill -0 50210 00:08:46.115 19:09:23 -- common/autotest_common.sh@929 -- # uname 00:08:46.115 19:09:23 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:08:46.115 19:09:23 -- common/autotest_common.sh@932 -- # ps -c -o command 50210 00:08:46.115 19:09:23 -- common/autotest_common.sh@932 -- # tail -1 00:08:46.115 19:09:23 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:08:46.115 19:09:23 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:08:46.115 killing process with pid 50210 00:08:46.115 19:09:23 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 50210' 00:08:46.115 19:09:23 -- common/autotest_common.sh@943 -- # kill 50210 00:08:46.115 [2024-02-14 19:09:23.387414] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.115 19:09:23 -- common/autotest_common.sh@948 -- # wait 50210 00:08:46.115 [2024-02-14 19:09:23.387442] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.116 [2024-02-14 19:09:23.387453] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.116 [2024-02-14 19:09:23.387457] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82acaf180 name raid_bdev1, state offline 00:08:46.116 [2024-02-14 19:09:23.405900] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@513 -- # return 0 00:08:46.387 00:08:46.387 real 0m8.568s 00:08:46.387 user 0m14.303s 00:08:46.387 sys 0m2.031s 00:08:46.387 19:09:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:46.387 19:09:23 -- common/autotest_common.sh@10 -- # set +x 00:08:46.387 ************************************ 00:08:46.387 END TEST raid_superblock_test 00:08:46.387 ************************************ 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:46.387 19:09:23 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:08:46.387 19:09:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:46.387 19:09:23 -- common/autotest_common.sh@10 -- # set +x 00:08:46.387 ************************************ 00:08:46.387 START TEST raid_state_function_test 00:08:46.387 ************************************ 00:08:46.387 19:09:23 -- 
common/autotest_common.sh@1102 -- # raid_state_function_test raid0 3 false 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=50425 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50425' 00:08:46.387 Process raid pid: 50425 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50425 /var/tmp/spdk-raid.sock 00:08:46.387 19:09:23 -- common/autotest_common.sh@817 -- # '[' -z 50425 ']' 00:08:46.387 19:09:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:46.387 19:09:23 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:46.387 19:09:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:46.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:46.387 19:09:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:46.387 19:09:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:46.387 19:09:23 -- common/autotest_common.sh@10 -- # set +x 00:08:46.387 [2024-02-14 19:09:23.700694] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:08:46.387 [2024-02-14 19:09:23.701004] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:47.334 EAL: TSC is not safe to use in SMP mode 00:08:47.334 EAL: TSC is not invariant 00:08:47.334 [2024-02-14 19:09:24.468973] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.334 [2024-02-14 19:09:24.583864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.334 [2024-02-14 19:09:24.584408] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.334 [2024-02-14 19:09:24.584413] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.659 19:09:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:47.659 19:09:24 -- common/autotest_common.sh@850 -- # return 0 00:08:47.659 19:09:24 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:47.659 [2024-02-14 19:09:24.999490] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.659 [2024-02-14 19:09:24.999554] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.659 [2024-02-14 19:09:24.999559] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.659 [2024-02-14 19:09:24.999566] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.659 [2024-02-14 19:09:24.999569] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.659 [2024-02-14 19:09:24.999576] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.659 19:09:25 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.659 19:09:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:47.659 19:09:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:47.659 19:09:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:47.659 19:09:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:47.659 19:09:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:47.659 19:09:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:47.659 19:09:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:47.659 19:09:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:47.659 19:09:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:47.659 19:09:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.659 19:09:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:47.984 19:09:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:47.984 "name": "Existed_Raid", 00:08:47.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.984 "strip_size_kb": 64, 00:08:47.984 "state": "configuring", 00:08:47.984 "raid_level": "raid0", 00:08:47.984 "superblock": false, 00:08:47.984 "num_base_bdevs": 3, 00:08:47.984 "num_base_bdevs_discovered": 0, 00:08:47.984 "num_base_bdevs_operational": 3, 00:08:47.984 "base_bdevs_list": [ 00:08:47.984 { 00:08:47.984 "name": "BaseBdev1", 00:08:47.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.984 "is_configured": false, 00:08:47.984 "data_offset": 0, 00:08:47.984 
"data_size": 0 00:08:47.984 }, 00:08:47.984 { 00:08:47.984 "name": "BaseBdev2", 00:08:47.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.984 "is_configured": false, 00:08:47.984 "data_offset": 0, 00:08:47.984 "data_size": 0 00:08:47.984 }, 00:08:47.984 { 00:08:47.984 "name": "BaseBdev3", 00:08:47.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.984 "is_configured": false, 00:08:47.984 "data_offset": 0, 00:08:47.984 "data_size": 0 00:08:47.984 } 00:08:47.984 ] 00:08:47.984 }' 00:08:47.984 19:09:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:47.984 19:09:25 -- common/autotest_common.sh@10 -- # set +x 00:08:48.243 19:09:25 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:48.500 [2024-02-14 19:09:25.727487] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.501 [2024-02-14 19:09:25.727514] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b8d1500 name Existed_Raid, state configuring 00:08:48.501 19:09:25 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:48.501 [2024-02-14 19:09:25.903499] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.501 [2024-02-14 19:09:25.903545] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.501 [2024-02-14 19:09:25.903549] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.501 [2024-02-14 19:09:25.903556] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.501 [2024-02-14 19:09:25.903559] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.501 [2024-02-14 19:09:25.903565] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.759 19:09:25 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:48.759 [2024-02-14 19:09:26.080740] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.759 BaseBdev1 00:08:48.759 19:09:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:48.759 19:09:26 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:08:48.759 19:09:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:48.759 19:09:26 -- common/autotest_common.sh@887 -- # local i 00:08:48.759 19:09:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:48.759 19:09:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:48.759 19:09:26 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:49.018 19:09:26 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.276 [ 00:08:49.276 { 00:08:49.276 "name": "BaseBdev1", 00:08:49.276 "aliases": [ 00:08:49.276 "91f6195d-cb6c-11ee-af6b-4feeebbbadda" 00:08:49.276 ], 00:08:49.276 "product_name": "Malloc disk", 00:08:49.276 "block_size": 512, 00:08:49.276 "num_blocks": 65536, 00:08:49.276 "uuid": "91f6195d-cb6c-11ee-af6b-4feeebbbadda", 00:08:49.276 "assigned_rate_limits": { 00:08:49.276 
"rw_ios_per_sec": 0, 00:08:49.276 "rw_mbytes_per_sec": 0, 00:08:49.276 "r_mbytes_per_sec": 0, 00:08:49.276 "w_mbytes_per_sec": 0 00:08:49.276 }, 00:08:49.276 "claimed": true, 00:08:49.276 "claim_type": "exclusive_write", 00:08:49.276 "zoned": false, 00:08:49.276 "supported_io_types": { 00:08:49.276 "read": true, 00:08:49.276 "write": true, 00:08:49.276 "unmap": true, 00:08:49.276 "write_zeroes": true, 00:08:49.276 "flush": true, 00:08:49.276 "reset": true, 00:08:49.276 "compare": false, 00:08:49.276 "compare_and_write": false, 00:08:49.276 "abort": true, 00:08:49.276 "nvme_admin": false, 00:08:49.276 "nvme_io": false 00:08:49.276 }, 00:08:49.276 "memory_domains": [ 00:08:49.276 { 00:08:49.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.276 "dma_device_type": 2 00:08:49.276 } 00:08:49.276 ], 00:08:49.276 "driver_specific": {} 00:08:49.276 } 00:08:49.276 ] 00:08:49.276 19:09:26 -- common/autotest_common.sh@893 -- # return 0 00:08:49.276 19:09:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:49.276 19:09:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:49.276 19:09:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:49.276 19:09:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:49.276 19:09:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:49.276 19:09:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:49.276 19:09:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:49.276 19:09:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:49.276 19:09:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:49.276 19:09:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:49.276 19:09:26 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:49.276 19:09:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.534 19:09:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:49.534 "name": "Existed_Raid", 00:08:49.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.534 "strip_size_kb": 64, 00:08:49.534 "state": "configuring", 00:08:49.534 "raid_level": "raid0", 00:08:49.534 "superblock": false, 00:08:49.534 "num_base_bdevs": 3, 00:08:49.534 "num_base_bdevs_discovered": 1, 00:08:49.534 "num_base_bdevs_operational": 3, 00:08:49.534 "base_bdevs_list": [ 00:08:49.534 { 00:08:49.534 "name": "BaseBdev1", 00:08:49.534 "uuid": "91f6195d-cb6c-11ee-af6b-4feeebbbadda", 00:08:49.534 "is_configured": true, 00:08:49.534 "data_offset": 0, 00:08:49.534 "data_size": 65536 00:08:49.534 }, 00:08:49.534 { 00:08:49.534 "name": "BaseBdev2", 00:08:49.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.534 "is_configured": false, 00:08:49.534 "data_offset": 0, 00:08:49.534 "data_size": 0 00:08:49.534 }, 00:08:49.534 { 00:08:49.534 "name": "BaseBdev3", 00:08:49.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.534 "is_configured": false, 00:08:49.534 "data_offset": 0, 00:08:49.534 "data_size": 0 00:08:49.534 } 00:08:49.534 ] 00:08:49.534 }' 00:08:49.534 19:09:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:49.534 19:09:26 -- common/autotest_common.sh@10 -- # set +x 00:08:49.534 19:09:26 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:49.793 [2024-02-14 19:09:27.111545] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:49.793 [2024-02-14 19:09:27.111577] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b8d1500 name Existed_Raid, state configuring 00:08:49.793 19:09:27 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:08:49.793 19:09:27 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:50.052 [2024-02-14 19:09:27.363577] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.052 [2024-02-14 19:09:27.364552] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.052 [2024-02-14 19:09:27.364595] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.052 [2024-02-14 19:09:27.364599] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:50.052 [2024-02-14 19:09:27.364606] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:50.052 19:09:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.310 19:09:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:50.310 "name": "Existed_Raid", 00:08:50.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.310 "strip_size_kb": 64, 00:08:50.310 "state": "configuring", 00:08:50.310 "raid_level": "raid0", 00:08:50.311 "superblock": false, 00:08:50.311 "num_base_bdevs": 3, 00:08:50.311 "num_base_bdevs_discovered": 1, 00:08:50.311 "num_base_bdevs_operational": 3, 00:08:50.311 "base_bdevs_list": [ 00:08:50.311 { 00:08:50.311 "name": "BaseBdev1", 00:08:50.311 "uuid": "91f6195d-cb6c-11ee-af6b-4feeebbbadda", 00:08:50.311 "is_configured": true, 00:08:50.311 "data_offset": 0, 00:08:50.311 "data_size": 65536 00:08:50.311 }, 00:08:50.311 { 00:08:50.311 "name": "BaseBdev2", 00:08:50.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.311 "is_configured": false, 00:08:50.311 "data_offset": 0, 00:08:50.311 "data_size": 0 00:08:50.311 }, 00:08:50.311 { 00:08:50.311 "name": "BaseBdev3", 00:08:50.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.311 "is_configured": false, 00:08:50.311 "data_offset": 0, 00:08:50.311 "data_size": 0 00:08:50.311 } 00:08:50.311 ] 00:08:50.311 }' 00:08:50.311 19:09:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:50.311 
19:09:27 -- common/autotest_common.sh@10 -- # set +x 00:08:50.569 19:09:27 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.828 [2024-02-14 19:09:28.015759] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.828 BaseBdev2 00:08:50.828 19:09:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:50.828 19:09:28 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:08:50.828 19:09:28 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:50.828 19:09:28 -- common/autotest_common.sh@887 -- # local i 00:08:50.828 19:09:28 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:50.828 19:09:28 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:50.828 19:09:28 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:51.086 19:09:28 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:51.086 [ 00:08:51.086 { 00:08:51.086 "name": "BaseBdev2", 00:08:51.086 "aliases": [ 00:08:51.086 "931d861e-cb6c-11ee-af6b-4feeebbbadda" 00:08:51.086 ], 00:08:51.086 "product_name": "Malloc disk", 00:08:51.086 "block_size": 512, 00:08:51.086 "num_blocks": 65536, 00:08:51.086 "uuid": "931d861e-cb6c-11ee-af6b-4feeebbbadda", 00:08:51.086 "assigned_rate_limits": { 00:08:51.086 "rw_ios_per_sec": 0, 00:08:51.086 "rw_mbytes_per_sec": 0, 00:08:51.086 "r_mbytes_per_sec": 0, 00:08:51.086 "w_mbytes_per_sec": 0 00:08:51.086 }, 00:08:51.086 "claimed": true, 00:08:51.086 "claim_type": "exclusive_write", 00:08:51.086 "zoned": false, 00:08:51.086 "supported_io_types": { 00:08:51.086 "read": true, 00:08:51.086 "write": true, 00:08:51.086 "unmap": true, 00:08:51.086 "write_zeroes": true, 00:08:51.086 "flush": true, 00:08:51.086 "reset": true, 00:08:51.086 "compare": false, 00:08:51.086 "compare_and_write": false, 00:08:51.086 "abort": true, 00:08:51.086 "nvme_admin": false, 00:08:51.086 "nvme_io": false 00:08:51.086 }, 00:08:51.086 "memory_domains": [ 00:08:51.086 { 00:08:51.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.086 "dma_device_type": 2 00:08:51.086 } 00:08:51.086 ], 00:08:51.086 "driver_specific": {} 00:08:51.086 } 00:08:51.086 ] 00:08:51.086 19:09:28 -- common/autotest_common.sh@893 -- # return 0 00:08:51.086 19:09:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:51.086 19:09:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:51.086 19:09:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.086 19:09:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:51.086 19:09:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:51.086 19:09:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:51.086 19:09:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:51.086 19:09:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:51.086 19:09:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:51.086 19:09:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:51.086 19:09:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:51.086 19:09:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:51.086 19:09:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.087 19:09:28 -- 
bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:51.390 19:09:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:51.390 "name": "Existed_Raid", 00:08:51.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.390 "strip_size_kb": 64, 00:08:51.390 "state": "configuring", 00:08:51.390 "raid_level": "raid0", 00:08:51.390 "superblock": false, 00:08:51.390 "num_base_bdevs": 3, 00:08:51.390 "num_base_bdevs_discovered": 2, 00:08:51.390 "num_base_bdevs_operational": 3, 00:08:51.390 "base_bdevs_list": [ 00:08:51.390 { 00:08:51.390 "name": "BaseBdev1", 00:08:51.390 "uuid": "91f6195d-cb6c-11ee-af6b-4feeebbbadda", 00:08:51.390 "is_configured": true, 00:08:51.390 "data_offset": 0, 00:08:51.390 "data_size": 65536 00:08:51.390 }, 00:08:51.390 { 00:08:51.390 "name": "BaseBdev2", 00:08:51.390 "uuid": "931d861e-cb6c-11ee-af6b-4feeebbbadda", 00:08:51.390 "is_configured": true, 00:08:51.390 "data_offset": 0, 00:08:51.390 "data_size": 65536 00:08:51.390 }, 00:08:51.390 { 00:08:51.390 "name": "BaseBdev3", 00:08:51.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.390 "is_configured": false, 00:08:51.390 "data_offset": 0, 00:08:51.390 "data_size": 0 00:08:51.390 } 00:08:51.390 ] 00:08:51.390 }' 00:08:51.390 19:09:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:51.390 19:09:28 -- common/autotest_common.sh@10 -- # set +x 00:08:51.649 19:09:28 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:51.907 [2024-02-14 19:09:29.211806] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.907 [2024-02-14 19:09:29.211834] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b8d1a00 00:08:51.907 [2024-02-14 19:09:29.211838] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:51.907 [2024-02-14 19:09:29.211864] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b934ec0 00:08:51.907 [2024-02-14 19:09:29.211964] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b8d1a00 00:08:51.907 [2024-02-14 19:09:29.211968] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b8d1a00 00:08:51.907 [2024-02-14 19:09:29.211996] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.907 BaseBdev3 00:08:51.907 19:09:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:51.907 19:09:29 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:08:51.907 19:09:29 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:51.907 19:09:29 -- common/autotest_common.sh@887 -- # local i 00:08:51.907 19:09:29 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:51.907 19:09:29 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:51.907 19:09:29 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:52.166 19:09:29 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:52.426 [ 00:08:52.426 { 00:08:52.426 "name": "BaseBdev3", 00:08:52.426 "aliases": [ 00:08:52.426 "93d4075a-cb6c-11ee-af6b-4feeebbbadda" 00:08:52.426 ], 00:08:52.426 "product_name": "Malloc disk", 00:08:52.426 "block_size": 512, 00:08:52.426 
"num_blocks": 65536, 00:08:52.426 "uuid": "93d4075a-cb6c-11ee-af6b-4feeebbbadda", 00:08:52.426 "assigned_rate_limits": { 00:08:52.426 "rw_ios_per_sec": 0, 00:08:52.426 "rw_mbytes_per_sec": 0, 00:08:52.426 "r_mbytes_per_sec": 0, 00:08:52.426 "w_mbytes_per_sec": 0 00:08:52.426 }, 00:08:52.426 "claimed": true, 00:08:52.426 "claim_type": "exclusive_write", 00:08:52.426 "zoned": false, 00:08:52.426 "supported_io_types": { 00:08:52.426 "read": true, 00:08:52.426 "write": true, 00:08:52.426 "unmap": true, 00:08:52.426 "write_zeroes": true, 00:08:52.426 "flush": true, 00:08:52.426 "reset": true, 00:08:52.426 "compare": false, 00:08:52.426 "compare_and_write": false, 00:08:52.426 "abort": true, 00:08:52.426 "nvme_admin": false, 00:08:52.426 "nvme_io": false 00:08:52.426 }, 00:08:52.426 "memory_domains": [ 00:08:52.426 { 00:08:52.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.426 "dma_device_type": 2 00:08:52.426 } 00:08:52.426 ], 00:08:52.426 "driver_specific": {} 00:08:52.426 } 00:08:52.426 ] 00:08:52.426 19:09:29 -- common/autotest_common.sh@893 -- # return 0 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:52.426 19:09:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.686 19:09:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:52.686 "name": "Existed_Raid", 00:08:52.686 "uuid": "93d40d9b-cb6c-11ee-af6b-4feeebbbadda", 00:08:52.686 "strip_size_kb": 64, 00:08:52.686 "state": "online", 00:08:52.686 "raid_level": "raid0", 00:08:52.686 "superblock": false, 00:08:52.686 "num_base_bdevs": 3, 00:08:52.686 "num_base_bdevs_discovered": 3, 00:08:52.686 "num_base_bdevs_operational": 3, 00:08:52.686 "base_bdevs_list": [ 00:08:52.686 { 00:08:52.686 "name": "BaseBdev1", 00:08:52.686 "uuid": "91f6195d-cb6c-11ee-af6b-4feeebbbadda", 00:08:52.686 "is_configured": true, 00:08:52.686 "data_offset": 0, 00:08:52.686 "data_size": 65536 00:08:52.686 }, 00:08:52.686 { 00:08:52.686 "name": "BaseBdev2", 00:08:52.686 "uuid": "931d861e-cb6c-11ee-af6b-4feeebbbadda", 00:08:52.686 "is_configured": true, 00:08:52.686 "data_offset": 0, 00:08:52.686 "data_size": 65536 00:08:52.686 }, 00:08:52.686 { 00:08:52.686 "name": "BaseBdev3", 00:08:52.686 "uuid": "93d4075a-cb6c-11ee-af6b-4feeebbbadda", 00:08:52.686 "is_configured": true, 00:08:52.686 "data_offset": 0, 00:08:52.686 "data_size": 65536 00:08:52.686 } 00:08:52.686 ] 00:08:52.686 }' 00:08:52.686 19:09:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:52.686 19:09:29 -- common/autotest_common.sh@10 -- 
# set +x 00:08:52.945 19:09:30 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:53.204 [2024-02-14 19:09:30.375757] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:53.204 [2024-02-14 19:09:30.375785] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.204 [2024-02-14 19:09:30.375799] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.204 19:09:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:53.204 19:09:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:08:53.204 19:09:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:53.204 19:09:30 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:53.204 19:09:30 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:08:53.204 19:09:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:53.204 19:09:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:53.204 19:09:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:08:53.204 19:09:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:53.204 19:09:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:53.204 19:09:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:53.205 19:09:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:53.205 19:09:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:53.205 19:09:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:53.205 19:09:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:53.205 19:09:30 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:53.205 19:09:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.205 19:09:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:53.205 "name": "Existed_Raid", 00:08:53.205 "uuid": "93d40d9b-cb6c-11ee-af6b-4feeebbbadda", 00:08:53.205 "strip_size_kb": 64, 00:08:53.205 "state": "offline", 00:08:53.205 "raid_level": "raid0", 00:08:53.205 "superblock": false, 00:08:53.205 "num_base_bdevs": 3, 00:08:53.205 "num_base_bdevs_discovered": 2, 00:08:53.205 "num_base_bdevs_operational": 2, 00:08:53.205 "base_bdevs_list": [ 00:08:53.205 { 00:08:53.205 "name": null, 00:08:53.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.205 "is_configured": false, 00:08:53.205 "data_offset": 0, 00:08:53.205 "data_size": 65536 00:08:53.205 }, 00:08:53.205 { 00:08:53.205 "name": "BaseBdev2", 00:08:53.205 "uuid": "931d861e-cb6c-11ee-af6b-4feeebbbadda", 00:08:53.205 "is_configured": true, 00:08:53.205 "data_offset": 0, 00:08:53.205 "data_size": 65536 00:08:53.205 }, 00:08:53.205 { 00:08:53.205 "name": "BaseBdev3", 00:08:53.205 "uuid": "93d4075a-cb6c-11ee-af6b-4feeebbbadda", 00:08:53.205 "is_configured": true, 00:08:53.205 "data_offset": 0, 00:08:53.205 "data_size": 65536 00:08:53.205 } 00:08:53.205 ] 00:08:53.205 }' 00:08:53.205 19:09:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:53.205 19:09:30 -- common/autotest_common.sh@10 -- # set +x 00:08:53.464 19:09:30 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:53.464 19:09:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:53.464 19:09:30 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:53.464 19:09:30 -- bdev/bdev_raid.sh@274 -- # 
jq -r '.[0]["name"]' 00:08:53.724 19:09:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:53.724 19:09:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:53.724 19:09:31 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:53.983 [2024-02-14 19:09:31.224717] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:53.983 19:09:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:53.983 19:09:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:53.983 19:09:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:53.983 19:09:31 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.242 19:09:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:54.242 19:09:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:54.242 19:09:31 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:54.501 [2024-02-14 19:09:31.689720] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:54.501 [2024-02-14 19:09:31.689752] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b8d1a00 name Existed_Raid, state offline 00:08:54.501 19:09:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:54.501 19:09:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:54.501 19:09:31 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.501 19:09:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:54.501 19:09:31 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:54.501 19:09:31 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:54.501 19:09:31 -- bdev/bdev_raid.sh@287 -- # killprocess 50425 00:08:54.501 19:09:31 -- common/autotest_common.sh@924 -- # '[' -z 50425 ']' 00:08:54.501 19:09:31 -- common/autotest_common.sh@928 -- # kill -0 50425 00:08:54.501 19:09:31 -- common/autotest_common.sh@929 -- # uname 00:08:54.501 19:09:31 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:08:54.501 19:09:31 -- common/autotest_common.sh@932 -- # ps -c -o command 50425 00:08:54.501 19:09:31 -- common/autotest_common.sh@932 -- # tail -1 00:08:54.502 19:09:31 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:08:54.502 19:09:31 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:08:54.502 killing process with pid 50425 00:08:54.502 19:09:31 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 50425' 00:08:54.502 19:09:31 -- common/autotest_common.sh@943 -- # kill 50425 00:08:54.502 [2024-02-14 19:09:31.912345] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.502 [2024-02-14 19:09:31.912390] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.502 19:09:31 -- common/autotest_common.sh@948 -- # wait 50425 00:08:54.760 19:09:32 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:54.760 00:08:54.760 real 0m8.454s 00:08:54.760 user 0m14.047s 00:08:54.760 sys 0m2.052s 00:08:54.760 19:09:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:54.760 19:09:32 -- common/autotest_common.sh@10 -- # set +x 00:08:54.760 ************************************ 00:08:54.760 END TEST raid_state_function_test 00:08:54.760 ************************************ 00:08:54.760 19:09:32 -- 
bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:54.761 19:09:32 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:08:54.761 19:09:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:54.761 19:09:32 -- common/autotest_common.sh@10 -- # set +x 00:08:55.020 ************************************ 00:08:55.020 START TEST raid_state_function_test_sb 00:08:55.020 ************************************ 00:08:55.020 19:09:32 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid0 3 true 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@226 -- # raid_pid=50658 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50658' 00:08:55.020 Process raid pid: 50658 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50658 /var/tmp/spdk-raid.sock 00:08:55.020 19:09:32 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:55.020 19:09:32 -- common/autotest_common.sh@817 -- # '[' -z 50658 ']' 00:08:55.020 19:09:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:55.020 19:09:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:55.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:55.020 19:09:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:08:55.020 19:09:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:55.020 19:09:32 -- common/autotest_common.sh@10 -- # set +x 00:08:55.020 [2024-02-14 19:09:32.195555] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:08:55.020 [2024-02-14 19:09:32.195758] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:55.588 EAL: TSC is not safe to use in SMP mode 00:08:55.588 EAL: TSC is not invariant 00:08:55.588 [2024-02-14 19:09:32.921087] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.847 [2024-02-14 19:09:33.031387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.847 [2024-02-14 19:09:33.031839] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.847 [2024-02-14 19:09:33.031843] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.848 19:09:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:55.848 19:09:33 -- common/autotest_common.sh@850 -- # return 0 00:08:55.848 19:09:33 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:56.106 [2024-02-14 19:09:33.406715] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.106 [2024-02-14 19:09:33.406777] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.106 [2024-02-14 19:09:33.406782] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.106 [2024-02-14 19:09:33.406789] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.106 [2024-02-14 19:09:33.406792] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.106 [2024-02-14 19:09:33.406798] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.106 19:09:33 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:56.106 19:09:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:56.106 19:09:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:56.106 19:09:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:56.106 19:09:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:56.106 19:09:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:56.106 19:09:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:56.106 19:09:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:56.106 19:09:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:56.106 19:09:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:56.106 19:09:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:56.106 19:09:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.364 19:09:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:56.364 "name": "Existed_Raid", 00:08:56.364 "uuid": "96542376-cb6c-11ee-af6b-4feeebbbadda", 00:08:56.364 "strip_size_kb": 64, 00:08:56.364 "state": "configuring", 00:08:56.364 "raid_level": "raid0", 00:08:56.364 "superblock": true, 00:08:56.364 "num_base_bdevs": 3, 00:08:56.364 "num_base_bdevs_discovered": 0, 
00:08:56.364 "num_base_bdevs_operational": 3, 00:08:56.364 "base_bdevs_list": [ 00:08:56.364 { 00:08:56.364 "name": "BaseBdev1", 00:08:56.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.364 "is_configured": false, 00:08:56.365 "data_offset": 0, 00:08:56.365 "data_size": 0 00:08:56.365 }, 00:08:56.365 { 00:08:56.365 "name": "BaseBdev2", 00:08:56.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.365 "is_configured": false, 00:08:56.365 "data_offset": 0, 00:08:56.365 "data_size": 0 00:08:56.365 }, 00:08:56.365 { 00:08:56.365 "name": "BaseBdev3", 00:08:56.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.365 "is_configured": false, 00:08:56.365 "data_offset": 0, 00:08:56.365 "data_size": 0 00:08:56.365 } 00:08:56.365 ] 00:08:56.365 }' 00:08:56.365 19:09:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:56.365 19:09:33 -- common/autotest_common.sh@10 -- # set +x 00:08:56.623 19:09:33 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:56.882 [2024-02-14 19:09:34.090718] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:56.882 [2024-02-14 19:09:34.090743] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b816500 name Existed_Raid, state configuring 00:08:56.882 19:09:34 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:56.882 [2024-02-14 19:09:34.298747] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.882 [2024-02-14 19:09:34.298796] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.882 [2024-02-14 19:09:34.298799] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.882 [2024-02-14 19:09:34.298806] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.882 [2024-02-14 19:09:34.298809] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.882 [2024-02-14 19:09:34.298815] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.141 19:09:34 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.141 [2024-02-14 19:09:34.519934] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.141 BaseBdev1 00:08:57.141 19:09:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:57.141 19:09:34 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:08:57.141 19:09:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:57.141 19:09:34 -- common/autotest_common.sh@887 -- # local i 00:08:57.141 19:09:34 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:57.141 19:09:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:57.141 19:09:34 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:57.400 19:09:34 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.659 [ 00:08:57.659 { 00:08:57.659 "name": "BaseBdev1", 00:08:57.659 "aliases": [ 00:08:57.659 
"96fdd28a-cb6c-11ee-af6b-4feeebbbadda" 00:08:57.659 ], 00:08:57.659 "product_name": "Malloc disk", 00:08:57.659 "block_size": 512, 00:08:57.659 "num_blocks": 65536, 00:08:57.659 "uuid": "96fdd28a-cb6c-11ee-af6b-4feeebbbadda", 00:08:57.659 "assigned_rate_limits": { 00:08:57.659 "rw_ios_per_sec": 0, 00:08:57.659 "rw_mbytes_per_sec": 0, 00:08:57.659 "r_mbytes_per_sec": 0, 00:08:57.659 "w_mbytes_per_sec": 0 00:08:57.659 }, 00:08:57.659 "claimed": true, 00:08:57.659 "claim_type": "exclusive_write", 00:08:57.659 "zoned": false, 00:08:57.659 "supported_io_types": { 00:08:57.659 "read": true, 00:08:57.659 "write": true, 00:08:57.659 "unmap": true, 00:08:57.659 "write_zeroes": true, 00:08:57.659 "flush": true, 00:08:57.659 "reset": true, 00:08:57.659 "compare": false, 00:08:57.659 "compare_and_write": false, 00:08:57.659 "abort": true, 00:08:57.659 "nvme_admin": false, 00:08:57.659 "nvme_io": false 00:08:57.659 }, 00:08:57.659 "memory_domains": [ 00:08:57.659 { 00:08:57.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.659 "dma_device_type": 2 00:08:57.659 } 00:08:57.659 ], 00:08:57.659 "driver_specific": {} 00:08:57.659 } 00:08:57.659 ] 00:08:57.659 19:09:35 -- common/autotest_common.sh@893 -- # return 0 00:08:57.659 19:09:35 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:57.659 19:09:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:57.659 19:09:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:57.659 19:09:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:57.659 19:09:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:57.659 19:09:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:57.659 19:09:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:57.659 19:09:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:57.659 19:09:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:57.659 19:09:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:57.659 19:09:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:57.659 19:09:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.918 19:09:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:57.918 "name": "Existed_Raid", 00:08:57.918 "uuid": "96dc4099-cb6c-11ee-af6b-4feeebbbadda", 00:08:57.918 "strip_size_kb": 64, 00:08:57.918 "state": "configuring", 00:08:57.918 "raid_level": "raid0", 00:08:57.918 "superblock": true, 00:08:57.918 "num_base_bdevs": 3, 00:08:57.918 "num_base_bdevs_discovered": 1, 00:08:57.918 "num_base_bdevs_operational": 3, 00:08:57.918 "base_bdevs_list": [ 00:08:57.918 { 00:08:57.918 "name": "BaseBdev1", 00:08:57.918 "uuid": "96fdd28a-cb6c-11ee-af6b-4feeebbbadda", 00:08:57.918 "is_configured": true, 00:08:57.918 "data_offset": 2048, 00:08:57.918 "data_size": 63488 00:08:57.918 }, 00:08:57.918 { 00:08:57.918 "name": "BaseBdev2", 00:08:57.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.918 "is_configured": false, 00:08:57.918 "data_offset": 0, 00:08:57.918 "data_size": 0 00:08:57.918 }, 00:08:57.918 { 00:08:57.918 "name": "BaseBdev3", 00:08:57.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.918 "is_configured": false, 00:08:57.918 "data_offset": 0, 00:08:57.918 "data_size": 0 00:08:57.918 } 00:08:57.918 ] 00:08:57.918 }' 00:08:57.918 19:09:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:57.918 19:09:35 -- 
common/autotest_common.sh@10 -- # set +x 00:08:58.177 19:09:35 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:58.437 [2024-02-14 19:09:35.726780] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.437 [2024-02-14 19:09:35.726811] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b816500 name Existed_Raid, state configuring 00:08:58.437 19:09:35 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:08:58.437 19:09:35 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:58.696 19:09:35 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.954 BaseBdev1 00:08:58.954 19:09:36 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:08:58.954 19:09:36 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:08:58.954 19:09:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:08:58.954 19:09:36 -- common/autotest_common.sh@887 -- # local i 00:08:58.954 19:09:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:08:58.954 19:09:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:08:58.954 19:09:36 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:59.213 19:09:36 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.213 [ 00:08:59.213 { 00:08:59.213 "name": "BaseBdev1", 00:08:59.213 "aliases": [ 00:08:59.213 "97fa0aa9-cb6c-11ee-af6b-4feeebbbadda" 00:08:59.213 ], 00:08:59.213 "product_name": "Malloc disk", 00:08:59.213 "block_size": 512, 00:08:59.213 "num_blocks": 65536, 00:08:59.213 "uuid": "97fa0aa9-cb6c-11ee-af6b-4feeebbbadda", 00:08:59.213 "assigned_rate_limits": { 00:08:59.213 "rw_ios_per_sec": 0, 00:08:59.213 "rw_mbytes_per_sec": 0, 00:08:59.213 "r_mbytes_per_sec": 0, 00:08:59.213 "w_mbytes_per_sec": 0 00:08:59.213 }, 00:08:59.213 "claimed": false, 00:08:59.213 "zoned": false, 00:08:59.213 "supported_io_types": { 00:08:59.213 "read": true, 00:08:59.213 "write": true, 00:08:59.213 "unmap": true, 00:08:59.213 "write_zeroes": true, 00:08:59.213 "flush": true, 00:08:59.213 "reset": true, 00:08:59.213 "compare": false, 00:08:59.213 "compare_and_write": false, 00:08:59.213 "abort": true, 00:08:59.213 "nvme_admin": false, 00:08:59.213 "nvme_io": false 00:08:59.213 }, 00:08:59.213 "memory_domains": [ 00:08:59.213 { 00:08:59.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.213 "dma_device_type": 2 00:08:59.213 } 00:08:59.213 ], 00:08:59.213 "driver_specific": {} 00:08:59.213 } 00:08:59.213 ] 00:08:59.213 19:09:36 -- common/autotest_common.sh@893 -- # return 0 00:08:59.213 19:09:36 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:59.473 [2024-02-14 19:09:36.767718] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.473 [2024-02-14 19:09:36.768388] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.473 [2024-02-14 19:09:36.768431] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:08:59.473 [2024-02-14 19:09:36.768436] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.473 [2024-02-14 19:09:36.768443] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.473 19:09:36 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:59.732 19:09:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:59.732 "name": "Existed_Raid", 00:08:59.732 "uuid": "9854fccc-cb6c-11ee-af6b-4feeebbbadda", 00:08:59.732 "strip_size_kb": 64, 00:08:59.732 "state": "configuring", 00:08:59.732 "raid_level": "raid0", 00:08:59.732 "superblock": true, 00:08:59.732 "num_base_bdevs": 3, 00:08:59.732 "num_base_bdevs_discovered": 1, 00:08:59.732 "num_base_bdevs_operational": 3, 00:08:59.732 "base_bdevs_list": [ 00:08:59.732 { 00:08:59.732 "name": "BaseBdev1", 00:08:59.732 "uuid": "97fa0aa9-cb6c-11ee-af6b-4feeebbbadda", 00:08:59.732 "is_configured": true, 00:08:59.732 "data_offset": 2048, 00:08:59.732 "data_size": 63488 00:08:59.732 }, 00:08:59.732 { 00:08:59.732 "name": "BaseBdev2", 00:08:59.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.732 "is_configured": false, 00:08:59.732 "data_offset": 0, 00:08:59.732 "data_size": 0 00:08:59.732 }, 00:08:59.732 { 00:08:59.732 "name": "BaseBdev3", 00:08:59.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.732 "is_configured": false, 00:08:59.732 "data_offset": 0, 00:08:59.732 "data_size": 0 00:08:59.732 } 00:08:59.732 ] 00:08:59.732 }' 00:08:59.732 19:09:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:59.732 19:09:37 -- common/autotest_common.sh@10 -- # set +x 00:08:59.992 19:09:37 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.251 [2024-02-14 19:09:37.491878] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.251 BaseBdev2 00:09:00.251 19:09:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:00.251 19:09:37 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:09:00.251 19:09:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:00.251 19:09:37 -- common/autotest_common.sh@887 -- # local i 00:09:00.251 19:09:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:00.251 19:09:37 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:00.251 
19:09:37 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:00.530 19:09:37 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.530 [ 00:09:00.530 { 00:09:00.530 "name": "BaseBdev2", 00:09:00.531 "aliases": [ 00:09:00.531 "98c37704-cb6c-11ee-af6b-4feeebbbadda" 00:09:00.531 ], 00:09:00.531 "product_name": "Malloc disk", 00:09:00.531 "block_size": 512, 00:09:00.531 "num_blocks": 65536, 00:09:00.531 "uuid": "98c37704-cb6c-11ee-af6b-4feeebbbadda", 00:09:00.531 "assigned_rate_limits": { 00:09:00.531 "rw_ios_per_sec": 0, 00:09:00.531 "rw_mbytes_per_sec": 0, 00:09:00.531 "r_mbytes_per_sec": 0, 00:09:00.531 "w_mbytes_per_sec": 0 00:09:00.531 }, 00:09:00.531 "claimed": true, 00:09:00.531 "claim_type": "exclusive_write", 00:09:00.531 "zoned": false, 00:09:00.531 "supported_io_types": { 00:09:00.531 "read": true, 00:09:00.531 "write": true, 00:09:00.531 "unmap": true, 00:09:00.531 "write_zeroes": true, 00:09:00.531 "flush": true, 00:09:00.531 "reset": true, 00:09:00.531 "compare": false, 00:09:00.531 "compare_and_write": false, 00:09:00.531 "abort": true, 00:09:00.531 "nvme_admin": false, 00:09:00.531 "nvme_io": false 00:09:00.531 }, 00:09:00.531 "memory_domains": [ 00:09:00.531 { 00:09:00.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.531 "dma_device_type": 2 00:09:00.531 } 00:09:00.531 ], 00:09:00.531 "driver_specific": {} 00:09:00.531 } 00:09:00.531 ] 00:09:00.531 19:09:37 -- common/autotest_common.sh@893 -- # return 0 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:00.531 19:09:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.804 19:09:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:00.804 "name": "Existed_Raid", 00:09:00.804 "uuid": "9854fccc-cb6c-11ee-af6b-4feeebbbadda", 00:09:00.804 "strip_size_kb": 64, 00:09:00.804 "state": "configuring", 00:09:00.804 "raid_level": "raid0", 00:09:00.804 "superblock": true, 00:09:00.804 "num_base_bdevs": 3, 00:09:00.804 "num_base_bdevs_discovered": 2, 00:09:00.804 "num_base_bdevs_operational": 3, 00:09:00.804 "base_bdevs_list": [ 00:09:00.804 { 00:09:00.804 "name": "BaseBdev1", 00:09:00.804 "uuid": "97fa0aa9-cb6c-11ee-af6b-4feeebbbadda", 00:09:00.804 "is_configured": true, 00:09:00.804 "data_offset": 2048, 00:09:00.804 "data_size": 63488 00:09:00.804 }, 00:09:00.804 { 
00:09:00.804 "name": "BaseBdev2", 00:09:00.805 "uuid": "98c37704-cb6c-11ee-af6b-4feeebbbadda", 00:09:00.805 "is_configured": true, 00:09:00.805 "data_offset": 2048, 00:09:00.805 "data_size": 63488 00:09:00.805 }, 00:09:00.805 { 00:09:00.805 "name": "BaseBdev3", 00:09:00.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.805 "is_configured": false, 00:09:00.805 "data_offset": 0, 00:09:00.805 "data_size": 0 00:09:00.805 } 00:09:00.805 ] 00:09:00.805 }' 00:09:00.805 19:09:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:00.805 19:09:38 -- common/autotest_common.sh@10 -- # set +x 00:09:01.063 19:09:38 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:01.321 [2024-02-14 19:09:38.595927] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.322 [2024-02-14 19:09:38.596025] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b816a00 00:09:01.322 [2024-02-14 19:09:38.596030] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:01.322 [2024-02-14 19:09:38.596046] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b879ec0 00:09:01.322 [2024-02-14 19:09:38.596103] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b816a00 00:09:01.322 [2024-02-14 19:09:38.596106] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b816a00 00:09:01.322 [2024-02-14 19:09:38.596121] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.322 BaseBdev3 00:09:01.322 19:09:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:09:01.322 19:09:38 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:09:01.322 19:09:38 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:01.322 19:09:38 -- common/autotest_common.sh@887 -- # local i 00:09:01.322 19:09:38 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:01.322 19:09:38 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:01.322 19:09:38 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:01.581 19:09:38 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:01.581 [ 00:09:01.581 { 00:09:01.581 "name": "BaseBdev3", 00:09:01.581 "aliases": [ 00:09:01.581 "996bef5e-cb6c-11ee-af6b-4feeebbbadda" 00:09:01.581 ], 00:09:01.581 "product_name": "Malloc disk", 00:09:01.581 "block_size": 512, 00:09:01.581 "num_blocks": 65536, 00:09:01.581 "uuid": "996bef5e-cb6c-11ee-af6b-4feeebbbadda", 00:09:01.581 "assigned_rate_limits": { 00:09:01.581 "rw_ios_per_sec": 0, 00:09:01.581 "rw_mbytes_per_sec": 0, 00:09:01.581 "r_mbytes_per_sec": 0, 00:09:01.581 "w_mbytes_per_sec": 0 00:09:01.581 }, 00:09:01.581 "claimed": true, 00:09:01.581 "claim_type": "exclusive_write", 00:09:01.581 "zoned": false, 00:09:01.581 "supported_io_types": { 00:09:01.581 "read": true, 00:09:01.581 "write": true, 00:09:01.581 "unmap": true, 00:09:01.581 "write_zeroes": true, 00:09:01.581 "flush": true, 00:09:01.581 "reset": true, 00:09:01.581 "compare": false, 00:09:01.581 "compare_and_write": false, 00:09:01.581 "abort": true, 00:09:01.581 "nvme_admin": false, 00:09:01.581 "nvme_io": false 00:09:01.581 }, 00:09:01.581 "memory_domains": [ 00:09:01.581 { 00:09:01.581 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.581 "dma_device_type": 2 00:09:01.581 } 00:09:01.581 ], 00:09:01.581 "driver_specific": {} 00:09:01.581 } 00:09:01.581 ] 00:09:01.581 19:09:38 -- common/autotest_common.sh@893 -- # return 0 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:01.581 19:09:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.840 19:09:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:01.840 "name": "Existed_Raid", 00:09:01.840 "uuid": "9854fccc-cb6c-11ee-af6b-4feeebbbadda", 00:09:01.840 "strip_size_kb": 64, 00:09:01.840 "state": "online", 00:09:01.840 "raid_level": "raid0", 00:09:01.840 "superblock": true, 00:09:01.840 "num_base_bdevs": 3, 00:09:01.840 "num_base_bdevs_discovered": 3, 00:09:01.840 "num_base_bdevs_operational": 3, 00:09:01.840 "base_bdevs_list": [ 00:09:01.840 { 00:09:01.840 "name": "BaseBdev1", 00:09:01.840 "uuid": "97fa0aa9-cb6c-11ee-af6b-4feeebbbadda", 00:09:01.840 "is_configured": true, 00:09:01.840 "data_offset": 2048, 00:09:01.840 "data_size": 63488 00:09:01.840 }, 00:09:01.840 { 00:09:01.840 "name": "BaseBdev2", 00:09:01.840 "uuid": "98c37704-cb6c-11ee-af6b-4feeebbbadda", 00:09:01.840 "is_configured": true, 00:09:01.840 "data_offset": 2048, 00:09:01.840 "data_size": 63488 00:09:01.840 }, 00:09:01.840 { 00:09:01.840 "name": "BaseBdev3", 00:09:01.840 "uuid": "996bef5e-cb6c-11ee-af6b-4feeebbbadda", 00:09:01.840 "is_configured": true, 00:09:01.840 "data_offset": 2048, 00:09:01.840 "data_size": 63488 00:09:01.840 } 00:09:01.840 ] 00:09:01.840 }' 00:09:01.840 19:09:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:01.840 19:09:39 -- common/autotest_common.sh@10 -- # set +x 00:09:02.099 19:09:39 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:02.358 [2024-02-14 19:09:39.735863] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:02.358 [2024-02-14 19:09:39.735886] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.358 [2024-02-14 19:09:39.735898] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:02.358 19:09:39 -- 
bdev/bdev_raid.sh@265 -- # expected_state=offline 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:02.358 19:09:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.617 19:09:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:02.617 "name": "Existed_Raid", 00:09:02.617 "uuid": "9854fccc-cb6c-11ee-af6b-4feeebbbadda", 00:09:02.617 "strip_size_kb": 64, 00:09:02.617 "state": "offline", 00:09:02.617 "raid_level": "raid0", 00:09:02.617 "superblock": true, 00:09:02.617 "num_base_bdevs": 3, 00:09:02.617 "num_base_bdevs_discovered": 2, 00:09:02.617 "num_base_bdevs_operational": 2, 00:09:02.617 "base_bdevs_list": [ 00:09:02.617 { 00:09:02.617 "name": null, 00:09:02.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.617 "is_configured": false, 00:09:02.617 "data_offset": 2048, 00:09:02.617 "data_size": 63488 00:09:02.617 }, 00:09:02.617 { 00:09:02.617 "name": "BaseBdev2", 00:09:02.617 "uuid": "98c37704-cb6c-11ee-af6b-4feeebbbadda", 00:09:02.617 "is_configured": true, 00:09:02.617 "data_offset": 2048, 00:09:02.617 "data_size": 63488 00:09:02.617 }, 00:09:02.617 { 00:09:02.617 "name": "BaseBdev3", 00:09:02.617 "uuid": "996bef5e-cb6c-11ee-af6b-4feeebbbadda", 00:09:02.617 "is_configured": true, 00:09:02.617 "data_offset": 2048, 00:09:02.617 "data_size": 63488 00:09:02.617 } 00:09:02.617 ] 00:09:02.617 }' 00:09:02.617 19:09:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:02.617 19:09:39 -- common/autotest_common.sh@10 -- # set +x 00:09:02.876 19:09:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:02.876 19:09:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:02.876 19:09:40 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:02.876 19:09:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:03.136 19:09:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:03.136 19:09:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:03.136 19:09:40 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:03.395 [2024-02-14 19:09:40.592831] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:03.395 19:09:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:03.395 19:09:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:03.395 19:09:40 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:03.395 19:09:40 -- bdev/bdev_raid.sh@274 -- # jq -r 
'.[0]["name"]' 00:09:03.654 19:09:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:03.654 19:09:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:03.654 19:09:40 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:03.914 [2024-02-14 19:09:41.089755] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:03.914 [2024-02-14 19:09:41.089779] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b816a00 name Existed_Raid, state offline 00:09:03.914 19:09:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:03.914 19:09:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:03.914 19:09:41 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:03.914 19:09:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:03.914 19:09:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:03.914 19:09:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:03.914 19:09:41 -- bdev/bdev_raid.sh@287 -- # killprocess 50658 00:09:03.914 19:09:41 -- common/autotest_common.sh@924 -- # '[' -z 50658 ']' 00:09:03.914 19:09:41 -- common/autotest_common.sh@928 -- # kill -0 50658 00:09:03.914 19:09:41 -- common/autotest_common.sh@929 -- # uname 00:09:03.914 19:09:41 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:09:03.914 19:09:41 -- common/autotest_common.sh@932 -- # ps -c -o command 50658 00:09:03.914 19:09:41 -- common/autotest_common.sh@932 -- # tail -1 00:09:03.914 19:09:41 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:09:03.914 killing process with pid 50658 00:09:03.914 19:09:41 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:09:03.914 19:09:41 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 50658' 00:09:03.914 19:09:41 -- common/autotest_common.sh@943 -- # kill 50658 00:09:03.914 [2024-02-14 19:09:41.308611] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:03.914 [2024-02-14 19:09:41.308655] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.914 19:09:41 -- common/autotest_common.sh@948 -- # wait 50658 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:04.174 00:09:04.174 real 0m9.355s 00:09:04.174 user 0m15.717s 00:09:04.174 sys 0m2.146s 00:09:04.174 ************************************ 00:09:04.174 END TEST raid_state_function_test_sb 00:09:04.174 ************************************ 00:09:04.174 19:09:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:04.174 19:09:41 -- common/autotest_common.sh@10 -- # set +x 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:04.174 19:09:41 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:09:04.174 19:09:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:04.174 19:09:41 -- common/autotest_common.sh@10 -- # set +x 00:09:04.174 ************************************ 00:09:04.174 START TEST raid_superblock_test 00:09:04.174 ************************************ 00:09:04.174 19:09:41 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid0 3 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:09:04.174 19:09:41 -- 
bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@357 -- # raid_pid=50894 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@358 -- # waitforlisten 50894 /var/tmp/spdk-raid.sock 00:09:04.174 19:09:41 -- common/autotest_common.sh@817 -- # '[' -z 50894 ']' 00:09:04.174 19:09:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:04.174 19:09:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:04.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:04.174 19:09:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:04.174 19:09:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:04.174 19:09:41 -- common/autotest_common.sh@10 -- # set +x 00:09:04.174 19:09:41 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:04.433 [2024-02-14 19:09:41.599610] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:09:04.433 [2024-02-14 19:09:41.599932] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:05.002 EAL: TSC is not safe to use in SMP mode 00:09:05.002 EAL: TSC is not invariant 00:09:05.002 [2024-02-14 19:09:42.320483] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.261 [2024-02-14 19:09:42.431254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.261 [2024-02-14 19:09:42.431724] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.261 [2024-02-14 19:09:42.431729] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.261 19:09:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:05.261 19:09:42 -- common/autotest_common.sh@850 -- # return 0 00:09:05.261 19:09:42 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:09:05.261 19:09:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:05.261 19:09:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:09:05.261 19:09:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:09:05.261 19:09:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:05.261 19:09:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:05.261 19:09:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:05.261 19:09:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:05.261 19:09:42 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:05.519 malloc1 00:09:05.520 19:09:42 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:05.779 [2024-02-14 19:09:42.958564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:05.779 [2024-02-14 19:09:42.958631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.779 [2024-02-14 19:09:42.959198] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c90a780 00:09:05.779 [2024-02-14 19:09:42.959223] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.779 [2024-02-14 19:09:42.960214] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.779 [2024-02-14 19:09:42.960241] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:05.779 pt1 00:09:05.779 19:09:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:05.779 19:09:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:05.779 19:09:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:09:05.779 19:09:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:09:05.779 19:09:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:05.779 19:09:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:05.779 19:09:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:05.779 19:09:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:05.779 19:09:42 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:05.779 malloc2 00:09:05.779 19:09:43 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:06.042 [2024-02-14 19:09:43.370587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:06.043 [2024-02-14 19:09:43.370659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.043 [2024-02-14 19:09:43.370688] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c90ac80 00:09:06.043 [2024-02-14 19:09:43.370695] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.043 [2024-02-14 19:09:43.371374] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.043 [2024-02-14 19:09:43.371397] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:06.043 pt2 00:09:06.043 19:09:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:06.043 19:09:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:06.043 19:09:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:09:06.043 19:09:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:09:06.043 19:09:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:06.043 19:09:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:06.043 19:09:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:06.043 19:09:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:06.043 19:09:43 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:06.306 malloc3 00:09:06.306 19:09:43 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:06.565 [2024-02-14 19:09:43.734599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:06.565 [2024-02-14 19:09:43.734668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.565 [2024-02-14 19:09:43.734698] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c90b180 00:09:06.565 [2024-02-14 19:09:43.734705] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.565 [2024-02-14 19:09:43.735387] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.565 [2024-02-14 19:09:43.735416] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:06.565 pt3 00:09:06.565 19:09:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:06.565 19:09:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:06.565 19:09:43 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:09:06.824 [2024-02-14 19:09:43.990617] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:06.824 [2024-02-14 19:09:43.991231] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:06.824 [2024-02-14 19:09:43.991250] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:06.824 [2024-02-14 19:09:43.991296] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c90b400 00:09:06.824 [2024-02-14 19:09:43.991300] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:06.824 [2024-02-14 19:09:43.991331] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c96de20 00:09:06.824 [2024-02-14 19:09:43.991396] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c90b400 00:09:06.824 [2024-02-14 19:09:43.991399] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c90b400 00:09:06.824 [2024-02-14 19:09:43.991417] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.824 19:09:44 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:06.824 19:09:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:06.824 19:09:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:06.824 19:09:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:06.824 19:09:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:06.824 19:09:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:06.824 19:09:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:06.824 19:09:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:06.825 19:09:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:06.825 19:09:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:06.825 19:09:44 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.825 19:09:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.825 19:09:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:06.825 "name": "raid_bdev1", 00:09:06.825 "uuid": "9ca31d45-cb6c-11ee-af6b-4feeebbbadda", 00:09:06.825 "strip_size_kb": 64, 00:09:06.825 "state": "online", 00:09:06.825 "raid_level": "raid0", 00:09:06.825 "superblock": true, 00:09:06.825 "num_base_bdevs": 3, 00:09:06.825 "num_base_bdevs_discovered": 3, 00:09:06.825 "num_base_bdevs_operational": 3, 00:09:06.825 "base_bdevs_list": [ 00:09:06.825 { 00:09:06.825 "name": "pt1", 00:09:06.825 "uuid": "7ea93164-1d73-005e-9679-1f8a88de7dfa", 00:09:06.825 "is_configured": true, 00:09:06.825 "data_offset": 2048, 00:09:06.825 "data_size": 63488 00:09:06.825 }, 00:09:06.825 { 00:09:06.825 "name": "pt2", 00:09:06.825 "uuid": "c07a9539-3d65-4950-8e09-180454d737f4", 00:09:06.825 "is_configured": true, 00:09:06.825 "data_offset": 2048, 00:09:06.825 "data_size": 63488 00:09:06.825 }, 00:09:06.825 { 00:09:06.825 "name": "pt3", 00:09:06.825 "uuid": "e8e672c5-ea81-fc54-8a41-1077d054bc2e", 00:09:06.825 "is_configured": true, 00:09:06.825 "data_offset": 2048, 00:09:06.825 "data_size": 63488 00:09:06.825 } 00:09:06.825 ] 00:09:06.825 }' 00:09:06.825 19:09:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:06.825 19:09:44 -- common/autotest_common.sh@10 -- # set +x 00:09:07.084 19:09:44 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:09:07.084 19:09:44 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:07.343 [2024-02-14 19:09:44.678640] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.343 19:09:44 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=9ca31d45-cb6c-11ee-af6b-4feeebbbadda 00:09:07.343 19:09:44 -- bdev/bdev_raid.sh@380 -- # '[' -z 9ca31d45-cb6c-11ee-af6b-4feeebbbadda ']' 00:09:07.343 19:09:44 -- bdev/bdev_raid.sh@385 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:07.602 [2024-02-14 19:09:44.862615] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.602 [2024-02-14 19:09:44.862632] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.602 [2024-02-14 19:09:44.862645] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.602 [2024-02-14 19:09:44.862674] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.602 [2024-02-14 19:09:44.862678] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c90b400 name raid_bdev1, state offline 00:09:07.602 19:09:44 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:07.602 19:09:44 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:09:07.862 19:09:45 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:09:07.862 19:09:45 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:09:07.862 19:09:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:07.862 19:09:45 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:08.121 19:09:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:08.121 19:09:45 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:08.121 19:09:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:08.121 19:09:45 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:08.380 19:09:45 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:08.380 19:09:45 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:08.639 19:09:45 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:09:08.639 19:09:45 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:08.639 19:09:45 -- common/autotest_common.sh@638 -- # local es=0 00:09:08.639 19:09:45 -- common/autotest_common.sh@640 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:08.639 19:09:45 -- common/autotest_common.sh@626 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.639 19:09:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:08.639 19:09:45 -- common/autotest_common.sh@630 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.639 19:09:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:08.639 19:09:45 -- common/autotest_common.sh@632 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.639 19:09:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:08.639 19:09:45 -- common/autotest_common.sh@632 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.639 19:09:45 -- common/autotest_common.sh@632 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:08.639 19:09:45 -- common/autotest_common.sh@641 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:08.898 [2024-02-14 19:09:46.182684] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:08.898 [2024-02-14 19:09:46.183416] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:08.898 [2024-02-14 19:09:46.183434] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:08.898 [2024-02-14 19:09:46.183446] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:09:08.899 [2024-02-14 19:09:46.183484] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:09:08.899 [2024-02-14 19:09:46.183493] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:09:08.899 [2024-02-14 19:09:46.183500] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:08.899 [2024-02-14 19:09:46.183504] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c90b180 name raid_bdev1, state configuring 00:09:08.899 request: 00:09:08.899 { 00:09:08.899 "name": "raid_bdev1", 00:09:08.899 "raid_level": "raid0", 00:09:08.899 "base_bdevs": [ 00:09:08.899 "malloc1", 00:09:08.899 "malloc2", 00:09:08.899 "malloc3" 00:09:08.899 ], 00:09:08.899 "superblock": false, 00:09:08.899 "strip_size_kb": 64, 00:09:08.899 "method": "bdev_raid_create", 00:09:08.899 "req_id": 1 00:09:08.899 } 00:09:08.899 Got JSON-RPC error response 00:09:08.899 response: 00:09:08.899 { 00:09:08.899 "code": -17, 00:09:08.899 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:08.899 } 00:09:08.899 19:09:46 -- common/autotest_common.sh@641 -- # es=1 00:09:08.899 19:09:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:08.899 19:09:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:08.899 19:09:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:08.899 19:09:46 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:08.899 19:09:46 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:09:09.157 19:09:46 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:09:09.157 19:09:46 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:09:09.157 19:09:46 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:09.416 [2024-02-14 19:09:46.722709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:09.416 [2024-02-14 19:09:46.722768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.416 [2024-02-14 19:09:46.722800] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c90ac80 00:09:09.416 [2024-02-14 19:09:46.722807] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.416 [2024-02-14 19:09:46.723575] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.416 [2024-02-14 19:09:46.723602] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:09.416 [2024-02-14 19:09:46.723622] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:09.416 [2024-02-14 
19:09:46.723633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:09.416 pt1 00:09:09.416 19:09:46 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:09.416 19:09:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:09.416 19:09:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:09.416 19:09:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:09.416 19:09:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:09.416 19:09:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:09.416 19:09:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:09.416 19:09:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:09.416 19:09:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:09.416 19:09:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:09.416 19:09:46 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:09.416 19:09:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.674 19:09:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:09.674 "name": "raid_bdev1", 00:09:09.674 "uuid": "9ca31d45-cb6c-11ee-af6b-4feeebbbadda", 00:09:09.674 "strip_size_kb": 64, 00:09:09.674 "state": "configuring", 00:09:09.674 "raid_level": "raid0", 00:09:09.674 "superblock": true, 00:09:09.674 "num_base_bdevs": 3, 00:09:09.674 "num_base_bdevs_discovered": 1, 00:09:09.674 "num_base_bdevs_operational": 3, 00:09:09.674 "base_bdevs_list": [ 00:09:09.674 { 00:09:09.674 "name": "pt1", 00:09:09.674 "uuid": "7ea93164-1d73-005e-9679-1f8a88de7dfa", 00:09:09.674 "is_configured": true, 00:09:09.674 "data_offset": 2048, 00:09:09.674 "data_size": 63488 00:09:09.674 }, 00:09:09.674 { 00:09:09.674 "name": null, 00:09:09.674 "uuid": "c07a9539-3d65-4950-8e09-180454d737f4", 00:09:09.674 "is_configured": false, 00:09:09.674 "data_offset": 2048, 00:09:09.674 "data_size": 63488 00:09:09.674 }, 00:09:09.674 { 00:09:09.674 "name": null, 00:09:09.674 "uuid": "e8e672c5-ea81-fc54-8a41-1077d054bc2e", 00:09:09.674 "is_configured": false, 00:09:09.674 "data_offset": 2048, 00:09:09.674 "data_size": 63488 00:09:09.674 } 00:09:09.674 ] 00:09:09.674 }' 00:09:09.674 19:09:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:09.674 19:09:46 -- common/autotest_common.sh@10 -- # set +x 00:09:09.933 19:09:47 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:09:09.933 19:09:47 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:10.195 [2024-02-14 19:09:47.434728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:10.195 [2024-02-14 19:09:47.434801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.195 [2024-02-14 19:09:47.434831] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c90b680 00:09:10.195 [2024-02-14 19:09:47.434838] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.195 [2024-02-14 19:09:47.434943] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.195 [2024-02-14 19:09:47.434951] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:10.195 [2024-02-14 19:09:47.434968] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid 
superblock found on bdev pt2 00:09:10.195 [2024-02-14 19:09:47.434974] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:10.195 pt2 00:09:10.195 19:09:47 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:10.457 [2024-02-14 19:09:47.698748] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:10.457 19:09:47 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:10.457 19:09:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:10.457 19:09:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:10.457 19:09:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:10.457 19:09:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:10.457 19:09:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:10.457 19:09:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:10.457 19:09:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:10.457 19:09:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:10.457 19:09:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:10.457 19:09:47 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.457 19:09:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.715 19:09:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:10.715 "name": "raid_bdev1", 00:09:10.715 "uuid": "9ca31d45-cb6c-11ee-af6b-4feeebbbadda", 00:09:10.715 "strip_size_kb": 64, 00:09:10.715 "state": "configuring", 00:09:10.715 "raid_level": "raid0", 00:09:10.715 "superblock": true, 00:09:10.715 "num_base_bdevs": 3, 00:09:10.715 "num_base_bdevs_discovered": 1, 00:09:10.715 "num_base_bdevs_operational": 3, 00:09:10.715 "base_bdevs_list": [ 00:09:10.715 { 00:09:10.715 "name": "pt1", 00:09:10.715 "uuid": "7ea93164-1d73-005e-9679-1f8a88de7dfa", 00:09:10.715 "is_configured": true, 00:09:10.715 "data_offset": 2048, 00:09:10.715 "data_size": 63488 00:09:10.715 }, 00:09:10.715 { 00:09:10.715 "name": null, 00:09:10.715 "uuid": "c07a9539-3d65-4950-8e09-180454d737f4", 00:09:10.715 "is_configured": false, 00:09:10.715 "data_offset": 2048, 00:09:10.715 "data_size": 63488 00:09:10.715 }, 00:09:10.715 { 00:09:10.716 "name": null, 00:09:10.716 "uuid": "e8e672c5-ea81-fc54-8a41-1077d054bc2e", 00:09:10.716 "is_configured": false, 00:09:10.716 "data_offset": 2048, 00:09:10.716 "data_size": 63488 00:09:10.716 } 00:09:10.716 ] 00:09:10.716 }' 00:09:10.716 19:09:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:10.716 19:09:47 -- common/autotest_common.sh@10 -- # set +x 00:09:10.974 19:09:48 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:09:10.974 19:09:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:10.974 19:09:48 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:11.233 [2024-02-14 19:09:48.422776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:11.233 [2024-02-14 19:09:48.422852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.233 [2024-02-14 19:09:48.422898] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c90b680 00:09:11.233 [2024-02-14 19:09:48.422905] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.233 [2024-02-14 19:09:48.423032] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.233 [2024-02-14 19:09:48.423040] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:11.233 [2024-02-14 19:09:48.423058] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:11.233 [2024-02-14 19:09:48.423065] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:11.233 pt2 00:09:11.233 19:09:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:11.233 19:09:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:11.233 19:09:48 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:11.497 [2024-02-14 19:09:48.686797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:11.497 [2024-02-14 19:09:48.686851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.497 [2024-02-14 19:09:48.686877] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c90b400 00:09:11.497 [2024-02-14 19:09:48.686884] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.497 [2024-02-14 19:09:48.686988] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.497 [2024-02-14 19:09:48.686995] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:11.497 [2024-02-14 19:09:48.687010] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:09:11.497 [2024-02-14 19:09:48.687016] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:11.497 [2024-02-14 19:09:48.687039] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c90a780 00:09:11.497 [2024-02-14 19:09:48.687043] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:11.497 [2024-02-14 19:09:48.687061] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c96de20 00:09:11.497 [2024-02-14 19:09:48.687106] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c90a780 00:09:11.497 [2024-02-14 19:09:48.687109] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c90a780 00:09:11.498 [2024-02-14 19:09:48.687125] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.498 pt3 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:11.498 19:09:48 
-- bdev/bdev_raid.sh@125 -- # local tmp 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:11.498 "name": "raid_bdev1", 00:09:11.498 "uuid": "9ca31d45-cb6c-11ee-af6b-4feeebbbadda", 00:09:11.498 "strip_size_kb": 64, 00:09:11.498 "state": "online", 00:09:11.498 "raid_level": "raid0", 00:09:11.498 "superblock": true, 00:09:11.498 "num_base_bdevs": 3, 00:09:11.498 "num_base_bdevs_discovered": 3, 00:09:11.498 "num_base_bdevs_operational": 3, 00:09:11.498 "base_bdevs_list": [ 00:09:11.498 { 00:09:11.498 "name": "pt1", 00:09:11.498 "uuid": "7ea93164-1d73-005e-9679-1f8a88de7dfa", 00:09:11.498 "is_configured": true, 00:09:11.498 "data_offset": 2048, 00:09:11.498 "data_size": 63488 00:09:11.498 }, 00:09:11.498 { 00:09:11.498 "name": "pt2", 00:09:11.498 "uuid": "c07a9539-3d65-4950-8e09-180454d737f4", 00:09:11.498 "is_configured": true, 00:09:11.498 "data_offset": 2048, 00:09:11.498 "data_size": 63488 00:09:11.498 }, 00:09:11.498 { 00:09:11.498 "name": "pt3", 00:09:11.498 "uuid": "e8e672c5-ea81-fc54-8a41-1077d054bc2e", 00:09:11.498 "is_configured": true, 00:09:11.498 "data_offset": 2048, 00:09:11.498 "data_size": 63488 00:09:11.498 } 00:09:11.498 ] 00:09:11.498 }' 00:09:11.498 19:09:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:11.498 19:09:48 -- common/autotest_common.sh@10 -- # set +x 00:09:12.067 19:09:49 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:12.067 19:09:49 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:09:12.067 [2024-02-14 19:09:49.454847] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.067 19:09:49 -- bdev/bdev_raid.sh@430 -- # '[' 9ca31d45-cb6c-11ee-af6b-4feeebbbadda '!=' 9ca31d45-cb6c-11ee-af6b-4feeebbbadda ']' 00:09:12.067 19:09:49 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:09:12.067 19:09:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:12.067 19:09:49 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:12.067 19:09:49 -- bdev/bdev_raid.sh@511 -- # killprocess 50894 00:09:12.067 19:09:49 -- common/autotest_common.sh@924 -- # '[' -z 50894 ']' 00:09:12.067 19:09:49 -- common/autotest_common.sh@928 -- # kill -0 50894 00:09:12.067 19:09:49 -- common/autotest_common.sh@929 -- # uname 00:09:12.067 19:09:49 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:09:12.326 19:09:49 -- common/autotest_common.sh@932 -- # ps -c -o command 50894 00:09:12.326 19:09:49 -- common/autotest_common.sh@932 -- # tail -1 00:09:12.326 19:09:49 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:09:12.326 19:09:49 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:09:12.326 killing process with pid 50894 00:09:12.326 19:09:49 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 50894' 00:09:12.326 19:09:49 -- common/autotest_common.sh@943 -- # kill 50894 00:09:12.326 [2024-02-14 19:09:49.492617] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.326 [2024-02-14 19:09:49.492649] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.326 [2024-02-14 19:09:49.492665] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.326 
[2024-02-14 19:09:49.492668] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c90a780 name raid_bdev1, state offline 00:09:12.326 19:09:49 -- common/autotest_common.sh@948 -- # wait 50894 00:09:12.326 [2024-02-14 19:09:49.519663] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@513 -- # return 0 00:09:12.584 00:09:12.584 real 0m8.163s 00:09:12.584 user 0m13.815s 00:09:12.584 sys 0m1.657s 00:09:12.584 ************************************ 00:09:12.584 END TEST raid_superblock_test 00:09:12.584 ************************************ 00:09:12.584 19:09:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:12.584 19:09:49 -- common/autotest_common.sh@10 -- # set +x 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:09:12.584 19:09:49 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:09:12.584 19:09:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:12.584 19:09:49 -- common/autotest_common.sh@10 -- # set +x 00:09:12.584 ************************************ 00:09:12.584 START TEST raid_state_function_test 00:09:12.584 ************************************ 00:09:12.584 19:09:49 -- common/autotest_common.sh@1102 -- # raid_state_function_test concat 3 false 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@226 -- # raid_pid=51075 00:09:12.584 Process raid pid: 51075 00:09:12.584 19:09:49 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51075' 00:09:12.584 19:09:49 -- 
bdev/bdev_raid.sh@228 -- # waitforlisten 51075 /var/tmp/spdk-raid.sock 00:09:12.584 19:09:49 -- common/autotest_common.sh@817 -- # '[' -z 51075 ']' 00:09:12.585 19:09:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:12.585 19:09:49 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:12.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:12.585 19:09:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:12.585 19:09:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:12.585 19:09:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:12.585 19:09:49 -- common/autotest_common.sh@10 -- # set +x 00:09:12.585 [2024-02-14 19:09:49.812745] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:09:12.585 [2024-02-14 19:09:49.813117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:13.152 EAL: TSC is not safe to use in SMP mode 00:09:13.152 EAL: TSC is not invariant 00:09:13.152 [2024-02-14 19:09:50.542770] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.411 [2024-02-14 19:09:50.654392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.411 [2024-02-14 19:09:50.654857] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.411 [2024-02-14 19:09:50.654861] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.411 19:09:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:13.411 19:09:50 -- common/autotest_common.sh@850 -- # return 0 00:09:13.411 19:09:50 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:13.670 [2024-02-14 19:09:51.021586] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.670 [2024-02-14 19:09:51.021646] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.670 [2024-02-14 19:09:51.021651] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.670 [2024-02-14 19:09:51.021658] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.670 [2024-02-14 19:09:51.021661] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:13.670 [2024-02-14 19:09:51.021668] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.670 19:09:51 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.670 19:09:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:13.670 19:09:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:13.670 19:09:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:13.670 19:09:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:13.670 19:09:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:13.670 19:09:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:13.670 19:09:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:13.670 19:09:51 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:13.670 19:09:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:13.670 19:09:51 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.670 19:09:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.930 19:09:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:13.930 "name": "Existed_Raid", 00:09:13.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.930 "strip_size_kb": 64, 00:09:13.930 "state": "configuring", 00:09:13.930 "raid_level": "concat", 00:09:13.930 "superblock": false, 00:09:13.930 "num_base_bdevs": 3, 00:09:13.930 "num_base_bdevs_discovered": 0, 00:09:13.930 "num_base_bdevs_operational": 3, 00:09:13.930 "base_bdevs_list": [ 00:09:13.930 { 00:09:13.930 "name": "BaseBdev1", 00:09:13.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.930 "is_configured": false, 00:09:13.930 "data_offset": 0, 00:09:13.930 "data_size": 0 00:09:13.930 }, 00:09:13.930 { 00:09:13.930 "name": "BaseBdev2", 00:09:13.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.930 "is_configured": false, 00:09:13.930 "data_offset": 0, 00:09:13.930 "data_size": 0 00:09:13.930 }, 00:09:13.930 { 00:09:13.930 "name": "BaseBdev3", 00:09:13.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.930 "is_configured": false, 00:09:13.930 "data_offset": 0, 00:09:13.930 "data_size": 0 00:09:13.930 } 00:09:13.930 ] 00:09:13.930 }' 00:09:13.930 19:09:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:13.930 19:09:51 -- common/autotest_common.sh@10 -- # set +x 00:09:14.498 19:09:51 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:14.498 [2024-02-14 19:09:51.793595] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.498 [2024-02-14 19:09:51.793627] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b4eb500 name Existed_Raid, state configuring 00:09:14.498 19:09:51 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:14.757 [2024-02-14 19:09:51.989604] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.757 [2024-02-14 19:09:51.989656] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.757 [2024-02-14 19:09:51.989660] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.757 [2024-02-14 19:09:51.989667] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.757 [2024-02-14 19:09:51.989670] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.757 [2024-02-14 19:09:51.989676] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.757 19:09:52 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:15.016 [2024-02-14 19:09:52.182898] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.016 BaseBdev1 00:09:15.016 19:09:52 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:15.016 19:09:52 -- common/autotest_common.sh@885 -- # local 
bdev_name=BaseBdev1 00:09:15.016 19:09:52 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:15.016 19:09:52 -- common/autotest_common.sh@887 -- # local i 00:09:15.016 19:09:52 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:15.016 19:09:52 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:15.016 19:09:52 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:15.016 19:09:52 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:15.276 [ 00:09:15.276 { 00:09:15.276 "name": "BaseBdev1", 00:09:15.276 "aliases": [ 00:09:15.276 "a184f6d0-cb6c-11ee-af6b-4feeebbbadda" 00:09:15.276 ], 00:09:15.276 "product_name": "Malloc disk", 00:09:15.276 "block_size": 512, 00:09:15.276 "num_blocks": 65536, 00:09:15.276 "uuid": "a184f6d0-cb6c-11ee-af6b-4feeebbbadda", 00:09:15.276 "assigned_rate_limits": { 00:09:15.276 "rw_ios_per_sec": 0, 00:09:15.276 "rw_mbytes_per_sec": 0, 00:09:15.276 "r_mbytes_per_sec": 0, 00:09:15.276 "w_mbytes_per_sec": 0 00:09:15.276 }, 00:09:15.276 "claimed": true, 00:09:15.276 "claim_type": "exclusive_write", 00:09:15.276 "zoned": false, 00:09:15.276 "supported_io_types": { 00:09:15.276 "read": true, 00:09:15.276 "write": true, 00:09:15.276 "unmap": true, 00:09:15.276 "write_zeroes": true, 00:09:15.276 "flush": true, 00:09:15.276 "reset": true, 00:09:15.276 "compare": false, 00:09:15.276 "compare_and_write": false, 00:09:15.276 "abort": true, 00:09:15.276 "nvme_admin": false, 00:09:15.276 "nvme_io": false 00:09:15.276 }, 00:09:15.276 "memory_domains": [ 00:09:15.276 { 00:09:15.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.276 "dma_device_type": 2 00:09:15.276 } 00:09:15.276 ], 00:09:15.276 "driver_specific": {} 00:09:15.276 } 00:09:15.276 ] 00:09:15.276 19:09:52 -- common/autotest_common.sh@893 -- # return 0 00:09:15.276 19:09:52 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.276 19:09:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:15.276 19:09:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:15.276 19:09:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:15.276 19:09:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:15.276 19:09:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:15.276 19:09:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:15.276 19:09:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:15.276 19:09:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:15.276 19:09:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:15.276 19:09:52 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.276 19:09:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.535 19:09:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:15.535 "name": "Existed_Raid", 00:09:15.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.535 "strip_size_kb": 64, 00:09:15.535 "state": "configuring", 00:09:15.535 "raid_level": "concat", 00:09:15.535 "superblock": false, 00:09:15.535 "num_base_bdevs": 3, 00:09:15.535 "num_base_bdevs_discovered": 1, 00:09:15.535 "num_base_bdevs_operational": 3, 00:09:15.535 "base_bdevs_list": [ 00:09:15.535 { 00:09:15.535 "name": "BaseBdev1", 
00:09:15.535 "uuid": "a184f6d0-cb6c-11ee-af6b-4feeebbbadda", 00:09:15.535 "is_configured": true, 00:09:15.535 "data_offset": 0, 00:09:15.535 "data_size": 65536 00:09:15.535 }, 00:09:15.535 { 00:09:15.535 "name": "BaseBdev2", 00:09:15.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.535 "is_configured": false, 00:09:15.535 "data_offset": 0, 00:09:15.535 "data_size": 0 00:09:15.535 }, 00:09:15.535 { 00:09:15.535 "name": "BaseBdev3", 00:09:15.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.535 "is_configured": false, 00:09:15.535 "data_offset": 0, 00:09:15.535 "data_size": 0 00:09:15.535 } 00:09:15.535 ] 00:09:15.535 }' 00:09:15.535 19:09:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:15.535 19:09:52 -- common/autotest_common.sh@10 -- # set +x 00:09:16.103 19:09:53 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:16.103 [2024-02-14 19:09:53.389641] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.103 [2024-02-14 19:09:53.389675] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b4eb500 name Existed_Raid, state configuring 00:09:16.103 19:09:53 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:09:16.103 19:09:53 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:16.362 [2024-02-14 19:09:53.649673] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.362 [2024-02-14 19:09:53.650642] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.362 [2024-02-14 19:09:53.650690] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.362 [2024-02-14 19:09:53.650694] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:16.362 [2024-02-14 19:09:53.650701] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.362 19:09:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.620 19:09:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:16.620 "name": "Existed_Raid", 00:09:16.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.620 "strip_size_kb": 64, 
00:09:16.620 "state": "configuring", 00:09:16.620 "raid_level": "concat", 00:09:16.620 "superblock": false, 00:09:16.621 "num_base_bdevs": 3, 00:09:16.621 "num_base_bdevs_discovered": 1, 00:09:16.621 "num_base_bdevs_operational": 3, 00:09:16.621 "base_bdevs_list": [ 00:09:16.621 { 00:09:16.621 "name": "BaseBdev1", 00:09:16.621 "uuid": "a184f6d0-cb6c-11ee-af6b-4feeebbbadda", 00:09:16.621 "is_configured": true, 00:09:16.621 "data_offset": 0, 00:09:16.621 "data_size": 65536 00:09:16.621 }, 00:09:16.621 { 00:09:16.621 "name": "BaseBdev2", 00:09:16.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.621 "is_configured": false, 00:09:16.621 "data_offset": 0, 00:09:16.621 "data_size": 0 00:09:16.621 }, 00:09:16.621 { 00:09:16.621 "name": "BaseBdev3", 00:09:16.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.621 "is_configured": false, 00:09:16.621 "data_offset": 0, 00:09:16.621 "data_size": 0 00:09:16.621 } 00:09:16.621 ] 00:09:16.621 }' 00:09:16.621 19:09:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:16.621 19:09:53 -- common/autotest_common.sh@10 -- # set +x 00:09:16.879 19:09:54 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.139 [2024-02-14 19:09:54.429853] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.139 BaseBdev2 00:09:17.139 19:09:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:17.139 19:09:54 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:09:17.139 19:09:54 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:17.139 19:09:54 -- common/autotest_common.sh@887 -- # local i 00:09:17.139 19:09:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:17.139 19:09:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:17.139 19:09:54 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:17.398 19:09:54 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.657 [ 00:09:17.658 { 00:09:17.658 "name": "BaseBdev2", 00:09:17.658 "aliases": [ 00:09:17.658 "a2dbfe8f-cb6c-11ee-af6b-4feeebbbadda" 00:09:17.658 ], 00:09:17.658 "product_name": "Malloc disk", 00:09:17.658 "block_size": 512, 00:09:17.658 "num_blocks": 65536, 00:09:17.658 "uuid": "a2dbfe8f-cb6c-11ee-af6b-4feeebbbadda", 00:09:17.658 "assigned_rate_limits": { 00:09:17.658 "rw_ios_per_sec": 0, 00:09:17.658 "rw_mbytes_per_sec": 0, 00:09:17.658 "r_mbytes_per_sec": 0, 00:09:17.658 "w_mbytes_per_sec": 0 00:09:17.658 }, 00:09:17.658 "claimed": true, 00:09:17.658 "claim_type": "exclusive_write", 00:09:17.658 "zoned": false, 00:09:17.658 "supported_io_types": { 00:09:17.658 "read": true, 00:09:17.658 "write": true, 00:09:17.658 "unmap": true, 00:09:17.658 "write_zeroes": true, 00:09:17.658 "flush": true, 00:09:17.658 "reset": true, 00:09:17.658 "compare": false, 00:09:17.658 "compare_and_write": false, 00:09:17.658 "abort": true, 00:09:17.658 "nvme_admin": false, 00:09:17.658 "nvme_io": false 00:09:17.658 }, 00:09:17.658 "memory_domains": [ 00:09:17.658 { 00:09:17.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.658 "dma_device_type": 2 00:09:17.658 } 00:09:17.658 ], 00:09:17.658 "driver_specific": {} 00:09:17.658 } 00:09:17.658 ] 00:09:17.658 19:09:54 -- common/autotest_common.sh@893 -- # return 0 00:09:17.658 19:09:54 -- 
bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:17.658 19:09:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:17.658 19:09:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.658 19:09:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:17.658 19:09:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:17.658 19:09:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:17.658 19:09:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:17.658 19:09:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:17.658 19:09:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:17.658 19:09:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:17.658 19:09:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:17.658 19:09:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:17.658 19:09:54 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.658 19:09:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.917 19:09:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:17.917 "name": "Existed_Raid", 00:09:17.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.917 "strip_size_kb": 64, 00:09:17.917 "state": "configuring", 00:09:17.917 "raid_level": "concat", 00:09:17.917 "superblock": false, 00:09:17.917 "num_base_bdevs": 3, 00:09:17.917 "num_base_bdevs_discovered": 2, 00:09:17.917 "num_base_bdevs_operational": 3, 00:09:17.917 "base_bdevs_list": [ 00:09:17.917 { 00:09:17.917 "name": "BaseBdev1", 00:09:17.917 "uuid": "a184f6d0-cb6c-11ee-af6b-4feeebbbadda", 00:09:17.917 "is_configured": true, 00:09:17.917 "data_offset": 0, 00:09:17.917 "data_size": 65536 00:09:17.917 }, 00:09:17.917 { 00:09:17.917 "name": "BaseBdev2", 00:09:17.917 "uuid": "a2dbfe8f-cb6c-11ee-af6b-4feeebbbadda", 00:09:17.917 "is_configured": true, 00:09:17.917 "data_offset": 0, 00:09:17.917 "data_size": 65536 00:09:17.917 }, 00:09:17.917 { 00:09:17.917 "name": "BaseBdev3", 00:09:17.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.917 "is_configured": false, 00:09:17.917 "data_offset": 0, 00:09:17.917 "data_size": 0 00:09:17.917 } 00:09:17.917 ] 00:09:17.917 }' 00:09:17.917 19:09:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:17.917 19:09:55 -- common/autotest_common.sh@10 -- # set +x 00:09:18.177 19:09:55 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:18.442 [2024-02-14 19:09:55.717883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.442 [2024-02-14 19:09:55.717913] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b4eba00 00:09:18.442 [2024-02-14 19:09:55.717917] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:18.442 [2024-02-14 19:09:55.717941] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b54eec0 00:09:18.442 [2024-02-14 19:09:55.718051] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b4eba00 00:09:18.442 [2024-02-14 19:09:55.718054] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b4eba00 00:09:18.442 [2024-02-14 19:09:55.718085] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.442 
BaseBdev3 00:09:18.442 19:09:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:09:18.442 19:09:55 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:09:18.442 19:09:55 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:18.442 19:09:55 -- common/autotest_common.sh@887 -- # local i 00:09:18.442 19:09:55 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:18.442 19:09:55 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:18.442 19:09:55 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:18.713 19:09:55 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:18.972 [ 00:09:18.972 { 00:09:18.972 "name": "BaseBdev3", 00:09:18.972 "aliases": [ 00:09:18.972 "a3a088c8-cb6c-11ee-af6b-4feeebbbadda" 00:09:18.972 ], 00:09:18.972 "product_name": "Malloc disk", 00:09:18.972 "block_size": 512, 00:09:18.972 "num_blocks": 65536, 00:09:18.972 "uuid": "a3a088c8-cb6c-11ee-af6b-4feeebbbadda", 00:09:18.972 "assigned_rate_limits": { 00:09:18.972 "rw_ios_per_sec": 0, 00:09:18.972 "rw_mbytes_per_sec": 0, 00:09:18.972 "r_mbytes_per_sec": 0, 00:09:18.972 "w_mbytes_per_sec": 0 00:09:18.972 }, 00:09:18.972 "claimed": true, 00:09:18.972 "claim_type": "exclusive_write", 00:09:18.972 "zoned": false, 00:09:18.972 "supported_io_types": { 00:09:18.972 "read": true, 00:09:18.972 "write": true, 00:09:18.972 "unmap": true, 00:09:18.972 "write_zeroes": true, 00:09:18.973 "flush": true, 00:09:18.973 "reset": true, 00:09:18.973 "compare": false, 00:09:18.973 "compare_and_write": false, 00:09:18.973 "abort": true, 00:09:18.973 "nvme_admin": false, 00:09:18.973 "nvme_io": false 00:09:18.973 }, 00:09:18.973 "memory_domains": [ 00:09:18.973 { 00:09:18.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.973 "dma_device_type": 2 00:09:18.973 } 00:09:18.973 ], 00:09:18.973 "driver_specific": {} 00:09:18.973 } 00:09:18.973 ] 00:09:18.973 19:09:56 -- common/autotest_common.sh@893 -- # return 0 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:18.973 19:09:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.232 19:09:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:19.232 "name": "Existed_Raid", 00:09:19.232 "uuid": "a3a08f58-cb6c-11ee-af6b-4feeebbbadda", 00:09:19.232 "strip_size_kb": 64, 00:09:19.232 "state": "online", 00:09:19.232 
"raid_level": "concat", 00:09:19.232 "superblock": false, 00:09:19.232 "num_base_bdevs": 3, 00:09:19.232 "num_base_bdevs_discovered": 3, 00:09:19.232 "num_base_bdevs_operational": 3, 00:09:19.232 "base_bdevs_list": [ 00:09:19.232 { 00:09:19.232 "name": "BaseBdev1", 00:09:19.232 "uuid": "a184f6d0-cb6c-11ee-af6b-4feeebbbadda", 00:09:19.232 "is_configured": true, 00:09:19.232 "data_offset": 0, 00:09:19.232 "data_size": 65536 00:09:19.232 }, 00:09:19.232 { 00:09:19.232 "name": "BaseBdev2", 00:09:19.232 "uuid": "a2dbfe8f-cb6c-11ee-af6b-4feeebbbadda", 00:09:19.232 "is_configured": true, 00:09:19.232 "data_offset": 0, 00:09:19.232 "data_size": 65536 00:09:19.232 }, 00:09:19.232 { 00:09:19.232 "name": "BaseBdev3", 00:09:19.232 "uuid": "a3a088c8-cb6c-11ee-af6b-4feeebbbadda", 00:09:19.232 "is_configured": true, 00:09:19.232 "data_offset": 0, 00:09:19.232 "data_size": 65536 00:09:19.232 } 00:09:19.232 ] 00:09:19.232 }' 00:09:19.232 19:09:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:19.232 19:09:56 -- common/autotest_common.sh@10 -- # set +x 00:09:19.491 19:09:56 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:19.751 [2024-02-14 19:09:56.969790] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:19.751 [2024-02-14 19:09:56.969821] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.751 [2024-02-14 19:09:56.969834] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:19.751 19:09:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:19.751 19:09:57 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.751 19:09:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.010 19:09:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:20.010 "name": "Existed_Raid", 00:09:20.010 "uuid": "a3a08f58-cb6c-11ee-af6b-4feeebbbadda", 00:09:20.010 "strip_size_kb": 64, 00:09:20.010 "state": "offline", 00:09:20.010 "raid_level": "concat", 00:09:20.010 "superblock": false, 00:09:20.010 "num_base_bdevs": 3, 00:09:20.010 "num_base_bdevs_discovered": 2, 00:09:20.010 "num_base_bdevs_operational": 2, 00:09:20.010 "base_bdevs_list": [ 00:09:20.010 { 00:09:20.010 "name": null, 00:09:20.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.010 
"is_configured": false, 00:09:20.010 "data_offset": 0, 00:09:20.010 "data_size": 65536 00:09:20.010 }, 00:09:20.010 { 00:09:20.010 "name": "BaseBdev2", 00:09:20.010 "uuid": "a2dbfe8f-cb6c-11ee-af6b-4feeebbbadda", 00:09:20.010 "is_configured": true, 00:09:20.010 "data_offset": 0, 00:09:20.010 "data_size": 65536 00:09:20.010 }, 00:09:20.010 { 00:09:20.010 "name": "BaseBdev3", 00:09:20.010 "uuid": "a3a088c8-cb6c-11ee-af6b-4feeebbbadda", 00:09:20.010 "is_configured": true, 00:09:20.010 "data_offset": 0, 00:09:20.010 "data_size": 65536 00:09:20.010 } 00:09:20.010 ] 00:09:20.010 }' 00:09:20.010 19:09:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:20.010 19:09:57 -- common/autotest_common.sh@10 -- # set +x 00:09:20.270 19:09:57 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:20.270 19:09:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:20.270 19:09:57 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.270 19:09:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:20.529 19:09:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:20.529 19:09:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.529 19:09:57 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:20.788 [2024-02-14 19:09:57.974863] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.788 19:09:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:20.788 19:09:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:20.788 19:09:58 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.788 19:09:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:21.046 19:09:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:21.046 19:09:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.046 19:09:58 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:21.304 [2024-02-14 19:09:58.567829] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:21.304 [2024-02-14 19:09:58.567862] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b4eba00 name Existed_Raid, state offline 00:09:21.304 19:09:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:21.304 19:09:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:21.304 19:09:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:21.304 19:09:58 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.563 19:09:58 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:21.563 19:09:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:21.563 19:09:58 -- bdev/bdev_raid.sh@287 -- # killprocess 51075 00:09:21.563 19:09:58 -- common/autotest_common.sh@924 -- # '[' -z 51075 ']' 00:09:21.563 19:09:58 -- common/autotest_common.sh@928 -- # kill -0 51075 00:09:21.563 19:09:58 -- common/autotest_common.sh@929 -- # uname 00:09:21.563 19:09:58 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:09:21.563 19:09:58 -- common/autotest_common.sh@932 -- # ps -c -o command 51075 00:09:21.563 19:09:58 -- common/autotest_common.sh@932 -- # tail -1 00:09:21.563 19:09:58 -- common/autotest_common.sh@932 -- # 
process_name=bdev_svc 00:09:21.563 19:09:58 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:09:21.563 killing process with pid 51075 00:09:21.563 19:09:58 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 51075' 00:09:21.563 19:09:58 -- common/autotest_common.sh@943 -- # kill 51075 00:09:21.563 [2024-02-14 19:09:58.877311] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.563 [2024-02-14 19:09:58.877366] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.563 19:09:58 -- common/autotest_common.sh@948 -- # wait 51075 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:21.822 00:09:21.822 real 0m9.320s 00:09:21.822 user 0m15.755s 00:09:21.822 sys 0m2.051s 00:09:21.822 19:09:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:21.822 ************************************ 00:09:21.822 END TEST raid_state_function_test 00:09:21.822 ************************************ 00:09:21.822 19:09:59 -- common/autotest_common.sh@10 -- # set +x 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:21.822 19:09:59 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:09:21.822 19:09:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:21.822 19:09:59 -- common/autotest_common.sh@10 -- # set +x 00:09:21.822 ************************************ 00:09:21.822 START TEST raid_state_function_test_sb 00:09:21.822 ************************************ 00:09:21.822 19:09:59 -- common/autotest_common.sh@1102 -- # raid_state_function_test concat 3 true 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@220 -- # 
superblock_create_arg=-s 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=51308 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51308' 00:09:21.822 Process raid pid: 51308 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 51308 /var/tmp/spdk-raid.sock 00:09:21.822 19:09:59 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:21.822 19:09:59 -- common/autotest_common.sh@817 -- # '[' -z 51308 ']' 00:09:21.822 19:09:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:21.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:21.822 19:09:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:21.822 19:09:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:21.822 19:09:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:21.822 19:09:59 -- common/autotest_common.sh@10 -- # set +x 00:09:21.822 [2024-02-14 19:09:59.173435] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:09:21.822 [2024-02-14 19:09:59.173663] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:22.757 EAL: TSC is not safe to use in SMP mode 00:09:22.757 EAL: TSC is not invariant 00:09:22.757 [2024-02-14 19:09:59.948350] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.757 [2024-02-14 19:10:00.066210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.757 [2024-02-14 19:10:00.066728] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.757 [2024-02-14 19:10:00.066732] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.016 19:10:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:23.016 19:10:00 -- common/autotest_common.sh@850 -- # return 0 00:09:23.016 19:10:00 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:23.274 [2024-02-14 19:10:00.450145] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.274 [2024-02-14 19:10:00.450234] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.274 [2024-02-14 19:10:00.450240] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.274 [2024-02-14 19:10:00.450248] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.274 [2024-02-14 19:10:00.450252] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.274 [2024-02-14 19:10:00.450259] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.274 19:10:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.274 19:10:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:23.274 19:10:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:23.274 19:10:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:23.274 19:10:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:23.274 19:10:00 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:23.274 19:10:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:23.274 19:10:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:23.274 19:10:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:23.274 19:10:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:23.274 19:10:00 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:23.274 19:10:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.532 19:10:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:23.532 "name": "Existed_Raid", 00:09:23.532 "uuid": "a672a382-cb6c-11ee-af6b-4feeebbbadda", 00:09:23.532 "strip_size_kb": 64, 00:09:23.532 "state": "configuring", 00:09:23.532 "raid_level": "concat", 00:09:23.532 "superblock": true, 00:09:23.532 "num_base_bdevs": 3, 00:09:23.532 "num_base_bdevs_discovered": 0, 00:09:23.532 "num_base_bdevs_operational": 3, 00:09:23.532 "base_bdevs_list": [ 00:09:23.532 { 00:09:23.532 "name": "BaseBdev1", 00:09:23.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.532 "is_configured": false, 00:09:23.532 "data_offset": 0, 00:09:23.532 "data_size": 0 00:09:23.532 }, 00:09:23.532 { 00:09:23.532 "name": "BaseBdev2", 00:09:23.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.532 "is_configured": false, 00:09:23.532 "data_offset": 0, 00:09:23.532 "data_size": 0 00:09:23.532 }, 00:09:23.532 { 00:09:23.532 "name": "BaseBdev3", 00:09:23.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.532 "is_configured": false, 00:09:23.532 "data_offset": 0, 00:09:23.532 "data_size": 0 00:09:23.532 } 00:09:23.532 ] 00:09:23.532 }' 00:09:23.532 19:10:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:23.532 19:10:00 -- common/autotest_common.sh@10 -- # set +x 00:09:23.791 19:10:01 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:24.049 [2024-02-14 19:10:01.382173] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.049 [2024-02-14 19:10:01.382202] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ab96500 name Existed_Raid, state configuring 00:09:24.049 19:10:01 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:24.308 [2024-02-14 19:10:01.614201] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:24.308 [2024-02-14 19:10:01.614252] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:24.308 [2024-02-14 19:10:01.614256] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.308 [2024-02-14 19:10:01.614263] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.308 [2024-02-14 19:10:01.614266] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:24.308 [2024-02-14 19:10:01.614273] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.308 19:10:01 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:24.566 [2024-02-14 19:10:01.887465] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.566 BaseBdev1 00:09:24.566 19:10:01 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:24.566 19:10:01 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:09:24.566 19:10:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:24.566 19:10:01 -- common/autotest_common.sh@887 -- # local i 00:09:24.566 19:10:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:24.566 19:10:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:24.566 19:10:01 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:24.824 19:10:02 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:25.083 [ 00:09:25.083 { 00:09:25.083 "name": "BaseBdev1", 00:09:25.083 "aliases": [ 00:09:25.083 "a74dc421-cb6c-11ee-af6b-4feeebbbadda" 00:09:25.083 ], 00:09:25.083 "product_name": "Malloc disk", 00:09:25.083 "block_size": 512, 00:09:25.083 "num_blocks": 65536, 00:09:25.083 "uuid": "a74dc421-cb6c-11ee-af6b-4feeebbbadda", 00:09:25.083 "assigned_rate_limits": { 00:09:25.083 "rw_ios_per_sec": 0, 00:09:25.083 "rw_mbytes_per_sec": 0, 00:09:25.083 "r_mbytes_per_sec": 0, 00:09:25.083 "w_mbytes_per_sec": 0 00:09:25.083 }, 00:09:25.083 "claimed": true, 00:09:25.083 "claim_type": "exclusive_write", 00:09:25.083 "zoned": false, 00:09:25.083 "supported_io_types": { 00:09:25.083 "read": true, 00:09:25.083 "write": true, 00:09:25.083 "unmap": true, 00:09:25.083 "write_zeroes": true, 00:09:25.083 "flush": true, 00:09:25.083 "reset": true, 00:09:25.083 "compare": false, 00:09:25.083 "compare_and_write": false, 00:09:25.083 "abort": true, 00:09:25.083 "nvme_admin": false, 00:09:25.083 "nvme_io": false 00:09:25.083 }, 00:09:25.083 "memory_domains": [ 00:09:25.083 { 00:09:25.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.083 "dma_device_type": 2 00:09:25.083 } 00:09:25.083 ], 00:09:25.083 "driver_specific": {} 00:09:25.083 } 00:09:25.083 ] 00:09:25.083 19:10:02 -- common/autotest_common.sh@893 -- # return 0 00:09:25.083 19:10:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:25.083 19:10:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:25.083 19:10:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:25.083 19:10:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:25.083 19:10:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:25.083 19:10:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:25.083 19:10:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:25.083 19:10:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:25.083 19:10:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:25.083 19:10:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:25.083 19:10:02 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.083 19:10:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.341 19:10:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:25.341 "name": "Existed_Raid", 00:09:25.341 "uuid": "a7244280-cb6c-11ee-af6b-4feeebbbadda", 00:09:25.341 "strip_size_kb": 64, 00:09:25.341 "state": "configuring", 00:09:25.341 "raid_level": "concat", 
00:09:25.341 "superblock": true, 00:09:25.341 "num_base_bdevs": 3, 00:09:25.341 "num_base_bdevs_discovered": 1, 00:09:25.341 "num_base_bdevs_operational": 3, 00:09:25.341 "base_bdevs_list": [ 00:09:25.341 { 00:09:25.341 "name": "BaseBdev1", 00:09:25.341 "uuid": "a74dc421-cb6c-11ee-af6b-4feeebbbadda", 00:09:25.341 "is_configured": true, 00:09:25.341 "data_offset": 2048, 00:09:25.341 "data_size": 63488 00:09:25.341 }, 00:09:25.341 { 00:09:25.341 "name": "BaseBdev2", 00:09:25.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.341 "is_configured": false, 00:09:25.341 "data_offset": 0, 00:09:25.341 "data_size": 0 00:09:25.341 }, 00:09:25.341 { 00:09:25.342 "name": "BaseBdev3", 00:09:25.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.342 "is_configured": false, 00:09:25.342 "data_offset": 0, 00:09:25.342 "data_size": 0 00:09:25.342 } 00:09:25.342 ] 00:09:25.342 }' 00:09:25.342 19:10:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:25.342 19:10:02 -- common/autotest_common.sh@10 -- # set +x 00:09:25.609 19:10:02 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:25.882 [2024-02-14 19:10:03.170400] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.882 [2024-02-14 19:10:03.170448] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ab96500 name Existed_Raid, state configuring 00:09:25.882 19:10:03 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:09:25.882 19:10:03 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:26.140 19:10:03 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.398 BaseBdev1 00:09:26.398 19:10:03 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:09:26.398 19:10:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:09:26.398 19:10:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:26.398 19:10:03 -- common/autotest_common.sh@887 -- # local i 00:09:26.398 19:10:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:26.398 19:10:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:26.398 19:10:03 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:26.657 19:10:04 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.916 [ 00:09:26.916 { 00:09:26.916 "name": "BaseBdev1", 00:09:26.916 "aliases": [ 00:09:26.916 "a866c68d-cb6c-11ee-af6b-4feeebbbadda" 00:09:26.916 ], 00:09:26.916 "product_name": "Malloc disk", 00:09:26.916 "block_size": 512, 00:09:26.916 "num_blocks": 65536, 00:09:26.916 "uuid": "a866c68d-cb6c-11ee-af6b-4feeebbbadda", 00:09:26.916 "assigned_rate_limits": { 00:09:26.916 "rw_ios_per_sec": 0, 00:09:26.916 "rw_mbytes_per_sec": 0, 00:09:26.916 "r_mbytes_per_sec": 0, 00:09:26.916 "w_mbytes_per_sec": 0 00:09:26.916 }, 00:09:26.916 "claimed": false, 00:09:26.916 "zoned": false, 00:09:26.916 "supported_io_types": { 00:09:26.916 "read": true, 00:09:26.916 "write": true, 00:09:26.916 "unmap": true, 00:09:26.916 "write_zeroes": true, 00:09:26.916 "flush": true, 00:09:26.916 "reset": true, 00:09:26.916 "compare": false, 00:09:26.916 "compare_and_write": false, 00:09:26.916 "abort": 
true, 00:09:26.916 "nvme_admin": false, 00:09:26.916 "nvme_io": false 00:09:26.916 }, 00:09:26.916 "memory_domains": [ 00:09:26.916 { 00:09:26.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.916 "dma_device_type": 2 00:09:26.916 } 00:09:26.916 ], 00:09:26.916 "driver_specific": {} 00:09:26.916 } 00:09:26.916 ] 00:09:26.916 19:10:04 -- common/autotest_common.sh@893 -- # return 0 00:09:26.916 19:10:04 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:27.175 [2024-02-14 19:10:04.479922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.175 [2024-02-14 19:10:04.480654] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.175 [2024-02-14 19:10:04.480696] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.175 [2024-02-14 19:10:04.480701] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:27.175 [2024-02-14 19:10:04.480709] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.175 19:10:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.433 19:10:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:27.433 "name": "Existed_Raid", 00:09:27.433 "uuid": "a8d988bf-cb6c-11ee-af6b-4feeebbbadda", 00:09:27.433 "strip_size_kb": 64, 00:09:27.433 "state": "configuring", 00:09:27.433 "raid_level": "concat", 00:09:27.433 "superblock": true, 00:09:27.433 "num_base_bdevs": 3, 00:09:27.433 "num_base_bdevs_discovered": 1, 00:09:27.433 "num_base_bdevs_operational": 3, 00:09:27.433 "base_bdevs_list": [ 00:09:27.433 { 00:09:27.433 "name": "BaseBdev1", 00:09:27.433 "uuid": "a866c68d-cb6c-11ee-af6b-4feeebbbadda", 00:09:27.433 "is_configured": true, 00:09:27.433 "data_offset": 2048, 00:09:27.433 "data_size": 63488 00:09:27.433 }, 00:09:27.433 { 00:09:27.433 "name": "BaseBdev2", 00:09:27.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.433 "is_configured": false, 00:09:27.433 "data_offset": 0, 00:09:27.433 "data_size": 0 00:09:27.433 }, 00:09:27.433 { 00:09:27.433 "name": "BaseBdev3", 00:09:27.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.433 "is_configured": false, 00:09:27.433 "data_offset": 0, 
00:09:27.433 "data_size": 0 00:09:27.433 } 00:09:27.433 ] 00:09:27.433 }' 00:09:27.433 19:10:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:27.433 19:10:04 -- common/autotest_common.sh@10 -- # set +x 00:09:27.692 19:10:05 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:27.950 [2024-02-14 19:10:05.216173] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.950 BaseBdev2 00:09:27.950 19:10:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:27.950 19:10:05 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:09:27.950 19:10:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:27.950 19:10:05 -- common/autotest_common.sh@887 -- # local i 00:09:27.950 19:10:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:27.950 19:10:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:27.950 19:10:05 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:28.209 19:10:05 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:28.467 [ 00:09:28.467 { 00:09:28.467 "name": "BaseBdev2", 00:09:28.467 "aliases": [ 00:09:28.467 "a949da5d-cb6c-11ee-af6b-4feeebbbadda" 00:09:28.467 ], 00:09:28.467 "product_name": "Malloc disk", 00:09:28.467 "block_size": 512, 00:09:28.467 "num_blocks": 65536, 00:09:28.467 "uuid": "a949da5d-cb6c-11ee-af6b-4feeebbbadda", 00:09:28.467 "assigned_rate_limits": { 00:09:28.467 "rw_ios_per_sec": 0, 00:09:28.467 "rw_mbytes_per_sec": 0, 00:09:28.467 "r_mbytes_per_sec": 0, 00:09:28.467 "w_mbytes_per_sec": 0 00:09:28.467 }, 00:09:28.467 "claimed": true, 00:09:28.467 "claim_type": "exclusive_write", 00:09:28.467 "zoned": false, 00:09:28.468 "supported_io_types": { 00:09:28.468 "read": true, 00:09:28.468 "write": true, 00:09:28.468 "unmap": true, 00:09:28.468 "write_zeroes": true, 00:09:28.468 "flush": true, 00:09:28.468 "reset": true, 00:09:28.468 "compare": false, 00:09:28.468 "compare_and_write": false, 00:09:28.468 "abort": true, 00:09:28.468 "nvme_admin": false, 00:09:28.468 "nvme_io": false 00:09:28.468 }, 00:09:28.468 "memory_domains": [ 00:09:28.468 { 00:09:28.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.468 "dma_device_type": 2 00:09:28.468 } 00:09:28.468 ], 00:09:28.468 "driver_specific": {} 00:09:28.468 } 00:09:28.468 ] 00:09:28.468 19:10:05 -- common/autotest_common.sh@893 -- # return 0 00:09:28.468 19:10:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:28.468 19:10:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:28.468 19:10:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.468 19:10:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:28.468 19:10:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:28.468 19:10:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:28.468 19:10:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:28.468 19:10:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:28.468 19:10:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:28.468 19:10:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:28.468 19:10:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:28.468 19:10:05 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:09:28.468 19:10:05 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.468 19:10:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.726 19:10:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:28.726 "name": "Existed_Raid", 00:09:28.726 "uuid": "a8d988bf-cb6c-11ee-af6b-4feeebbbadda", 00:09:28.726 "strip_size_kb": 64, 00:09:28.726 "state": "configuring", 00:09:28.726 "raid_level": "concat", 00:09:28.726 "superblock": true, 00:09:28.726 "num_base_bdevs": 3, 00:09:28.726 "num_base_bdevs_discovered": 2, 00:09:28.726 "num_base_bdevs_operational": 3, 00:09:28.726 "base_bdevs_list": [ 00:09:28.726 { 00:09:28.726 "name": "BaseBdev1", 00:09:28.726 "uuid": "a866c68d-cb6c-11ee-af6b-4feeebbbadda", 00:09:28.726 "is_configured": true, 00:09:28.726 "data_offset": 2048, 00:09:28.726 "data_size": 63488 00:09:28.726 }, 00:09:28.726 { 00:09:28.726 "name": "BaseBdev2", 00:09:28.726 "uuid": "a949da5d-cb6c-11ee-af6b-4feeebbbadda", 00:09:28.726 "is_configured": true, 00:09:28.726 "data_offset": 2048, 00:09:28.726 "data_size": 63488 00:09:28.726 }, 00:09:28.726 { 00:09:28.726 "name": "BaseBdev3", 00:09:28.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.726 "is_configured": false, 00:09:28.726 "data_offset": 0, 00:09:28.726 "data_size": 0 00:09:28.726 } 00:09:28.726 ] 00:09:28.726 }' 00:09:28.726 19:10:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:28.726 19:10:05 -- common/autotest_common.sh@10 -- # set +x 00:09:28.985 19:10:06 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:29.243 [2024-02-14 19:10:06.536215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.243 [2024-02-14 19:10:06.536280] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ab96a00 00:09:29.243 [2024-02-14 19:10:06.536285] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:29.243 [2024-02-14 19:10:06.536303] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82abf9ec0 00:09:29.243 [2024-02-14 19:10:06.536347] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ab96a00 00:09:29.243 [2024-02-14 19:10:06.536351] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ab96a00 00:09:29.243 [2024-02-14 19:10:06.536367] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.243 BaseBdev3 00:09:29.243 19:10:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:09:29.243 19:10:06 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:09:29.243 19:10:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:29.243 19:10:06 -- common/autotest_common.sh@887 -- # local i 00:09:29.243 19:10:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:29.243 19:10:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:29.243 19:10:06 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:29.501 19:10:06 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:29.760 [ 00:09:29.760 { 00:09:29.760 "name": "BaseBdev3", 00:09:29.760 "aliases": [ 
00:09:29.760 "aa1348e3-cb6c-11ee-af6b-4feeebbbadda" 00:09:29.760 ], 00:09:29.760 "product_name": "Malloc disk", 00:09:29.760 "block_size": 512, 00:09:29.760 "num_blocks": 65536, 00:09:29.760 "uuid": "aa1348e3-cb6c-11ee-af6b-4feeebbbadda", 00:09:29.760 "assigned_rate_limits": { 00:09:29.760 "rw_ios_per_sec": 0, 00:09:29.760 "rw_mbytes_per_sec": 0, 00:09:29.760 "r_mbytes_per_sec": 0, 00:09:29.760 "w_mbytes_per_sec": 0 00:09:29.760 }, 00:09:29.760 "claimed": true, 00:09:29.760 "claim_type": "exclusive_write", 00:09:29.760 "zoned": false, 00:09:29.760 "supported_io_types": { 00:09:29.760 "read": true, 00:09:29.760 "write": true, 00:09:29.760 "unmap": true, 00:09:29.760 "write_zeroes": true, 00:09:29.760 "flush": true, 00:09:29.760 "reset": true, 00:09:29.760 "compare": false, 00:09:29.760 "compare_and_write": false, 00:09:29.760 "abort": true, 00:09:29.760 "nvme_admin": false, 00:09:29.760 "nvme_io": false 00:09:29.760 }, 00:09:29.760 "memory_domains": [ 00:09:29.760 { 00:09:29.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.760 "dma_device_type": 2 00:09:29.760 } 00:09:29.760 ], 00:09:29.760 "driver_specific": {} 00:09:29.760 } 00:09:29.760 ] 00:09:29.760 19:10:07 -- common/autotest_common.sh@893 -- # return 0 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:29.760 19:10:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.019 19:10:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:30.019 "name": "Existed_Raid", 00:09:30.019 "uuid": "a8d988bf-cb6c-11ee-af6b-4feeebbbadda", 00:09:30.019 "strip_size_kb": 64, 00:09:30.019 "state": "online", 00:09:30.019 "raid_level": "concat", 00:09:30.019 "superblock": true, 00:09:30.019 "num_base_bdevs": 3, 00:09:30.019 "num_base_bdevs_discovered": 3, 00:09:30.019 "num_base_bdevs_operational": 3, 00:09:30.019 "base_bdevs_list": [ 00:09:30.019 { 00:09:30.019 "name": "BaseBdev1", 00:09:30.019 "uuid": "a866c68d-cb6c-11ee-af6b-4feeebbbadda", 00:09:30.019 "is_configured": true, 00:09:30.019 "data_offset": 2048, 00:09:30.019 "data_size": 63488 00:09:30.019 }, 00:09:30.019 { 00:09:30.019 "name": "BaseBdev2", 00:09:30.019 "uuid": "a949da5d-cb6c-11ee-af6b-4feeebbbadda", 00:09:30.019 "is_configured": true, 00:09:30.019 "data_offset": 2048, 00:09:30.019 "data_size": 63488 00:09:30.019 }, 00:09:30.019 { 00:09:30.019 "name": "BaseBdev3", 00:09:30.019 "uuid": "aa1348e3-cb6c-11ee-af6b-4feeebbbadda", 00:09:30.019 "is_configured": true, 00:09:30.019 "data_offset": 2048, 00:09:30.019 "data_size": 63488 
00:09:30.019 } 00:09:30.019 ] 00:09:30.019 }' 00:09:30.019 19:10:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:30.019 19:10:07 -- common/autotest_common.sh@10 -- # set +x 00:09:30.277 19:10:07 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:30.536 [2024-02-14 19:10:07.868207] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:30.536 [2024-02-14 19:10:07.868231] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.536 [2024-02-14 19:10:07.868244] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.536 19:10:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.794 19:10:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:30.794 "name": "Existed_Raid", 00:09:30.794 "uuid": "a8d988bf-cb6c-11ee-af6b-4feeebbbadda", 00:09:30.794 "strip_size_kb": 64, 00:09:30.794 "state": "offline", 00:09:30.794 "raid_level": "concat", 00:09:30.794 "superblock": true, 00:09:30.794 "num_base_bdevs": 3, 00:09:30.794 "num_base_bdevs_discovered": 2, 00:09:30.794 "num_base_bdevs_operational": 2, 00:09:30.794 "base_bdevs_list": [ 00:09:30.794 { 00:09:30.795 "name": null, 00:09:30.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.795 "is_configured": false, 00:09:30.795 "data_offset": 2048, 00:09:30.795 "data_size": 63488 00:09:30.795 }, 00:09:30.795 { 00:09:30.795 "name": "BaseBdev2", 00:09:30.795 "uuid": "a949da5d-cb6c-11ee-af6b-4feeebbbadda", 00:09:30.795 "is_configured": true, 00:09:30.795 "data_offset": 2048, 00:09:30.795 "data_size": 63488 00:09:30.795 }, 00:09:30.795 { 00:09:30.795 "name": "BaseBdev3", 00:09:30.795 "uuid": "aa1348e3-cb6c-11ee-af6b-4feeebbbadda", 00:09:30.795 "is_configured": true, 00:09:30.795 "data_offset": 2048, 00:09:30.795 "data_size": 63488 00:09:30.795 } 00:09:30.795 ] 00:09:30.795 }' 00:09:30.795 19:10:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:30.795 19:10:08 -- common/autotest_common.sh@10 -- # set +x 00:09:31.053 19:10:08 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:31.053 19:10:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:31.053 19:10:08 
-- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.053 19:10:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:31.311 19:10:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:31.311 19:10:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:31.311 19:10:08 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:31.570 [2024-02-14 19:10:08.777370] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:31.570 19:10:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:31.570 19:10:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:31.570 19:10:08 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.570 19:10:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:31.829 19:10:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:31.829 19:10:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:31.829 19:10:09 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:32.087 [2024-02-14 19:10:09.314512] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.087 [2024-02-14 19:10:09.314549] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ab96a00 name Existed_Raid, state offline 00:09:32.087 19:10:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:32.087 19:10:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:32.087 19:10:09 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.087 19:10:09 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:32.346 19:10:09 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:32.346 19:10:09 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:32.346 19:10:09 -- bdev/bdev_raid.sh@287 -- # killprocess 51308 00:09:32.346 19:10:09 -- common/autotest_common.sh@924 -- # '[' -z 51308 ']' 00:09:32.346 19:10:09 -- common/autotest_common.sh@928 -- # kill -0 51308 00:09:32.346 19:10:09 -- common/autotest_common.sh@929 -- # uname 00:09:32.346 19:10:09 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:09:32.346 19:10:09 -- common/autotest_common.sh@932 -- # tail -1 00:09:32.346 19:10:09 -- common/autotest_common.sh@932 -- # ps -c -o command 51308 00:09:32.346 19:10:09 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:09:32.346 killing process with pid 51308 00:09:32.346 19:10:09 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:09:32.346 19:10:09 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 51308' 00:09:32.346 19:10:09 -- common/autotest_common.sh@943 -- # kill 51308 00:09:32.346 [2024-02-14 19:10:09.602939] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.346 19:10:09 -- common/autotest_common.sh@948 -- # wait 51308 00:09:32.346 [2024-02-14 19:10:09.602989] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:32.605 00:09:32.605 real 0m10.705s 00:09:32.605 user 0m18.318s 00:09:32.605 sys 0m2.220s 00:09:32.605 19:10:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:32.605 19:10:09 -- common/autotest_common.sh@10 -- # set 
+x 00:09:32.605 ************************************ 00:09:32.605 END TEST raid_state_function_test_sb 00:09:32.605 ************************************ 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:32.605 19:10:09 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:09:32.605 19:10:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:32.605 19:10:09 -- common/autotest_common.sh@10 -- # set +x 00:09:32.605 ************************************ 00:09:32.605 START TEST raid_superblock_test 00:09:32.605 ************************************ 00:09:32.605 19:10:09 -- common/autotest_common.sh@1102 -- # raid_superblock_test concat 3 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@357 -- # raid_pid=51544 00:09:32.605 19:10:09 -- bdev/bdev_raid.sh@358 -- # waitforlisten 51544 /var/tmp/spdk-raid.sock 00:09:32.605 19:10:09 -- common/autotest_common.sh@817 -- # '[' -z 51544 ']' 00:09:32.605 19:10:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:32.605 19:10:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:32.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:32.605 19:10:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:32.605 19:10:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:32.605 19:10:09 -- common/autotest_common.sh@10 -- # set +x 00:09:32.605 [2024-02-14 19:10:09.914194] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:09:32.605 [2024-02-14 19:10:09.914398] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:33.540 EAL: TSC is not safe to use in SMP mode 00:09:33.540 EAL: TSC is not invariant 00:09:33.540 [2024-02-14 19:10:10.694310] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.540 [2024-02-14 19:10:10.818582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.540 [2024-02-14 19:10:10.819347] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.540 [2024-02-14 19:10:10.819364] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.540 19:10:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:33.540 19:10:10 -- common/autotest_common.sh@850 -- # return 0 00:09:33.540 19:10:10 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:09:33.540 19:10:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:33.540 19:10:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:09:33.540 19:10:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:09:33.540 19:10:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:33.540 19:10:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:33.540 19:10:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:33.540 19:10:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:33.540 19:10:10 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:33.799 malloc1 00:09:33.799 19:10:11 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:34.058 [2024-02-14 19:10:11.374336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:34.058 [2024-02-14 19:10:11.374395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.058 [2024-02-14 19:10:11.375052] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8298e7780 00:09:34.058 [2024-02-14 19:10:11.375092] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.058 [2024-02-14 19:10:11.376200] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.058 [2024-02-14 19:10:11.376231] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:34.058 pt1 00:09:34.058 19:10:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:34.058 19:10:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:34.058 19:10:11 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:09:34.058 19:10:11 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:09:34.058 19:10:11 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:34.058 19:10:11 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:34.058 19:10:11 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:34.058 19:10:11 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:34.058 19:10:11 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:34.317 malloc2 00:09:34.317 19:10:11 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:34.577 [2024-02-14 19:10:11.894410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:34.577 [2024-02-14 19:10:11.894478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.577 [2024-02-14 19:10:11.894541] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8298e7c80 00:09:34.577 [2024-02-14 19:10:11.894549] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.577 [2024-02-14 19:10:11.895376] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.577 [2024-02-14 19:10:11.895406] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:34.577 pt2 00:09:34.577 19:10:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:34.577 19:10:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:34.577 19:10:11 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:09:34.577 19:10:11 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:09:34.577 19:10:11 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:34.577 19:10:11 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:34.577 19:10:11 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:34.577 19:10:11 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:34.577 19:10:11 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:34.847 malloc3 00:09:34.847 19:10:12 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:35.106 [2024-02-14 19:10:12.394466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:35.106 [2024-02-14 19:10:12.394534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.106 [2024-02-14 19:10:12.394569] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8298e8180 00:09:35.106 [2024-02-14 19:10:12.394576] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.106 [2024-02-14 19:10:12.395449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.106 [2024-02-14 19:10:12.395480] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:35.106 pt3 00:09:35.106 19:10:12 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:35.106 19:10:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:35.106 19:10:12 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:09:35.364 [2024-02-14 19:10:12.618501] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:35.364 [2024-02-14 19:10:12.619250] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:35.364 [2024-02-14 19:10:12.619273] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:35.364 [2024-02-14 19:10:12.619332] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x8298e8400 00:09:35.364 [2024-02-14 19:10:12.619337] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:35.364 [2024-02-14 19:10:12.619374] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82994ae20 00:09:35.364 [2024-02-14 19:10:12.619454] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x8298e8400 00:09:35.364 [2024-02-14 19:10:12.619458] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x8298e8400 00:09:35.364 [2024-02-14 19:10:12.619484] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.364 19:10:12 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:35.364 19:10:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:35.364 19:10:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:35.364 19:10:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:35.364 19:10:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:35.364 19:10:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:35.365 19:10:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:35.365 19:10:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:35.365 19:10:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:35.365 19:10:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:35.365 19:10:12 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.365 19:10:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.623 19:10:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:35.623 "name": "raid_bdev1", 00:09:35.623 "uuid": "adb361cc-cb6c-11ee-af6b-4feeebbbadda", 00:09:35.623 "strip_size_kb": 64, 00:09:35.623 "state": "online", 00:09:35.623 "raid_level": "concat", 00:09:35.623 "superblock": true, 00:09:35.623 "num_base_bdevs": 3, 00:09:35.623 "num_base_bdevs_discovered": 3, 00:09:35.623 "num_base_bdevs_operational": 3, 00:09:35.623 "base_bdevs_list": [ 00:09:35.623 { 00:09:35.623 "name": "pt1", 00:09:35.623 "uuid": "a2e2bd40-9f25-145b-b291-ce2ec8b80518", 00:09:35.623 "is_configured": true, 00:09:35.623 "data_offset": 2048, 00:09:35.623 "data_size": 63488 00:09:35.623 }, 00:09:35.623 { 00:09:35.623 "name": "pt2", 00:09:35.623 "uuid": "4dd87dfd-51f6-a451-bafe-fc30946518b7", 00:09:35.623 "is_configured": true, 00:09:35.623 "data_offset": 2048, 00:09:35.623 "data_size": 63488 00:09:35.623 }, 00:09:35.623 { 00:09:35.623 "name": "pt3", 00:09:35.623 "uuid": "4ab8f0e2-3574-7c50-88c4-dd4f61ffdf20", 00:09:35.623 "is_configured": true, 00:09:35.623 "data_offset": 2048, 00:09:35.623 "data_size": 63488 00:09:35.623 } 00:09:35.623 ] 00:09:35.623 }' 00:09:35.623 19:10:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:35.623 19:10:12 -- common/autotest_common.sh@10 -- # set +x 00:09:35.883 19:10:13 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:09:35.883 19:10:13 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:35.883 [2024-02-14 19:10:13.298539] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.141 19:10:13 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=adb361cc-cb6c-11ee-af6b-4feeebbbadda 00:09:36.141 19:10:13 -- bdev/bdev_raid.sh@380 -- # '[' -z adb361cc-cb6c-11ee-af6b-4feeebbbadda ']' 00:09:36.141 19:10:13 -- bdev/bdev_raid.sh@385 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:36.399 [2024-02-14 19:10:13.570535] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.399 [2024-02-14 19:10:13.570561] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.399 [2024-02-14 19:10:13.570586] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.399 [2024-02-14 19:10:13.570603] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.399 [2024-02-14 19:10:13.570608] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8298e8400 name raid_bdev1, state offline 00:09:36.399 19:10:13 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.399 19:10:13 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:09:36.399 19:10:13 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:09:36.399 19:10:13 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:09:36.399 19:10:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:36.399 19:10:13 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:36.658 19:10:14 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:36.658 19:10:14 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:36.917 19:10:14 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:36.917 19:10:14 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:37.175 19:10:14 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:37.175 19:10:14 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:37.434 19:10:14 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:09:37.434 19:10:14 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:37.434 19:10:14 -- common/autotest_common.sh@638 -- # local es=0 00:09:37.434 19:10:14 -- common/autotest_common.sh@640 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:37.434 19:10:14 -- common/autotest_common.sh@626 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.434 19:10:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:37.434 19:10:14 -- common/autotest_common.sh@630 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.434 19:10:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:37.434 19:10:14 -- common/autotest_common.sh@632 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.434 19:10:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:37.434 19:10:14 -- common/autotest_common.sh@632 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.434 19:10:14 -- common/autotest_common.sh@632 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:37.434 19:10:14 -- common/autotest_common.sh@641 -- 
# /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:37.693 [2024-02-14 19:10:14.994731] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:37.693 [2024-02-14 19:10:14.995483] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:37.693 [2024-02-14 19:10:14.995504] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:37.693 [2024-02-14 19:10:14.995520] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:09:37.693 [2024-02-14 19:10:14.995561] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:09:37.693 [2024-02-14 19:10:14.995570] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:09:37.693 [2024-02-14 19:10:14.995578] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:37.693 [2024-02-14 19:10:14.995582] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8298e8180 name raid_bdev1, state configuring 00:09:37.693 request: 00:09:37.693 { 00:09:37.693 "name": "raid_bdev1", 00:09:37.693 "raid_level": "concat", 00:09:37.693 "base_bdevs": [ 00:09:37.693 "malloc1", 00:09:37.693 "malloc2", 00:09:37.693 "malloc3" 00:09:37.693 ], 00:09:37.693 "superblock": false, 00:09:37.693 "strip_size_kb": 64, 00:09:37.693 "method": "bdev_raid_create", 00:09:37.693 "req_id": 1 00:09:37.693 } 00:09:37.693 Got JSON-RPC error response 00:09:37.693 response: 00:09:37.693 { 00:09:37.693 "code": -17, 00:09:37.693 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:37.693 } 00:09:37.693 19:10:15 -- common/autotest_common.sh@641 -- # es=1 00:09:37.693 19:10:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:37.693 19:10:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:37.693 19:10:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:37.693 19:10:15 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.693 19:10:15 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:09:37.951 19:10:15 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:09:37.951 19:10:15 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:09:37.951 19:10:15 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:38.210 [2024-02-14 19:10:15.482740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:38.210 [2024-02-14 19:10:15.482811] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.210 [2024-02-14 19:10:15.482846] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8298e7c80 00:09:38.210 [2024-02-14 19:10:15.482854] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.210 [2024-02-14 19:10:15.483739] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.210 [2024-02-14 19:10:15.483766] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:38.210 [2024-02-14 19:10:15.483793] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:38.210 [2024-02-14 
19:10:15.483806] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:38.210 pt1 00:09:38.210 19:10:15 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:38.210 19:10:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:38.210 19:10:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:38.210 19:10:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:38.210 19:10:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:38.210 19:10:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:38.210 19:10:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:38.210 19:10:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:38.210 19:10:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:38.210 19:10:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:38.210 19:10:15 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.210 19:10:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.470 19:10:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:38.470 "name": "raid_bdev1", 00:09:38.470 "uuid": "adb361cc-cb6c-11ee-af6b-4feeebbbadda", 00:09:38.470 "strip_size_kb": 64, 00:09:38.470 "state": "configuring", 00:09:38.470 "raid_level": "concat", 00:09:38.470 "superblock": true, 00:09:38.470 "num_base_bdevs": 3, 00:09:38.470 "num_base_bdevs_discovered": 1, 00:09:38.470 "num_base_bdevs_operational": 3, 00:09:38.470 "base_bdevs_list": [ 00:09:38.470 { 00:09:38.470 "name": "pt1", 00:09:38.470 "uuid": "a2e2bd40-9f25-145b-b291-ce2ec8b80518", 00:09:38.470 "is_configured": true, 00:09:38.470 "data_offset": 2048, 00:09:38.470 "data_size": 63488 00:09:38.470 }, 00:09:38.470 { 00:09:38.470 "name": null, 00:09:38.470 "uuid": "4dd87dfd-51f6-a451-bafe-fc30946518b7", 00:09:38.470 "is_configured": false, 00:09:38.470 "data_offset": 2048, 00:09:38.470 "data_size": 63488 00:09:38.470 }, 00:09:38.470 { 00:09:38.470 "name": null, 00:09:38.470 "uuid": "4ab8f0e2-3574-7c50-88c4-dd4f61ffdf20", 00:09:38.470 "is_configured": false, 00:09:38.470 "data_offset": 2048, 00:09:38.470 "data_size": 63488 00:09:38.470 } 00:09:38.470 ] 00:09:38.470 }' 00:09:38.470 19:10:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:38.470 19:10:15 -- common/autotest_common.sh@10 -- # set +x 00:09:38.728 19:10:16 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:09:38.729 19:10:16 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:38.987 [2024-02-14 19:10:16.226812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:38.987 [2024-02-14 19:10:16.226890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.987 [2024-02-14 19:10:16.226940] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8298e8680 00:09:38.987 [2024-02-14 19:10:16.226949] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.987 [2024-02-14 19:10:16.227086] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.987 [2024-02-14 19:10:16.227094] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:38.987 [2024-02-14 19:10:16.227130] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev pt2 00:09:38.987 [2024-02-14 19:10:16.227139] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:38.987 pt2 00:09:38.987 19:10:16 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:39.247 [2024-02-14 19:10:16.438829] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:39.247 19:10:16 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:39.247 19:10:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:39.247 19:10:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:39.247 19:10:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:39.247 19:10:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:39.247 19:10:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:39.247 19:10:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:39.247 19:10:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:39.247 19:10:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:39.247 19:10:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:39.247 19:10:16 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.247 19:10:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.506 19:10:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:39.506 "name": "raid_bdev1", 00:09:39.506 "uuid": "adb361cc-cb6c-11ee-af6b-4feeebbbadda", 00:09:39.506 "strip_size_kb": 64, 00:09:39.506 "state": "configuring", 00:09:39.506 "raid_level": "concat", 00:09:39.506 "superblock": true, 00:09:39.506 "num_base_bdevs": 3, 00:09:39.506 "num_base_bdevs_discovered": 1, 00:09:39.506 "num_base_bdevs_operational": 3, 00:09:39.506 "base_bdevs_list": [ 00:09:39.506 { 00:09:39.506 "name": "pt1", 00:09:39.506 "uuid": "a2e2bd40-9f25-145b-b291-ce2ec8b80518", 00:09:39.506 "is_configured": true, 00:09:39.506 "data_offset": 2048, 00:09:39.506 "data_size": 63488 00:09:39.506 }, 00:09:39.506 { 00:09:39.506 "name": null, 00:09:39.506 "uuid": "4dd87dfd-51f6-a451-bafe-fc30946518b7", 00:09:39.506 "is_configured": false, 00:09:39.506 "data_offset": 2048, 00:09:39.506 "data_size": 63488 00:09:39.506 }, 00:09:39.506 { 00:09:39.506 "name": null, 00:09:39.506 "uuid": "4ab8f0e2-3574-7c50-88c4-dd4f61ffdf20", 00:09:39.506 "is_configured": false, 00:09:39.506 "data_offset": 2048, 00:09:39.506 "data_size": 63488 00:09:39.506 } 00:09:39.506 ] 00:09:39.506 }' 00:09:39.506 19:10:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:39.506 19:10:16 -- common/autotest_common.sh@10 -- # set +x 00:09:39.766 19:10:16 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:09:39.766 19:10:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:39.766 19:10:16 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:40.025 [2024-02-14 19:10:17.226933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:40.025 [2024-02-14 19:10:17.227012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.025 [2024-02-14 19:10:17.227047] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8298e8680 00:09:40.025 [2024-02-14 19:10:17.227055] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.025 [2024-02-14 19:10:17.227208] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.025 [2024-02-14 19:10:17.227217] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:40.025 [2024-02-14 19:10:17.227243] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:40.025 [2024-02-14 19:10:17.227251] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:40.025 pt2 00:09:40.025 19:10:17 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:40.025 19:10:17 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:40.025 19:10:17 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:40.025 [2024-02-14 19:10:17.430932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:40.025 [2024-02-14 19:10:17.430995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.025 [2024-02-14 19:10:17.431026] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8298e8400 00:09:40.025 [2024-02-14 19:10:17.431033] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.025 [2024-02-14 19:10:17.431165] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.025 [2024-02-14 19:10:17.431173] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:40.025 [2024-02-14 19:10:17.431196] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:09:40.025 [2024-02-14 19:10:17.431205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:40.025 [2024-02-14 19:10:17.431235] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x8298e7780 00:09:40.025 [2024-02-14 19:10:17.431239] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:40.025 [2024-02-14 19:10:17.431258] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82994ae20 00:09:40.025 [2024-02-14 19:10:17.431307] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x8298e7780 00:09:40.025 [2024-02-14 19:10:17.431311] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x8298e7780 00:09:40.025 [2024-02-14 19:10:17.431329] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.025 pt3 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:40.284 
19:10:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.284 19:10:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:40.284 "name": "raid_bdev1", 00:09:40.284 "uuid": "adb361cc-cb6c-11ee-af6b-4feeebbbadda", 00:09:40.284 "strip_size_kb": 64, 00:09:40.284 "state": "online", 00:09:40.284 "raid_level": "concat", 00:09:40.284 "superblock": true, 00:09:40.284 "num_base_bdevs": 3, 00:09:40.284 "num_base_bdevs_discovered": 3, 00:09:40.284 "num_base_bdevs_operational": 3, 00:09:40.284 "base_bdevs_list": [ 00:09:40.284 { 00:09:40.284 "name": "pt1", 00:09:40.284 "uuid": "a2e2bd40-9f25-145b-b291-ce2ec8b80518", 00:09:40.284 "is_configured": true, 00:09:40.284 "data_offset": 2048, 00:09:40.284 "data_size": 63488 00:09:40.284 }, 00:09:40.284 { 00:09:40.284 "name": "pt2", 00:09:40.284 "uuid": "4dd87dfd-51f6-a451-bafe-fc30946518b7", 00:09:40.284 "is_configured": true, 00:09:40.284 "data_offset": 2048, 00:09:40.284 "data_size": 63488 00:09:40.284 }, 00:09:40.284 { 00:09:40.284 "name": "pt3", 00:09:40.284 "uuid": "4ab8f0e2-3574-7c50-88c4-dd4f61ffdf20", 00:09:40.284 "is_configured": true, 00:09:40.284 "data_offset": 2048, 00:09:40.284 "data_size": 63488 00:09:40.285 } 00:09:40.285 ] 00:09:40.285 }' 00:09:40.285 19:10:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:40.285 19:10:17 -- common/autotest_common.sh@10 -- # set +x 00:09:40.851 19:10:17 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:40.852 19:10:17 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:09:40.852 [2024-02-14 19:10:18.231004] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.852 19:10:18 -- bdev/bdev_raid.sh@430 -- # '[' adb361cc-cb6c-11ee-af6b-4feeebbbadda '!=' adb361cc-cb6c-11ee-af6b-4feeebbbadda ']' 00:09:40.852 19:10:18 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:09:40.852 19:10:18 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:40.852 19:10:18 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:40.852 19:10:18 -- bdev/bdev_raid.sh@511 -- # killprocess 51544 00:09:40.852 19:10:18 -- common/autotest_common.sh@924 -- # '[' -z 51544 ']' 00:09:40.852 19:10:18 -- common/autotest_common.sh@928 -- # kill -0 51544 00:09:40.852 19:10:18 -- common/autotest_common.sh@929 -- # uname 00:09:40.852 19:10:18 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:09:40.852 19:10:18 -- common/autotest_common.sh@932 -- # ps -c -o command 51544 00:09:40.852 19:10:18 -- common/autotest_common.sh@932 -- # tail -1 00:09:40.852 19:10:18 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:09:40.852 killing process with pid 51544 00:09:40.852 19:10:18 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:09:40.852 19:10:18 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 51544' 00:09:40.852 19:10:18 -- common/autotest_common.sh@943 -- # kill 51544 00:09:40.852 [2024-02-14 19:10:18.261860] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.852 [2024-02-14 19:10:18.261885] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.852 [2024-02-14 19:10:18.261903] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
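The verify_raid_bdev_state checks traced above reduce to one RPC query plus a jq filter; a minimal sketch of that check, assuming a bdev_svc instance is still listening on the /var/tmp/spdk-raid.sock socket used in this run (paths, bdev names and expected field values are taken from the trace itself; the pass/fail comparison the test script performs is simplified here):

  # fetch the descriptor for raid_bdev1 from the running target (same call the test traces)
  info=$(/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
         | jq -r '.[] | select(.name == "raid_bdev1")')
  # once pt1, pt2 and pt3 are all claimed, the fields checked above read:
  #   "state": "online", "raid_level": "concat", "strip_size_kb": 64, "num_base_bdevs_discovered": 3
  echo "$info" | jq -r '.state, .raid_level, .num_base_bdevs_discovered'
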
00:09:40.852 [2024-02-14 19:10:18.261907] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8298e7780 name raid_bdev1, state offline 00:09:40.852 19:10:18 -- common/autotest_common.sh@948 -- # wait 51544 00:09:41.110 [2024-02-14 19:10:18.289922] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@513 -- # return 0 00:09:41.369 00:09:41.369 real 0m8.627s 00:09:41.369 user 0m14.330s 00:09:41.369 sys 0m2.098s 00:09:41.369 19:10:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:41.369 19:10:18 -- common/autotest_common.sh@10 -- # set +x 00:09:41.369 ************************************ 00:09:41.369 END TEST raid_superblock_test 00:09:41.369 ************************************ 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:09:41.369 19:10:18 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:09:41.369 19:10:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:41.369 19:10:18 -- common/autotest_common.sh@10 -- # set +x 00:09:41.369 ************************************ 00:09:41.369 START TEST raid_state_function_test 00:09:41.369 ************************************ 00:09:41.369 19:10:18 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid1 3 false 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@226 -- # raid_pid=51725 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:41.369 Process raid pid: 51725 00:09:41.369 19:10:18 -- 
bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51725' 00:09:41.369 19:10:18 -- bdev/bdev_raid.sh@228 -- # waitforlisten 51725 /var/tmp/spdk-raid.sock 00:09:41.369 19:10:18 -- common/autotest_common.sh@817 -- # '[' -z 51725 ']' 00:09:41.369 19:10:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:41.369 19:10:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:41.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:41.369 19:10:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:41.369 19:10:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:41.369 19:10:18 -- common/autotest_common.sh@10 -- # set +x 00:09:41.369 [2024-02-14 19:10:18.588678] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:09:41.369 [2024-02-14 19:10:18.588958] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:41.936 EAL: TSC is not safe to use in SMP mode 00:09:41.936 EAL: TSC is not invariant 00:09:41.936 [2024-02-14 19:10:19.343319] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.194 [2024-02-14 19:10:19.460321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.194 [2024-02-14 19:10:19.460838] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.194 [2024-02-14 19:10:19.460857] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.194 19:10:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:42.194 19:10:19 -- common/autotest_common.sh@850 -- # return 0 00:09:42.194 19:10:19 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:42.453 [2024-02-14 19:10:19.783939] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:42.453 [2024-02-14 19:10:19.784010] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:42.453 [2024-02-14 19:10:19.784015] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:42.453 [2024-02-14 19:10:19.784023] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:42.453 [2024-02-14 19:10:19.784027] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:42.453 [2024-02-14 19:10:19.784034] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:42.453 19:10:19 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:42.453 19:10:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:42.453 19:10:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:42.453 19:10:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:42.453 19:10:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:42.453 19:10:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:42.453 19:10:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:42.453 19:10:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:42.453 19:10:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:42.453 19:10:19 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:09:42.453 19:10:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.453 19:10:19 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.711 19:10:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:42.711 "name": "Existed_Raid", 00:09:42.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.711 "strip_size_kb": 0, 00:09:42.711 "state": "configuring", 00:09:42.711 "raid_level": "raid1", 00:09:42.711 "superblock": false, 00:09:42.711 "num_base_bdevs": 3, 00:09:42.711 "num_base_bdevs_discovered": 0, 00:09:42.711 "num_base_bdevs_operational": 3, 00:09:42.711 "base_bdevs_list": [ 00:09:42.711 { 00:09:42.711 "name": "BaseBdev1", 00:09:42.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.711 "is_configured": false, 00:09:42.711 "data_offset": 0, 00:09:42.711 "data_size": 0 00:09:42.711 }, 00:09:42.711 { 00:09:42.711 "name": "BaseBdev2", 00:09:42.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.711 "is_configured": false, 00:09:42.711 "data_offset": 0, 00:09:42.711 "data_size": 0 00:09:42.711 }, 00:09:42.711 { 00:09:42.711 "name": "BaseBdev3", 00:09:42.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.711 "is_configured": false, 00:09:42.711 "data_offset": 0, 00:09:42.711 "data_size": 0 00:09:42.711 } 00:09:42.711 ] 00:09:42.711 }' 00:09:42.711 19:10:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:42.711 19:10:20 -- common/autotest_common.sh@10 -- # set +x 00:09:43.277 19:10:20 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:43.277 [2024-02-14 19:10:20.627968] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:43.277 [2024-02-14 19:10:20.628001] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c641500 name Existed_Raid, state configuring 00:09:43.277 19:10:20 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:43.534 [2024-02-14 19:10:20.943999] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.534 [2024-02-14 19:10:20.944076] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.534 [2024-02-14 19:10:20.944081] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.534 [2024-02-14 19:10:20.944090] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.534 [2024-02-14 19:10:20.944093] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:43.534 [2024-02-14 19:10:20.944100] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:43.792 19:10:20 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:43.792 [2024-02-14 19:10:21.169402] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.792 BaseBdev1 00:09:43.792 19:10:21 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:43.792 19:10:21 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:09:43.792 19:10:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 
00:09:43.792 19:10:21 -- common/autotest_common.sh@887 -- # local i 00:09:43.792 19:10:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:43.792 19:10:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:43.792 19:10:21 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:44.050 19:10:21 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.314 [ 00:09:44.314 { 00:09:44.314 "name": "BaseBdev1", 00:09:44.314 "aliases": [ 00:09:44.314 "b2cbf080-cb6c-11ee-af6b-4feeebbbadda" 00:09:44.314 ], 00:09:44.314 "product_name": "Malloc disk", 00:09:44.314 "block_size": 512, 00:09:44.314 "num_blocks": 65536, 00:09:44.314 "uuid": "b2cbf080-cb6c-11ee-af6b-4feeebbbadda", 00:09:44.314 "assigned_rate_limits": { 00:09:44.314 "rw_ios_per_sec": 0, 00:09:44.314 "rw_mbytes_per_sec": 0, 00:09:44.314 "r_mbytes_per_sec": 0, 00:09:44.314 "w_mbytes_per_sec": 0 00:09:44.314 }, 00:09:44.314 "claimed": true, 00:09:44.314 "claim_type": "exclusive_write", 00:09:44.314 "zoned": false, 00:09:44.314 "supported_io_types": { 00:09:44.314 "read": true, 00:09:44.314 "write": true, 00:09:44.314 "unmap": true, 00:09:44.314 "write_zeroes": true, 00:09:44.314 "flush": true, 00:09:44.314 "reset": true, 00:09:44.314 "compare": false, 00:09:44.314 "compare_and_write": false, 00:09:44.314 "abort": true, 00:09:44.314 "nvme_admin": false, 00:09:44.314 "nvme_io": false 00:09:44.314 }, 00:09:44.314 "memory_domains": [ 00:09:44.314 { 00:09:44.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.314 "dma_device_type": 2 00:09:44.314 } 00:09:44.314 ], 00:09:44.314 "driver_specific": {} 00:09:44.314 } 00:09:44.314 ] 00:09:44.314 19:10:21 -- common/autotest_common.sh@893 -- # return 0 00:09:44.314 19:10:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:44.314 19:10:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:44.314 19:10:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:44.314 19:10:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:44.314 19:10:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:44.314 19:10:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:44.314 19:10:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:44.314 19:10:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:44.314 19:10:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:44.314 19:10:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:44.314 19:10:21 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.314 19:10:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.588 19:10:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:44.588 "name": "Existed_Raid", 00:09:44.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.588 "strip_size_kb": 0, 00:09:44.588 "state": "configuring", 00:09:44.588 "raid_level": "raid1", 00:09:44.588 "superblock": false, 00:09:44.588 "num_base_bdevs": 3, 00:09:44.588 "num_base_bdevs_discovered": 1, 00:09:44.588 "num_base_bdevs_operational": 3, 00:09:44.588 "base_bdevs_list": [ 00:09:44.588 { 00:09:44.589 "name": "BaseBdev1", 00:09:44.589 "uuid": "b2cbf080-cb6c-11ee-af6b-4feeebbbadda", 00:09:44.589 "is_configured": true, 00:09:44.589 
"data_offset": 0, 00:09:44.589 "data_size": 65536 00:09:44.589 }, 00:09:44.589 { 00:09:44.589 "name": "BaseBdev2", 00:09:44.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.589 "is_configured": false, 00:09:44.589 "data_offset": 0, 00:09:44.589 "data_size": 0 00:09:44.589 }, 00:09:44.589 { 00:09:44.589 "name": "BaseBdev3", 00:09:44.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.589 "is_configured": false, 00:09:44.589 "data_offset": 0, 00:09:44.589 "data_size": 0 00:09:44.589 } 00:09:44.589 ] 00:09:44.589 }' 00:09:44.589 19:10:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:44.589 19:10:21 -- common/autotest_common.sh@10 -- # set +x 00:09:44.847 19:10:22 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:45.106 [2024-02-14 19:10:22.472101] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.106 [2024-02-14 19:10:22.472144] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c641500 name Existed_Raid, state configuring 00:09:45.106 19:10:22 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:09:45.106 19:10:22 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:45.365 [2024-02-14 19:10:22.748123] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.365 [2024-02-14 19:10:22.749213] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.365 [2024-02-14 19:10:22.749262] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.365 [2024-02-14 19:10:22.749267] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:45.365 [2024-02-14 19:10:22.749276] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.365 19:10:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.932 19:10:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:45.932 "name": "Existed_Raid", 00:09:45.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.932 "strip_size_kb": 0, 00:09:45.932 "state": "configuring", 00:09:45.932 "raid_level": "raid1", 00:09:45.932 "superblock": false, 00:09:45.932 
"num_base_bdevs": 3, 00:09:45.932 "num_base_bdevs_discovered": 1, 00:09:45.932 "num_base_bdevs_operational": 3, 00:09:45.932 "base_bdevs_list": [ 00:09:45.932 { 00:09:45.932 "name": "BaseBdev1", 00:09:45.932 "uuid": "b2cbf080-cb6c-11ee-af6b-4feeebbbadda", 00:09:45.932 "is_configured": true, 00:09:45.932 "data_offset": 0, 00:09:45.932 "data_size": 65536 00:09:45.932 }, 00:09:45.932 { 00:09:45.932 "name": "BaseBdev2", 00:09:45.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.932 "is_configured": false, 00:09:45.932 "data_offset": 0, 00:09:45.932 "data_size": 0 00:09:45.932 }, 00:09:45.932 { 00:09:45.932 "name": "BaseBdev3", 00:09:45.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.932 "is_configured": false, 00:09:45.932 "data_offset": 0, 00:09:45.932 "data_size": 0 00:09:45.932 } 00:09:45.932 ] 00:09:45.932 }' 00:09:45.932 19:10:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:45.932 19:10:23 -- common/autotest_common.sh@10 -- # set +x 00:09:46.190 19:10:23 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:46.190 [2024-02-14 19:10:23.608341] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.449 BaseBdev2 00:09:46.449 19:10:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:46.449 19:10:23 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:09:46.449 19:10:23 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:46.449 19:10:23 -- common/autotest_common.sh@887 -- # local i 00:09:46.449 19:10:23 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:46.449 19:10:23 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:46.449 19:10:23 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:46.449 19:10:23 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:46.706 [ 00:09:46.706 { 00:09:46.706 "name": "BaseBdev2", 00:09:46.706 "aliases": [ 00:09:46.706 "b440467d-cb6c-11ee-af6b-4feeebbbadda" 00:09:46.706 ], 00:09:46.706 "product_name": "Malloc disk", 00:09:46.706 "block_size": 512, 00:09:46.706 "num_blocks": 65536, 00:09:46.706 "uuid": "b440467d-cb6c-11ee-af6b-4feeebbbadda", 00:09:46.706 "assigned_rate_limits": { 00:09:46.706 "rw_ios_per_sec": 0, 00:09:46.706 "rw_mbytes_per_sec": 0, 00:09:46.706 "r_mbytes_per_sec": 0, 00:09:46.706 "w_mbytes_per_sec": 0 00:09:46.706 }, 00:09:46.706 "claimed": true, 00:09:46.707 "claim_type": "exclusive_write", 00:09:46.707 "zoned": false, 00:09:46.707 "supported_io_types": { 00:09:46.707 "read": true, 00:09:46.707 "write": true, 00:09:46.707 "unmap": true, 00:09:46.707 "write_zeroes": true, 00:09:46.707 "flush": true, 00:09:46.707 "reset": true, 00:09:46.707 "compare": false, 00:09:46.707 "compare_and_write": false, 00:09:46.707 "abort": true, 00:09:46.707 "nvme_admin": false, 00:09:46.707 "nvme_io": false 00:09:46.707 }, 00:09:46.707 "memory_domains": [ 00:09:46.707 { 00:09:46.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.707 "dma_device_type": 2 00:09:46.707 } 00:09:46.707 ], 00:09:46.707 "driver_specific": {} 00:09:46.707 } 00:09:46.707 ] 00:09:46.965 19:10:24 -- common/autotest_common.sh@893 -- # return 0 00:09:46.965 19:10:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:46.965 19:10:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:46.965 
19:10:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.965 19:10:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:46.965 19:10:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:46.965 19:10:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:46.965 19:10:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:46.965 19:10:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:46.965 19:10:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:46.965 19:10:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:46.965 19:10:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:46.965 19:10:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:46.965 19:10:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.965 19:10:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.223 19:10:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:47.223 "name": "Existed_Raid", 00:09:47.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.223 "strip_size_kb": 0, 00:09:47.223 "state": "configuring", 00:09:47.223 "raid_level": "raid1", 00:09:47.223 "superblock": false, 00:09:47.223 "num_base_bdevs": 3, 00:09:47.223 "num_base_bdevs_discovered": 2, 00:09:47.223 "num_base_bdevs_operational": 3, 00:09:47.223 "base_bdevs_list": [ 00:09:47.223 { 00:09:47.223 "name": "BaseBdev1", 00:09:47.223 "uuid": "b2cbf080-cb6c-11ee-af6b-4feeebbbadda", 00:09:47.223 "is_configured": true, 00:09:47.223 "data_offset": 0, 00:09:47.223 "data_size": 65536 00:09:47.223 }, 00:09:47.223 { 00:09:47.223 "name": "BaseBdev2", 00:09:47.223 "uuid": "b440467d-cb6c-11ee-af6b-4feeebbbadda", 00:09:47.223 "is_configured": true, 00:09:47.223 "data_offset": 0, 00:09:47.223 "data_size": 65536 00:09:47.223 }, 00:09:47.223 { 00:09:47.223 "name": "BaseBdev3", 00:09:47.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.223 "is_configured": false, 00:09:47.223 "data_offset": 0, 00:09:47.223 "data_size": 0 00:09:47.223 } 00:09:47.223 ] 00:09:47.223 }' 00:09:47.223 19:10:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:47.223 19:10:24 -- common/autotest_common.sh@10 -- # set +x 00:09:47.483 19:10:24 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.741 [2024-02-14 19:10:24.996316] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.741 [2024-02-14 19:10:24.996346] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c641a00 00:09:47.741 [2024-02-14 19:10:24.996351] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:47.741 [2024-02-14 19:10:24.996376] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c6a4ec0 00:09:47.741 [2024-02-14 19:10:24.996464] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c641a00 00:09:47.741 [2024-02-14 19:10:24.996469] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c641a00 00:09:47.741 [2024-02-14 19:10:24.996498] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.741 BaseBdev3 00:09:47.741 19:10:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:09:47.741 19:10:25 -- common/autotest_common.sh@885 
-- # local bdev_name=BaseBdev3 00:09:47.741 19:10:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:47.741 19:10:25 -- common/autotest_common.sh@887 -- # local i 00:09:47.741 19:10:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:47.741 19:10:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:47.741 19:10:25 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:48.000 19:10:25 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:48.259 [ 00:09:48.259 { 00:09:48.259 "name": "BaseBdev3", 00:09:48.259 "aliases": [ 00:09:48.259 "b514129f-cb6c-11ee-af6b-4feeebbbadda" 00:09:48.259 ], 00:09:48.259 "product_name": "Malloc disk", 00:09:48.259 "block_size": 512, 00:09:48.259 "num_blocks": 65536, 00:09:48.259 "uuid": "b514129f-cb6c-11ee-af6b-4feeebbbadda", 00:09:48.259 "assigned_rate_limits": { 00:09:48.259 "rw_ios_per_sec": 0, 00:09:48.259 "rw_mbytes_per_sec": 0, 00:09:48.259 "r_mbytes_per_sec": 0, 00:09:48.259 "w_mbytes_per_sec": 0 00:09:48.259 }, 00:09:48.259 "claimed": true, 00:09:48.259 "claim_type": "exclusive_write", 00:09:48.259 "zoned": false, 00:09:48.259 "supported_io_types": { 00:09:48.259 "read": true, 00:09:48.259 "write": true, 00:09:48.259 "unmap": true, 00:09:48.259 "write_zeroes": true, 00:09:48.259 "flush": true, 00:09:48.259 "reset": true, 00:09:48.259 "compare": false, 00:09:48.259 "compare_and_write": false, 00:09:48.259 "abort": true, 00:09:48.259 "nvme_admin": false, 00:09:48.259 "nvme_io": false 00:09:48.259 }, 00:09:48.259 "memory_domains": [ 00:09:48.259 { 00:09:48.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.259 "dma_device_type": 2 00:09:48.259 } 00:09:48.259 ], 00:09:48.259 "driver_specific": {} 00:09:48.259 } 00:09:48.259 ] 00:09:48.259 19:10:25 -- common/autotest_common.sh@893 -- # return 0 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.259 19:10:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.518 19:10:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:48.518 "name": "Existed_Raid", 00:09:48.518 "uuid": "b5141826-cb6c-11ee-af6b-4feeebbbadda", 00:09:48.518 "strip_size_kb": 0, 00:09:48.518 "state": "online", 00:09:48.518 "raid_level": "raid1", 00:09:48.518 "superblock": false, 00:09:48.518 "num_base_bdevs": 3, 00:09:48.518 "num_base_bdevs_discovered": 3, 
00:09:48.518 "num_base_bdevs_operational": 3, 00:09:48.518 "base_bdevs_list": [ 00:09:48.518 { 00:09:48.518 "name": "BaseBdev1", 00:09:48.518 "uuid": "b2cbf080-cb6c-11ee-af6b-4feeebbbadda", 00:09:48.518 "is_configured": true, 00:09:48.518 "data_offset": 0, 00:09:48.518 "data_size": 65536 00:09:48.518 }, 00:09:48.518 { 00:09:48.518 "name": "BaseBdev2", 00:09:48.518 "uuid": "b440467d-cb6c-11ee-af6b-4feeebbbadda", 00:09:48.518 "is_configured": true, 00:09:48.518 "data_offset": 0, 00:09:48.518 "data_size": 65536 00:09:48.518 }, 00:09:48.518 { 00:09:48.518 "name": "BaseBdev3", 00:09:48.518 "uuid": "b514129f-cb6c-11ee-af6b-4feeebbbadda", 00:09:48.518 "is_configured": true, 00:09:48.518 "data_offset": 0, 00:09:48.518 "data_size": 65536 00:09:48.518 } 00:09:48.518 ] 00:09:48.518 }' 00:09:48.518 19:10:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:48.518 19:10:25 -- common/autotest_common.sh@10 -- # set +x 00:09:48.777 19:10:26 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:49.035 [2024-02-14 19:10:26.348250] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@196 -- # return 0 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.035 19:10:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.603 19:10:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:49.603 "name": "Existed_Raid", 00:09:49.603 "uuid": "b5141826-cb6c-11ee-af6b-4feeebbbadda", 00:09:49.603 "strip_size_kb": 0, 00:09:49.603 "state": "online", 00:09:49.603 "raid_level": "raid1", 00:09:49.603 "superblock": false, 00:09:49.603 "num_base_bdevs": 3, 00:09:49.603 "num_base_bdevs_discovered": 2, 00:09:49.603 "num_base_bdevs_operational": 2, 00:09:49.603 "base_bdevs_list": [ 00:09:49.603 { 00:09:49.603 "name": null, 00:09:49.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.603 "is_configured": false, 00:09:49.603 "data_offset": 0, 00:09:49.603 "data_size": 65536 00:09:49.603 }, 00:09:49.603 { 00:09:49.603 "name": "BaseBdev2", 00:09:49.603 "uuid": "b440467d-cb6c-11ee-af6b-4feeebbbadda", 00:09:49.603 "is_configured": true, 00:09:49.603 "data_offset": 0, 00:09:49.603 "data_size": 65536 00:09:49.603 }, 00:09:49.603 { 00:09:49.603 "name": "BaseBdev3", 00:09:49.603 "uuid": 
"b514129f-cb6c-11ee-af6b-4feeebbbadda", 00:09:49.603 "is_configured": true, 00:09:49.603 "data_offset": 0, 00:09:49.603 "data_size": 65536 00:09:49.603 } 00:09:49.603 ] 00:09:49.603 }' 00:09:49.603 19:10:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:49.603 19:10:26 -- common/autotest_common.sh@10 -- # set +x 00:09:49.862 19:10:27 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:49.862 19:10:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:49.862 19:10:27 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.862 19:10:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:50.121 19:10:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:50.121 19:10:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.121 19:10:27 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:50.380 [2024-02-14 19:10:27.597148] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:50.380 19:10:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:50.380 19:10:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:50.380 19:10:27 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:50.380 19:10:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:50.639 19:10:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:50.639 19:10:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.639 19:10:27 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:50.639 [2024-02-14 19:10:28.049827] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:50.639 [2024-02-14 19:10:28.049849] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.639 [2024-02-14 19:10:28.049860] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.639 [2024-02-14 19:10:28.054500] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.639 [2024-02-14 19:10:28.054516] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c641a00 name Existed_Raid, state offline 00:09:50.902 19:10:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:50.902 19:10:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:50.902 19:10:28 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:50.902 19:10:28 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@287 -- # killprocess 51725 00:09:51.180 19:10:28 -- common/autotest_common.sh@924 -- # '[' -z 51725 ']' 00:09:51.180 19:10:28 -- common/autotest_common.sh@928 -- # kill -0 51725 00:09:51.180 19:10:28 -- common/autotest_common.sh@929 -- # uname 00:09:51.180 19:10:28 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:09:51.180 19:10:28 -- common/autotest_common.sh@932 -- # ps -c -o command 51725 00:09:51.180 19:10:28 -- common/autotest_common.sh@932 -- # tail -1 00:09:51.180 19:10:28 -- common/autotest_common.sh@932 -- # 
process_name=bdev_svc 00:09:51.180 19:10:28 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:09:51.180 killing process with pid 51725 00:09:51.180 19:10:28 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 51725' 00:09:51.180 19:10:28 -- common/autotest_common.sh@943 -- # kill 51725 00:09:51.180 [2024-02-14 19:10:28.354850] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.180 [2024-02-14 19:10:28.354894] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.180 19:10:28 -- common/autotest_common.sh@948 -- # wait 51725 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:51.180 00:09:51.180 real 0m9.926s 00:09:51.180 user 0m17.012s 00:09:51.180 sys 0m2.122s 00:09:51.180 19:10:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:51.180 ************************************ 00:09:51.180 19:10:28 -- common/autotest_common.sh@10 -- # set +x 00:09:51.180 END TEST raid_state_function_test 00:09:51.180 ************************************ 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:51.180 19:10:28 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:09:51.180 19:10:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:51.180 19:10:28 -- common/autotest_common.sh@10 -- # set +x 00:09:51.180 ************************************ 00:09:51.180 START TEST raid_state_function_test_sb 00:09:51.180 ************************************ 00:09:51.180 19:10:28 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid1 3 true 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:51.180 19:10:28 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:51.181 19:10:28 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:51.181 19:10:28 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:51.181 19:10:28 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:09:51.181 19:10:28 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:09:51.181 19:10:28 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:09:51.181 19:10:28 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:09:51.181 19:10:28 -- bdev/bdev_raid.sh@226 -- # raid_pid=51958 00:09:51.181 
Process raid pid: 51958 00:09:51.181 19:10:28 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51958' 00:09:51.181 19:10:28 -- bdev/bdev_raid.sh@228 -- # waitforlisten 51958 /var/tmp/spdk-raid.sock 00:09:51.181 19:10:28 -- common/autotest_common.sh@817 -- # '[' -z 51958 ']' 00:09:51.181 19:10:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:51.181 19:10:28 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:51.181 19:10:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:51.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:51.181 19:10:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:51.181 19:10:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:51.181 19:10:28 -- common/autotest_common.sh@10 -- # set +x 00:09:51.181 [2024-02-14 19:10:28.566634] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:09:51.181 [2024-02-14 19:10:28.566953] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:52.119 EAL: TSC is not safe to use in SMP mode 00:09:52.119 EAL: TSC is not invariant 00:09:52.119 [2024-02-14 19:10:29.333179] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.119 [2024-02-14 19:10:29.415109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.119 [2024-02-14 19:10:29.415524] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.119 [2024-02-14 19:10:29.415535] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.119 19:10:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:52.119 19:10:29 -- common/autotest_common.sh@850 -- # return 0 00:09:52.119 19:10:29 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:52.378 [2024-02-14 19:10:29.734078] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:52.378 [2024-02-14 19:10:29.734138] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:52.378 [2024-02-14 19:10:29.734143] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:52.378 [2024-02-14 19:10:29.734152] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:52.378 [2024-02-14 19:10:29.734155] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:52.378 [2024-02-14 19:10:29.734162] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:52.378 19:10:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.378 19:10:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:52.378 19:10:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:52.378 19:10:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:52.378 19:10:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:52.378 19:10:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:52.378 19:10:29 -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:09:52.378 19:10:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:52.378 19:10:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:52.378 19:10:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:52.378 19:10:29 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.378 19:10:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.637 19:10:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:52.637 "name": "Existed_Raid", 00:09:52.637 "uuid": "b7e7031f-cb6c-11ee-af6b-4feeebbbadda", 00:09:52.637 "strip_size_kb": 0, 00:09:52.637 "state": "configuring", 00:09:52.637 "raid_level": "raid1", 00:09:52.637 "superblock": true, 00:09:52.637 "num_base_bdevs": 3, 00:09:52.637 "num_base_bdevs_discovered": 0, 00:09:52.637 "num_base_bdevs_operational": 3, 00:09:52.637 "base_bdevs_list": [ 00:09:52.637 { 00:09:52.637 "name": "BaseBdev1", 00:09:52.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.637 "is_configured": false, 00:09:52.637 "data_offset": 0, 00:09:52.637 "data_size": 0 00:09:52.637 }, 00:09:52.637 { 00:09:52.637 "name": "BaseBdev2", 00:09:52.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.637 "is_configured": false, 00:09:52.637 "data_offset": 0, 00:09:52.637 "data_size": 0 00:09:52.637 }, 00:09:52.637 { 00:09:52.637 "name": "BaseBdev3", 00:09:52.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.637 "is_configured": false, 00:09:52.637 "data_offset": 0, 00:09:52.637 "data_size": 0 00:09:52.637 } 00:09:52.637 ] 00:09:52.637 }' 00:09:52.637 19:10:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:52.637 19:10:30 -- common/autotest_common.sh@10 -- # set +x 00:09:53.205 19:10:30 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:53.205 [2024-02-14 19:10:30.574082] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.205 [2024-02-14 19:10:30.574128] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b0cc500 name Existed_Raid, state configuring 00:09:53.205 19:10:30 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:53.463 [2024-02-14 19:10:30.818098] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.463 [2024-02-14 19:10:30.818149] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.463 [2024-02-14 19:10:30.818154] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.463 [2024-02-14 19:10:30.818162] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.463 [2024-02-14 19:10:30.818166] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.463 [2024-02-14 19:10:30.818173] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.463 19:10:30 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:53.721 [2024-02-14 19:10:31.083026] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.721 BaseBdev1 00:09:53.721 19:10:31 -- 
bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:53.721 19:10:31 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:09:53.721 19:10:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:53.721 19:10:31 -- common/autotest_common.sh@887 -- # local i 00:09:53.721 19:10:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:53.721 19:10:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:53.721 19:10:31 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:53.980 19:10:31 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:54.240 [ 00:09:54.240 { 00:09:54.240 "name": "BaseBdev1", 00:09:54.240 "aliases": [ 00:09:54.240 "b8b4b409-cb6c-11ee-af6b-4feeebbbadda" 00:09:54.240 ], 00:09:54.240 "product_name": "Malloc disk", 00:09:54.240 "block_size": 512, 00:09:54.240 "num_blocks": 65536, 00:09:54.240 "uuid": "b8b4b409-cb6c-11ee-af6b-4feeebbbadda", 00:09:54.240 "assigned_rate_limits": { 00:09:54.240 "rw_ios_per_sec": 0, 00:09:54.240 "rw_mbytes_per_sec": 0, 00:09:54.240 "r_mbytes_per_sec": 0, 00:09:54.240 "w_mbytes_per_sec": 0 00:09:54.240 }, 00:09:54.240 "claimed": true, 00:09:54.240 "claim_type": "exclusive_write", 00:09:54.240 "zoned": false, 00:09:54.240 "supported_io_types": { 00:09:54.240 "read": true, 00:09:54.240 "write": true, 00:09:54.240 "unmap": true, 00:09:54.240 "write_zeroes": true, 00:09:54.240 "flush": true, 00:09:54.240 "reset": true, 00:09:54.240 "compare": false, 00:09:54.240 "compare_and_write": false, 00:09:54.240 "abort": true, 00:09:54.240 "nvme_admin": false, 00:09:54.240 "nvme_io": false 00:09:54.240 }, 00:09:54.240 "memory_domains": [ 00:09:54.240 { 00:09:54.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.240 "dma_device_type": 2 00:09:54.240 } 00:09:54.240 ], 00:09:54.240 "driver_specific": {} 00:09:54.240 } 00:09:54.240 ] 00:09:54.240 19:10:31 -- common/autotest_common.sh@893 -- # return 0 00:09:54.240 19:10:31 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.240 19:10:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:54.240 19:10:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:54.240 19:10:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:54.240 19:10:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:54.240 19:10:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:54.240 19:10:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:54.240 19:10:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:54.240 19:10:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:54.240 19:10:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:54.240 19:10:31 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.240 19:10:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.499 19:10:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:54.499 "name": "Existed_Raid", 00:09:54.500 "uuid": "b88c6bbb-cb6c-11ee-af6b-4feeebbbadda", 00:09:54.500 "strip_size_kb": 0, 00:09:54.500 "state": "configuring", 00:09:54.500 "raid_level": "raid1", 00:09:54.500 "superblock": true, 00:09:54.500 "num_base_bdevs": 3, 00:09:54.500 "num_base_bdevs_discovered": 1, 00:09:54.500 
"num_base_bdevs_operational": 3, 00:09:54.500 "base_bdevs_list": [ 00:09:54.500 { 00:09:54.500 "name": "BaseBdev1", 00:09:54.500 "uuid": "b8b4b409-cb6c-11ee-af6b-4feeebbbadda", 00:09:54.500 "is_configured": true, 00:09:54.500 "data_offset": 2048, 00:09:54.500 "data_size": 63488 00:09:54.500 }, 00:09:54.500 { 00:09:54.500 "name": "BaseBdev2", 00:09:54.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.500 "is_configured": false, 00:09:54.500 "data_offset": 0, 00:09:54.500 "data_size": 0 00:09:54.500 }, 00:09:54.500 { 00:09:54.500 "name": "BaseBdev3", 00:09:54.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.500 "is_configured": false, 00:09:54.500 "data_offset": 0, 00:09:54.500 "data_size": 0 00:09:54.500 } 00:09:54.500 ] 00:09:54.500 }' 00:09:54.500 19:10:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:54.500 19:10:31 -- common/autotest_common.sh@10 -- # set +x 00:09:55.067 19:10:32 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:55.067 [2024-02-14 19:10:32.426143] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.067 [2024-02-14 19:10:32.426182] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b0cc500 name Existed_Raid, state configuring 00:09:55.067 19:10:32 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:09:55.067 19:10:32 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:55.326 19:10:32 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:55.584 BaseBdev1 00:09:55.584 19:10:32 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:09:55.584 19:10:32 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:09:55.584 19:10:32 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:55.584 19:10:32 -- common/autotest_common.sh@887 -- # local i 00:09:55.584 19:10:32 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:55.584 19:10:32 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:55.584 19:10:32 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:55.843 19:10:33 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:56.103 [ 00:09:56.103 { 00:09:56.103 "name": "BaseBdev1", 00:09:56.103 "aliases": [ 00:09:56.103 "b9d0a460-cb6c-11ee-af6b-4feeebbbadda" 00:09:56.103 ], 00:09:56.103 "product_name": "Malloc disk", 00:09:56.103 "block_size": 512, 00:09:56.103 "num_blocks": 65536, 00:09:56.103 "uuid": "b9d0a460-cb6c-11ee-af6b-4feeebbbadda", 00:09:56.103 "assigned_rate_limits": { 00:09:56.103 "rw_ios_per_sec": 0, 00:09:56.103 "rw_mbytes_per_sec": 0, 00:09:56.103 "r_mbytes_per_sec": 0, 00:09:56.103 "w_mbytes_per_sec": 0 00:09:56.103 }, 00:09:56.103 "claimed": false, 00:09:56.103 "zoned": false, 00:09:56.103 "supported_io_types": { 00:09:56.103 "read": true, 00:09:56.103 "write": true, 00:09:56.103 "unmap": true, 00:09:56.103 "write_zeroes": true, 00:09:56.103 "flush": true, 00:09:56.103 "reset": true, 00:09:56.103 "compare": false, 00:09:56.103 "compare_and_write": false, 00:09:56.103 "abort": true, 00:09:56.103 "nvme_admin": false, 00:09:56.103 "nvme_io": false 00:09:56.103 }, 00:09:56.103 "memory_domains": [ 
00:09:56.103 { 00:09:56.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.103 "dma_device_type": 2 00:09:56.103 } 00:09:56.103 ], 00:09:56.103 "driver_specific": {} 00:09:56.103 } 00:09:56.103 ] 00:09:56.103 19:10:33 -- common/autotest_common.sh@893 -- # return 0 00:09:56.103 19:10:33 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:56.363 [2024-02-14 19:10:33.670941] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.363 [2024-02-14 19:10:33.671292] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.363 [2024-02-14 19:10:33.671337] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.363 [2024-02-14 19:10:33.671342] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.363 [2024-02-14 19:10:33.671363] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.363 19:10:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.630 19:10:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:56.630 "name": "Existed_Raid", 00:09:56.630 "uuid": "ba3fbacb-cb6c-11ee-af6b-4feeebbbadda", 00:09:56.630 "strip_size_kb": 0, 00:09:56.630 "state": "configuring", 00:09:56.630 "raid_level": "raid1", 00:09:56.630 "superblock": true, 00:09:56.630 "num_base_bdevs": 3, 00:09:56.630 "num_base_bdevs_discovered": 1, 00:09:56.630 "num_base_bdevs_operational": 3, 00:09:56.630 "base_bdevs_list": [ 00:09:56.630 { 00:09:56.630 "name": "BaseBdev1", 00:09:56.630 "uuid": "b9d0a460-cb6c-11ee-af6b-4feeebbbadda", 00:09:56.630 "is_configured": true, 00:09:56.630 "data_offset": 2048, 00:09:56.630 "data_size": 63488 00:09:56.630 }, 00:09:56.630 { 00:09:56.630 "name": "BaseBdev2", 00:09:56.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.630 "is_configured": false, 00:09:56.630 "data_offset": 0, 00:09:56.630 "data_size": 0 00:09:56.630 }, 00:09:56.630 { 00:09:56.630 "name": "BaseBdev3", 00:09:56.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.630 "is_configured": false, 00:09:56.630 "data_offset": 0, 00:09:56.630 "data_size": 0 00:09:56.630 } 00:09:56.630 ] 00:09:56.630 }' 00:09:56.630 19:10:33 -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:09:56.630 19:10:33 -- common/autotest_common.sh@10 -- # set +x 00:09:56.891 19:10:34 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.149 [2024-02-14 19:10:34.511053] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.149 BaseBdev2 00:09:57.149 19:10:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:57.149 19:10:34 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:09:57.149 19:10:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:57.149 19:10:34 -- common/autotest_common.sh@887 -- # local i 00:09:57.149 19:10:34 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:57.149 19:10:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:57.149 19:10:34 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:57.407 19:10:34 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.666 [ 00:09:57.666 { 00:09:57.666 "name": "BaseBdev2", 00:09:57.666 "aliases": [ 00:09:57.666 "babfe823-cb6c-11ee-af6b-4feeebbbadda" 00:09:57.666 ], 00:09:57.666 "product_name": "Malloc disk", 00:09:57.666 "block_size": 512, 00:09:57.666 "num_blocks": 65536, 00:09:57.666 "uuid": "babfe823-cb6c-11ee-af6b-4feeebbbadda", 00:09:57.666 "assigned_rate_limits": { 00:09:57.666 "rw_ios_per_sec": 0, 00:09:57.666 "rw_mbytes_per_sec": 0, 00:09:57.666 "r_mbytes_per_sec": 0, 00:09:57.666 "w_mbytes_per_sec": 0 00:09:57.666 }, 00:09:57.666 "claimed": true, 00:09:57.666 "claim_type": "exclusive_write", 00:09:57.666 "zoned": false, 00:09:57.666 "supported_io_types": { 00:09:57.666 "read": true, 00:09:57.666 "write": true, 00:09:57.666 "unmap": true, 00:09:57.666 "write_zeroes": true, 00:09:57.666 "flush": true, 00:09:57.666 "reset": true, 00:09:57.666 "compare": false, 00:09:57.666 "compare_and_write": false, 00:09:57.666 "abort": true, 00:09:57.666 "nvme_admin": false, 00:09:57.666 "nvme_io": false 00:09:57.666 }, 00:09:57.666 "memory_domains": [ 00:09:57.666 { 00:09:57.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.666 "dma_device_type": 2 00:09:57.666 } 00:09:57.666 ], 00:09:57.666 "driver_specific": {} 00:09:57.666 } 00:09:57.666 ] 00:09:57.925 19:10:35 -- common/autotest_common.sh@893 -- # return 0 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:57.925 "name": "Existed_Raid", 00:09:57.925 "uuid": "ba3fbacb-cb6c-11ee-af6b-4feeebbbadda", 00:09:57.925 "strip_size_kb": 0, 00:09:57.925 "state": "configuring", 00:09:57.925 "raid_level": "raid1", 00:09:57.925 "superblock": true, 00:09:57.925 "num_base_bdevs": 3, 00:09:57.925 "num_base_bdevs_discovered": 2, 00:09:57.925 "num_base_bdevs_operational": 3, 00:09:57.925 "base_bdevs_list": [ 00:09:57.925 { 00:09:57.925 "name": "BaseBdev1", 00:09:57.925 "uuid": "b9d0a460-cb6c-11ee-af6b-4feeebbbadda", 00:09:57.925 "is_configured": true, 00:09:57.925 "data_offset": 2048, 00:09:57.925 "data_size": 63488 00:09:57.925 }, 00:09:57.925 { 00:09:57.925 "name": "BaseBdev2", 00:09:57.925 "uuid": "babfe823-cb6c-11ee-af6b-4feeebbbadda", 00:09:57.925 "is_configured": true, 00:09:57.925 "data_offset": 2048, 00:09:57.925 "data_size": 63488 00:09:57.925 }, 00:09:57.925 { 00:09:57.925 "name": "BaseBdev3", 00:09:57.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.925 "is_configured": false, 00:09:57.925 "data_offset": 0, 00:09:57.925 "data_size": 0 00:09:57.925 } 00:09:57.925 ] 00:09:57.925 }' 00:09:57.925 19:10:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:57.925 19:10:35 -- common/autotest_common.sh@10 -- # set +x 00:09:58.491 19:10:35 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:58.751 [2024-02-14 19:10:35.911129] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.751 [2024-02-14 19:10:35.911193] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b0cca00 00:09:58.751 [2024-02-14 19:10:35.911198] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:58.751 [2024-02-14 19:10:35.911214] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b12fec0 00:09:58.751 [2024-02-14 19:10:35.911254] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b0cca00 00:09:58.751 [2024-02-14 19:10:35.911258] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b0cca00 00:09:58.751 [2024-02-14 19:10:35.911274] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.751 BaseBdev3 00:09:58.751 19:10:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:09:58.751 19:10:35 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:09:58.751 19:10:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:58.751 19:10:35 -- common/autotest_common.sh@887 -- # local i 00:09:58.751 19:10:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:58.751 19:10:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:58.751 19:10:35 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:58.751 19:10:36 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:59.010 [ 00:09:59.010 { 00:09:59.010 "name": "BaseBdev3", 00:09:59.010 "aliases": [ 00:09:59.010 "bb958b08-cb6c-11ee-af6b-4feeebbbadda" 00:09:59.010 ], 00:09:59.010 "product_name": "Malloc disk", 00:09:59.010 "block_size": 512, 
00:09:59.010 "num_blocks": 65536, 00:09:59.010 "uuid": "bb958b08-cb6c-11ee-af6b-4feeebbbadda", 00:09:59.010 "assigned_rate_limits": { 00:09:59.010 "rw_ios_per_sec": 0, 00:09:59.010 "rw_mbytes_per_sec": 0, 00:09:59.010 "r_mbytes_per_sec": 0, 00:09:59.010 "w_mbytes_per_sec": 0 00:09:59.010 }, 00:09:59.010 "claimed": true, 00:09:59.010 "claim_type": "exclusive_write", 00:09:59.010 "zoned": false, 00:09:59.010 "supported_io_types": { 00:09:59.010 "read": true, 00:09:59.010 "write": true, 00:09:59.010 "unmap": true, 00:09:59.010 "write_zeroes": true, 00:09:59.010 "flush": true, 00:09:59.010 "reset": true, 00:09:59.010 "compare": false, 00:09:59.010 "compare_and_write": false, 00:09:59.010 "abort": true, 00:09:59.010 "nvme_admin": false, 00:09:59.010 "nvme_io": false 00:09:59.010 }, 00:09:59.010 "memory_domains": [ 00:09:59.010 { 00:09:59.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.010 "dma_device_type": 2 00:09:59.010 } 00:09:59.010 ], 00:09:59.010 "driver_specific": {} 00:09:59.010 } 00:09:59.010 ] 00:09:59.010 19:10:36 -- common/autotest_common.sh@893 -- # return 0 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:59.010 19:10:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.269 19:10:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:59.269 "name": "Existed_Raid", 00:09:59.269 "uuid": "ba3fbacb-cb6c-11ee-af6b-4feeebbbadda", 00:09:59.269 "strip_size_kb": 0, 00:09:59.269 "state": "online", 00:09:59.269 "raid_level": "raid1", 00:09:59.269 "superblock": true, 00:09:59.269 "num_base_bdevs": 3, 00:09:59.269 "num_base_bdevs_discovered": 3, 00:09:59.269 "num_base_bdevs_operational": 3, 00:09:59.269 "base_bdevs_list": [ 00:09:59.269 { 00:09:59.269 "name": "BaseBdev1", 00:09:59.269 "uuid": "b9d0a460-cb6c-11ee-af6b-4feeebbbadda", 00:09:59.269 "is_configured": true, 00:09:59.269 "data_offset": 2048, 00:09:59.269 "data_size": 63488 00:09:59.269 }, 00:09:59.269 { 00:09:59.269 "name": "BaseBdev2", 00:09:59.269 "uuid": "babfe823-cb6c-11ee-af6b-4feeebbbadda", 00:09:59.269 "is_configured": true, 00:09:59.269 "data_offset": 2048, 00:09:59.269 "data_size": 63488 00:09:59.269 }, 00:09:59.269 { 00:09:59.269 "name": "BaseBdev3", 00:09:59.269 "uuid": "bb958b08-cb6c-11ee-af6b-4feeebbbadda", 00:09:59.269 "is_configured": true, 00:09:59.269 "data_offset": 2048, 00:09:59.269 "data_size": 63488 00:09:59.269 } 00:09:59.269 ] 00:09:59.269 }' 00:09:59.269 19:10:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:59.269 19:10:36 -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.527 19:10:36 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:59.786 [2024-02-14 19:10:37.199052] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@196 -- # return 0 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:00.045 "name": "Existed_Raid", 00:10:00.045 "uuid": "ba3fbacb-cb6c-11ee-af6b-4feeebbbadda", 00:10:00.045 "strip_size_kb": 0, 00:10:00.045 "state": "online", 00:10:00.045 "raid_level": "raid1", 00:10:00.045 "superblock": true, 00:10:00.045 "num_base_bdevs": 3, 00:10:00.045 "num_base_bdevs_discovered": 2, 00:10:00.045 "num_base_bdevs_operational": 2, 00:10:00.045 "base_bdevs_list": [ 00:10:00.045 { 00:10:00.045 "name": null, 00:10:00.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.045 "is_configured": false, 00:10:00.045 "data_offset": 2048, 00:10:00.045 "data_size": 63488 00:10:00.045 }, 00:10:00.045 { 00:10:00.045 "name": "BaseBdev2", 00:10:00.045 "uuid": "babfe823-cb6c-11ee-af6b-4feeebbbadda", 00:10:00.045 "is_configured": true, 00:10:00.045 "data_offset": 2048, 00:10:00.045 "data_size": 63488 00:10:00.045 }, 00:10:00.045 { 00:10:00.045 "name": "BaseBdev3", 00:10:00.045 "uuid": "bb958b08-cb6c-11ee-af6b-4feeebbbadda", 00:10:00.045 "is_configured": true, 00:10:00.045 "data_offset": 2048, 00:10:00.045 "data_size": 63488 00:10:00.045 } 00:10:00.045 ] 00:10:00.045 }' 00:10:00.045 19:10:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:00.045 19:10:37 -- common/autotest_common.sh@10 -- # set +x 00:10:00.611 19:10:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:10:00.611 19:10:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:00.611 19:10:37 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.611 19:10:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:00.611 19:10:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:00.611 19:10:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.611 19:10:38 -- 
bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:01.179 [2024-02-14 19:10:38.299778] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.179 19:10:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:01.179 19:10:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:01.179 19:10:38 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.179 19:10:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:01.437 19:10:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:01.437 19:10:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.437 19:10:38 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:01.437 [2024-02-14 19:10:38.840456] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.437 [2024-02-14 19:10:38.840477] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.437 [2024-02-14 19:10:38.840486] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.437 [2024-02-14 19:10:38.845127] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.438 [2024-02-14 19:10:38.845143] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b0cca00 name Existed_Raid, state offline 00:10:01.696 19:10:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:01.696 19:10:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:01.696 19:10:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:10:01.696 19:10:38 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@287 -- # killprocess 51958 00:10:01.955 19:10:39 -- common/autotest_common.sh@924 -- # '[' -z 51958 ']' 00:10:01.955 19:10:39 -- common/autotest_common.sh@928 -- # kill -0 51958 00:10:01.955 19:10:39 -- common/autotest_common.sh@929 -- # uname 00:10:01.955 19:10:39 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:10:01.955 19:10:39 -- common/autotest_common.sh@932 -- # tail -1 00:10:01.955 19:10:39 -- common/autotest_common.sh@932 -- # ps -c -o command 51958 00:10:01.955 19:10:39 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:10:01.955 19:10:39 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:10:01.955 killing process with pid 51958 00:10:01.955 19:10:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 51958' 00:10:01.955 19:10:39 -- common/autotest_common.sh@943 -- # kill 51958 00:10:01.955 [2024-02-14 19:10:39.155099] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.955 [2024-02-14 19:10:39.155148] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.955 19:10:39 -- common/autotest_common.sh@948 -- # wait 51958 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:10:01.955 00:10:01.955 real 0m10.749s 00:10:01.955 user 0m18.480s 00:10:01.955 sys 0m2.272s 00:10:01.955 19:10:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:01.955 
************************************ 00:10:01.955 END TEST raid_state_function_test_sb 00:10:01.955 ************************************ 00:10:01.955 19:10:39 -- common/autotest_common.sh@10 -- # set +x 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:01.955 19:10:39 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:10:01.955 19:10:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:01.955 19:10:39 -- common/autotest_common.sh@10 -- # set +x 00:10:01.955 ************************************ 00:10:01.955 START TEST raid_superblock_test 00:10:01.955 ************************************ 00:10:01.955 19:10:39 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid1 3 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@357 -- # raid_pid=52194 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@358 -- # waitforlisten 52194 /var/tmp/spdk-raid.sock 00:10:01.955 19:10:39 -- common/autotest_common.sh@817 -- # '[' -z 52194 ']' 00:10:01.955 19:10:39 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:01.955 19:10:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:01.955 19:10:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:01.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:01.955 19:10:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:01.955 19:10:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:01.955 19:10:39 -- common/autotest_common.sh@10 -- # set +x 00:10:01.955 [2024-02-14 19:10:39.358151] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:10:01.955 [2024-02-14 19:10:39.358334] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:02.929 EAL: TSC is not safe to use in SMP mode 00:10:02.929 EAL: TSC is not invariant 00:10:02.929 [2024-02-14 19:10:40.133898] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.929 [2024-02-14 19:10:40.216387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.929 [2024-02-14 19:10:40.216845] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.929 [2024-02-14 19:10:40.216857] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.186 19:10:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:03.186 19:10:40 -- common/autotest_common.sh@850 -- # return 0 00:10:03.186 19:10:40 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:10:03.186 19:10:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:03.186 19:10:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:10:03.186 19:10:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:10:03.186 19:10:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:03.186 19:10:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:03.186 19:10:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:03.186 19:10:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:03.186 19:10:40 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:03.444 malloc1 00:10:03.444 19:10:40 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:03.444 [2024-02-14 19:10:40.831607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:03.444 [2024-02-14 19:10:40.831657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:03.444 [2024-02-14 19:10:40.832152] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d928780 00:10:03.444 [2024-02-14 19:10:40.832173] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:03.444 [2024-02-14 19:10:40.832853] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:03.444 [2024-02-14 19:10:40.832882] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:03.444 pt1 00:10:03.444 19:10:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:03.444 19:10:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:03.444 19:10:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:10:03.445 19:10:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:10:03.445 19:10:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:03.445 19:10:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:03.445 19:10:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:03.445 19:10:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:03.445 19:10:40 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:03.703 malloc2 00:10:03.962 19:10:41 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.221 [2024-02-14 19:10:41.387634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.221 [2024-02-14 19:10:41.387679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.221 [2024-02-14 19:10:41.387703] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d928c80 00:10:04.221 [2024-02-14 19:10:41.387711] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.221 [2024-02-14 19:10:41.388029] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.221 [2024-02-14 19:10:41.388054] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.221 pt2 00:10:04.221 19:10:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:04.221 19:10:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:04.221 19:10:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:10:04.221 19:10:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:10:04.221 19:10:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:04.221 19:10:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:04.221 19:10:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:04.221 19:10:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:04.221 19:10:41 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:10:04.221 malloc3 00:10:04.221 19:10:41 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:04.480 [2024-02-14 19:10:41.843668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:04.480 [2024-02-14 19:10:41.843707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.480 [2024-02-14 19:10:41.843731] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d929180 00:10:04.480 [2024-02-14 19:10:41.843755] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.480 [2024-02-14 19:10:41.844050] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.480 [2024-02-14 19:10:41.844073] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:04.480 pt3 00:10:04.480 19:10:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:04.480 19:10:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:04.480 19:10:41 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:10:04.739 [2024-02-14 19:10:42.079694] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:04.739 [2024-02-14 19:10:42.080042] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.739 [2024-02-14 19:10:42.080054] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:04.739 [2024-02-14 19:10:42.080099] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d929400 00:10:04.739 [2024-02-14 19:10:42.080103] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:04.739 [2024-02-14 19:10:42.080127] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d98be20 00:10:04.739 [2024-02-14 19:10:42.080175] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d929400 00:10:04.739 [2024-02-14 19:10:42.080179] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d929400 00:10:04.739 [2024-02-14 19:10:42.080197] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.739 19:10:42 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:04.739 19:10:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:04.739 19:10:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:04.739 19:10:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:04.739 19:10:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:04.739 19:10:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:04.739 19:10:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:04.739 19:10:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:04.739 19:10:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:04.739 19:10:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:04.739 19:10:42 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.739 19:10:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.998 19:10:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:04.999 "name": "raid_bdev1", 00:10:04.999 "uuid": "bf42cdc7-cb6c-11ee-af6b-4feeebbbadda", 00:10:04.999 "strip_size_kb": 0, 00:10:04.999 "state": "online", 00:10:04.999 "raid_level": "raid1", 00:10:04.999 "superblock": true, 00:10:04.999 "num_base_bdevs": 3, 00:10:04.999 "num_base_bdevs_discovered": 3, 00:10:04.999 "num_base_bdevs_operational": 3, 00:10:04.999 "base_bdevs_list": [ 00:10:04.999 { 00:10:04.999 "name": "pt1", 00:10:04.999 "uuid": "429d30a8-4498-fb57-aea6-7533025e153a", 00:10:04.999 "is_configured": true, 00:10:04.999 "data_offset": 2048, 00:10:04.999 "data_size": 63488 00:10:04.999 }, 00:10:04.999 { 00:10:04.999 "name": "pt2", 00:10:04.999 "uuid": "5b9f927c-316e-b45c-abe1-17e2aea47851", 00:10:04.999 "is_configured": true, 00:10:04.999 "data_offset": 2048, 00:10:04.999 "data_size": 63488 00:10:04.999 }, 00:10:04.999 { 00:10:04.999 "name": "pt3", 00:10:04.999 "uuid": "1906e7b4-6323-ad5f-bf2f-2798d163be79", 00:10:04.999 "is_configured": true, 00:10:04.999 "data_offset": 2048, 00:10:04.999 "data_size": 63488 00:10:04.999 } 00:10:04.999 ] 00:10:04.999 }' 00:10:04.999 19:10:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:04.999 19:10:42 -- common/autotest_common.sh@10 -- # set +x 00:10:05.567 19:10:42 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:10:05.567 19:10:42 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:05.567 [2024-02-14 19:10:42.947756] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.567 19:10:42 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=bf42cdc7-cb6c-11ee-af6b-4feeebbbadda 00:10:05.567 19:10:42 -- bdev/bdev_raid.sh@380 -- # '[' -z bf42cdc7-cb6c-11ee-af6b-4feeebbbadda ']' 00:10:05.567 19:10:42 -- bdev/bdev_raid.sh@385 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:05.826 [2024-02-14 19:10:43.215745] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:05.826 [2024-02-14 19:10:43.215767] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.826 [2024-02-14 19:10:43.215787] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.826 [2024-02-14 19:10:43.215802] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.826 [2024-02-14 19:10:43.215806] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d929400 name raid_bdev1, state offline 00:10:05.826 19:10:43 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:05.826 19:10:43 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:10:06.392 19:10:43 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:10:06.392 19:10:43 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:10:06.392 19:10:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.392 19:10:43 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:06.392 19:10:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.392 19:10:43 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:06.651 19:10:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.651 19:10:43 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:06.910 19:10:44 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:06.910 19:10:44 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:07.169 19:10:44 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:10:07.169 19:10:44 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:07.169 19:10:44 -- common/autotest_common.sh@638 -- # local es=0 00:10:07.169 19:10:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:07.169 19:10:44 -- common/autotest_common.sh@626 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.169 19:10:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:07.169 19:10:44 -- common/autotest_common.sh@630 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.169 19:10:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:07.169 19:10:44 -- common/autotest_common.sh@632 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.169 19:10:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:07.169 19:10:44 -- common/autotest_common.sh@632 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.169 19:10:44 -- common/autotest_common.sh@632 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:07.169 19:10:44 -- common/autotest_common.sh@641 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:07.428 [2024-02-14 19:10:44.835846] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:07.428 [2024-02-14 19:10:44.836318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:07.428 [2024-02-14 19:10:44.836330] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:07.428 [2024-02-14 19:10:44.836341] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:10:07.428 [2024-02-14 19:10:44.836386] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:10:07.428 [2024-02-14 19:10:44.836397] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:10:07.428 [2024-02-14 19:10:44.836405] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:07.428 [2024-02-14 19:10:44.836409] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d929180 name raid_bdev1, state configuring 00:10:07.428 request: 00:10:07.428 { 00:10:07.428 "name": "raid_bdev1", 00:10:07.428 "raid_level": "raid1", 00:10:07.428 "base_bdevs": [ 00:10:07.428 "malloc1", 00:10:07.428 "malloc2", 00:10:07.428 "malloc3" 00:10:07.428 ], 00:10:07.428 "superblock": false, 00:10:07.428 "method": "bdev_raid_create", 00:10:07.428 "req_id": 1 00:10:07.428 } 00:10:07.428 Got JSON-RPC error response 00:10:07.428 response: 00:10:07.428 { 00:10:07.428 "code": -17, 00:10:07.428 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:07.428 } 00:10:07.687 19:10:44 -- common/autotest_common.sh@641 -- # es=1 00:10:07.687 19:10:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:07.687 19:10:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:07.687 19:10:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:07.687 19:10:44 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.687 19:10:44 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:07.946 [2024-02-14 19:10:45.315861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:07.946 [2024-02-14 19:10:45.315911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.946 [2024-02-14 19:10:45.315958] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d928c80 00:10:07.946 [2024-02-14 19:10:45.315966] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.946 [2024-02-14 19:10:45.316449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.946 [2024-02-14 19:10:45.316476] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:07.946 [2024-02-14 19:10:45.316497] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:10:07.946 [2024-02-14 19:10:45.316507] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:07.946 pt1 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.946 19:10:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.205 19:10:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:08.205 "name": "raid_bdev1", 00:10:08.205 "uuid": "bf42cdc7-cb6c-11ee-af6b-4feeebbbadda", 00:10:08.205 "strip_size_kb": 0, 00:10:08.205 "state": "configuring", 00:10:08.205 "raid_level": "raid1", 00:10:08.205 "superblock": true, 00:10:08.205 "num_base_bdevs": 3, 00:10:08.205 "num_base_bdevs_discovered": 1, 00:10:08.205 "num_base_bdevs_operational": 3, 00:10:08.205 "base_bdevs_list": [ 00:10:08.205 { 00:10:08.205 "name": "pt1", 00:10:08.205 "uuid": "429d30a8-4498-fb57-aea6-7533025e153a", 00:10:08.205 "is_configured": true, 00:10:08.205 "data_offset": 2048, 00:10:08.205 "data_size": 63488 00:10:08.205 }, 00:10:08.205 { 00:10:08.205 "name": null, 00:10:08.205 "uuid": "5b9f927c-316e-b45c-abe1-17e2aea47851", 00:10:08.205 "is_configured": false, 00:10:08.205 "data_offset": 2048, 00:10:08.205 "data_size": 63488 00:10:08.205 }, 00:10:08.205 { 00:10:08.205 "name": null, 00:10:08.205 "uuid": "1906e7b4-6323-ad5f-bf2f-2798d163be79", 00:10:08.205 "is_configured": false, 00:10:08.205 "data_offset": 2048, 00:10:08.205 "data_size": 63488 00:10:08.205 } 00:10:08.205 ] 00:10:08.205 }' 00:10:08.205 19:10:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:08.205 19:10:45 -- common/autotest_common.sh@10 -- # set +x 00:10:08.830 19:10:45 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:10:08.830 19:10:45 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:08.830 [2024-02-14 19:10:46.151924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:08.830 [2024-02-14 19:10:46.151980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.830 [2024-02-14 19:10:46.152007] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d929680 00:10:08.830 [2024-02-14 19:10:46.152015] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.830 [2024-02-14 19:10:46.152110] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.830 [2024-02-14 19:10:46.152120] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:08.830 [2024-02-14 19:10:46.152140] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt2 00:10:08.830 [2024-02-14 19:10:46.152147] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:08.830 pt2 00:10:08.830 19:10:46 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:09.101 [2024-02-14 19:10:46.343914] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:09.101 19:10:46 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:09.101 19:10:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:09.101 19:10:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:09.101 19:10:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:09.101 19:10:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:09.101 19:10:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:09.101 19:10:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:09.101 19:10:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:09.101 19:10:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:09.101 19:10:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:09.101 19:10:46 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.101 19:10:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.360 19:10:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:09.360 "name": "raid_bdev1", 00:10:09.360 "uuid": "bf42cdc7-cb6c-11ee-af6b-4feeebbbadda", 00:10:09.360 "strip_size_kb": 0, 00:10:09.360 "state": "configuring", 00:10:09.360 "raid_level": "raid1", 00:10:09.360 "superblock": true, 00:10:09.360 "num_base_bdevs": 3, 00:10:09.360 "num_base_bdevs_discovered": 1, 00:10:09.360 "num_base_bdevs_operational": 3, 00:10:09.360 "base_bdevs_list": [ 00:10:09.360 { 00:10:09.360 "name": "pt1", 00:10:09.360 "uuid": "429d30a8-4498-fb57-aea6-7533025e153a", 00:10:09.360 "is_configured": true, 00:10:09.360 "data_offset": 2048, 00:10:09.360 "data_size": 63488 00:10:09.360 }, 00:10:09.360 { 00:10:09.360 "name": null, 00:10:09.360 "uuid": "5b9f927c-316e-b45c-abe1-17e2aea47851", 00:10:09.360 "is_configured": false, 00:10:09.360 "data_offset": 2048, 00:10:09.360 "data_size": 63488 00:10:09.360 }, 00:10:09.360 { 00:10:09.360 "name": null, 00:10:09.360 "uuid": "1906e7b4-6323-ad5f-bf2f-2798d163be79", 00:10:09.360 "is_configured": false, 00:10:09.360 "data_offset": 2048, 00:10:09.360 "data_size": 63488 00:10:09.360 } 00:10:09.360 ] 00:10:09.360 }' 00:10:09.360 19:10:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:09.360 19:10:46 -- common/autotest_common.sh@10 -- # set +x 00:10:09.619 19:10:46 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:10:09.619 19:10:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:09.619 19:10:46 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:09.878 [2024-02-14 19:10:47.111955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:09.878 [2024-02-14 19:10:47.112020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.878 [2024-02-14 19:10:47.112048] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d929680 00:10:09.878 [2024-02-14 19:10:47.112055] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:10:09.878 [2024-02-14 19:10:47.112154] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.878 [2024-02-14 19:10:47.112164] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:09.878 [2024-02-14 19:10:47.112184] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:09.878 [2024-02-14 19:10:47.112191] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:09.878 pt2 00:10:09.878 19:10:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:09.878 19:10:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:09.878 19:10:47 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:10.137 [2024-02-14 19:10:47.379945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:10.137 [2024-02-14 19:10:47.379980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.137 [2024-02-14 19:10:47.379997] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d929400 00:10:10.137 [2024-02-14 19:10:47.380004] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.137 [2024-02-14 19:10:47.380062] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.137 [2024-02-14 19:10:47.380072] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:10.137 [2024-02-14 19:10:47.380087] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:10:10.137 [2024-02-14 19:10:47.380101] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:10.137 [2024-02-14 19:10:47.380122] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d928780 00:10:10.137 [2024-02-14 19:10:47.380125] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:10.137 [2024-02-14 19:10:47.380142] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d98be20 00:10:10.137 [2024-02-14 19:10:47.380184] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d928780 00:10:10.137 [2024-02-14 19:10:47.380188] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d928780 00:10:10.137 [2024-02-14 19:10:47.380205] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.137 pt3 00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@125 -- # local tmp 
00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:10.137 19:10:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.396 19:10:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:10.396 "name": "raid_bdev1", 00:10:10.396 "uuid": "bf42cdc7-cb6c-11ee-af6b-4feeebbbadda", 00:10:10.396 "strip_size_kb": 0, 00:10:10.396 "state": "online", 00:10:10.396 "raid_level": "raid1", 00:10:10.396 "superblock": true, 00:10:10.396 "num_base_bdevs": 3, 00:10:10.396 "num_base_bdevs_discovered": 3, 00:10:10.396 "num_base_bdevs_operational": 3, 00:10:10.396 "base_bdevs_list": [ 00:10:10.396 { 00:10:10.396 "name": "pt1", 00:10:10.396 "uuid": "429d30a8-4498-fb57-aea6-7533025e153a", 00:10:10.396 "is_configured": true, 00:10:10.396 "data_offset": 2048, 00:10:10.396 "data_size": 63488 00:10:10.396 }, 00:10:10.396 { 00:10:10.396 "name": "pt2", 00:10:10.396 "uuid": "5b9f927c-316e-b45c-abe1-17e2aea47851", 00:10:10.396 "is_configured": true, 00:10:10.396 "data_offset": 2048, 00:10:10.396 "data_size": 63488 00:10:10.396 }, 00:10:10.396 { 00:10:10.396 "name": "pt3", 00:10:10.396 "uuid": "1906e7b4-6323-ad5f-bf2f-2798d163be79", 00:10:10.396 "is_configured": true, 00:10:10.396 "data_offset": 2048, 00:10:10.396 "data_size": 63488 00:10:10.396 } 00:10:10.396 ] 00:10:10.396 }' 00:10:10.396 19:10:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:10.396 19:10:47 -- common/autotest_common.sh@10 -- # set +x 00:10:10.655 19:10:48 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:10.655 19:10:48 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:10:10.915 [2024-02-14 19:10:48.232011] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.915 19:10:48 -- bdev/bdev_raid.sh@430 -- # '[' bf42cdc7-cb6c-11ee-af6b-4feeebbbadda '!=' bf42cdc7-cb6c-11ee-af6b-4feeebbbadda ']' 00:10:10.915 19:10:48 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:10:10.915 19:10:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:10.915 19:10:48 -- bdev/bdev_raid.sh@196 -- # return 0 00:10:10.915 19:10:48 -- bdev/bdev_raid.sh@436 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:11.174 [2024-02-14 19:10:48.399974] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:11.174 19:10:48 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:11.174 19:10:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:11.174 19:10:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:11.174 19:10:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:11.174 19:10:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:11.174 19:10:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:11.174 19:10:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:11.174 19:10:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:11.174 19:10:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:11.174 19:10:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:11.174 19:10:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.174 19:10:48 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.433 19:10:48 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:11.433 "name": "raid_bdev1", 00:10:11.433 "uuid": "bf42cdc7-cb6c-11ee-af6b-4feeebbbadda", 00:10:11.433 "strip_size_kb": 0, 00:10:11.433 "state": "online", 00:10:11.433 "raid_level": "raid1", 00:10:11.433 "superblock": true, 00:10:11.433 "num_base_bdevs": 3, 00:10:11.433 "num_base_bdevs_discovered": 2, 00:10:11.433 "num_base_bdevs_operational": 2, 00:10:11.433 "base_bdevs_list": [ 00:10:11.433 { 00:10:11.433 "name": null, 00:10:11.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.433 "is_configured": false, 00:10:11.433 "data_offset": 2048, 00:10:11.433 "data_size": 63488 00:10:11.433 }, 00:10:11.433 { 00:10:11.433 "name": "pt2", 00:10:11.433 "uuid": "5b9f927c-316e-b45c-abe1-17e2aea47851", 00:10:11.433 "is_configured": true, 00:10:11.433 "data_offset": 2048, 00:10:11.433 "data_size": 63488 00:10:11.433 }, 00:10:11.433 { 00:10:11.433 "name": "pt3", 00:10:11.433 "uuid": "1906e7b4-6323-ad5f-bf2f-2798d163be79", 00:10:11.433 "is_configured": true, 00:10:11.433 "data_offset": 2048, 00:10:11.433 "data_size": 63488 00:10:11.433 } 00:10:11.433 ] 00:10:11.433 }' 00:10:11.433 19:10:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:11.433 19:10:48 -- common/autotest_common.sh@10 -- # set +x 00:10:11.692 19:10:48 -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:11.951 [2024-02-14 19:10:49.192003] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.951 [2024-02-14 19:10:49.192029] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.951 [2024-02-14 19:10:49.192051] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.951 [2024-02-14 19:10:49.192065] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.951 [2024-02-14 19:10:49.192069] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d928780 name raid_bdev1, state offline 00:10:11.951 19:10:49 -- bdev/bdev_raid.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:11.951 19:10:49 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:10:12.210 19:10:49 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:10:12.210 19:10:49 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:10:12.210 19:10:49 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:10:12.210 19:10:49 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:10:12.210 19:10:49 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:12.469 19:10:49 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:10:12.469 19:10:49 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:10:12.469 19:10:49 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:12.469 19:10:49 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:10:12.727 19:10:49 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:10:12.727 19:10:49 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:10:12.727 19:10:49 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:10:12.727 19:10:49 -- bdev/bdev_raid.sh@455 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.727 [2024-02-14 19:10:50.072059] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.727 [2024-02-14 19:10:50.072114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.727 [2024-02-14 19:10:50.072142] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d929400 00:10:12.727 [2024-02-14 19:10:50.072151] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.727 [2024-02-14 19:10:50.072697] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.727 [2024-02-14 19:10:50.072733] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.727 [2024-02-14 19:10:50.072756] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:12.727 [2024-02-14 19:10:50.072767] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.727 pt2 00:10:12.727 19:10:50 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:12.727 19:10:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:12.727 19:10:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:12.727 19:10:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:12.727 19:10:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:12.727 19:10:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:12.728 19:10:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:12.728 19:10:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:12.728 19:10:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:12.728 19:10:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:12.728 19:10:50 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.728 19:10:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.986 19:10:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:12.986 "name": "raid_bdev1", 00:10:12.986 "uuid": "bf42cdc7-cb6c-11ee-af6b-4feeebbbadda", 00:10:12.986 "strip_size_kb": 0, 00:10:12.986 "state": "configuring", 00:10:12.986 "raid_level": "raid1", 00:10:12.986 "superblock": true, 00:10:12.986 "num_base_bdevs": 3, 00:10:12.986 "num_base_bdevs_discovered": 1, 00:10:12.986 "num_base_bdevs_operational": 2, 00:10:12.986 "base_bdevs_list": [ 00:10:12.986 { 00:10:12.986 "name": null, 00:10:12.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.986 "is_configured": false, 00:10:12.986 "data_offset": 2048, 00:10:12.986 "data_size": 63488 00:10:12.986 }, 00:10:12.986 { 00:10:12.986 "name": "pt2", 00:10:12.986 "uuid": "5b9f927c-316e-b45c-abe1-17e2aea47851", 00:10:12.986 "is_configured": true, 00:10:12.986 "data_offset": 2048, 00:10:12.986 "data_size": 63488 00:10:12.986 }, 00:10:12.986 { 00:10:12.986 "name": null, 00:10:12.986 "uuid": "1906e7b4-6323-ad5f-bf2f-2798d163be79", 00:10:12.986 "is_configured": false, 00:10:12.986 "data_offset": 2048, 00:10:12.986 "data_size": 63488 00:10:12.986 } 00:10:12.986 ] 00:10:12.986 }' 00:10:12.986 19:10:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:12.986 19:10:50 -- common/autotest_common.sh@10 -- # set +x 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@462 -- # i=2 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@463 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:13.554 [2024-02-14 19:10:50.912129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:13.554 [2024-02-14 19:10:50.912177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.554 [2024-02-14 19:10:50.912203] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d928780 00:10:13.554 [2024-02-14 19:10:50.912211] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.554 [2024-02-14 19:10:50.912298] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.554 [2024-02-14 19:10:50.912308] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:13.554 [2024-02-14 19:10:50.912326] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:10:13.554 [2024-02-14 19:10:50.912332] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:13.554 [2024-02-14 19:10:50.912354] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d929180 00:10:13.554 [2024-02-14 19:10:50.912358] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.554 [2024-02-14 19:10:50.912374] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d98be20 00:10:13.554 [2024-02-14 19:10:50.912405] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d929180 00:10:13.554 [2024-02-14 19:10:50.912409] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d929180 00:10:13.554 [2024-02-14 19:10:50.912425] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.554 pt3 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:13.554 19:10:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.814 19:10:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:13.814 "name": "raid_bdev1", 00:10:13.814 "uuid": "bf42cdc7-cb6c-11ee-af6b-4feeebbbadda", 00:10:13.814 "strip_size_kb": 0, 00:10:13.814 "state": "online", 00:10:13.814 "raid_level": "raid1", 00:10:13.814 "superblock": true, 00:10:13.814 "num_base_bdevs": 3, 00:10:13.814 "num_base_bdevs_discovered": 2, 00:10:13.814 "num_base_bdevs_operational": 2, 00:10:13.814 "base_bdevs_list": [ 00:10:13.814 { 00:10:13.814 "name": null, 00:10:13.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.814 "is_configured": false, 
00:10:13.814 "data_offset": 2048, 00:10:13.814 "data_size": 63488 00:10:13.814 }, 00:10:13.814 { 00:10:13.814 "name": "pt2", 00:10:13.814 "uuid": "5b9f927c-316e-b45c-abe1-17e2aea47851", 00:10:13.814 "is_configured": true, 00:10:13.814 "data_offset": 2048, 00:10:13.814 "data_size": 63488 00:10:13.814 }, 00:10:13.814 { 00:10:13.814 "name": "pt3", 00:10:13.814 "uuid": "1906e7b4-6323-ad5f-bf2f-2798d163be79", 00:10:13.814 "is_configured": true, 00:10:13.814 "data_offset": 2048, 00:10:13.814 "data_size": 63488 00:10:13.814 } 00:10:13.814 ] 00:10:13.814 }' 00:10:13.814 19:10:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:13.814 19:10:51 -- common/autotest_common.sh@10 -- # set +x 00:10:14.073 19:10:51 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:10:14.073 19:10:51 -- bdev/bdev_raid.sh@470 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:14.332 [2024-02-14 19:10:51.684138] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.332 [2024-02-14 19:10:51.684162] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.332 [2024-02-14 19:10:51.684183] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.332 [2024-02-14 19:10:51.684196] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.332 [2024-02-14 19:10:51.684200] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d929180 name raid_bdev1, state offline 00:10:14.332 19:10:51 -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.332 19:10:51 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:10:14.591 19:10:51 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:10:14.591 19:10:51 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:10:14.591 19:10:51 -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:14.850 [2024-02-14 19:10:52.112210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:14.850 [2024-02-14 19:10:52.112269] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.850 [2024-02-14 19:10:52.112298] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d929680 00:10:14.851 [2024-02-14 19:10:52.112306] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.851 [2024-02-14 19:10:52.112843] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.851 [2024-02-14 19:10:52.112875] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:14.851 [2024-02-14 19:10:52.112915] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:10:14.851 [2024-02-14 19:10:52.112926] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:14.851 pt1 00:10:14.851 19:10:52 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:14.851 19:10:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:14.851 19:10:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:14.851 19:10:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:14.851 19:10:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
00:10:14.851 19:10:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:14.851 19:10:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:14.851 19:10:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:14.851 19:10:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:14.851 19:10:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:14.851 19:10:52 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.851 19:10:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.110 19:10:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:15.110 "name": "raid_bdev1", 00:10:15.110 "uuid": "bf42cdc7-cb6c-11ee-af6b-4feeebbbadda", 00:10:15.110 "strip_size_kb": 0, 00:10:15.110 "state": "configuring", 00:10:15.110 "raid_level": "raid1", 00:10:15.110 "superblock": true, 00:10:15.110 "num_base_bdevs": 3, 00:10:15.110 "num_base_bdevs_discovered": 1, 00:10:15.110 "num_base_bdevs_operational": 3, 00:10:15.110 "base_bdevs_list": [ 00:10:15.110 { 00:10:15.110 "name": "pt1", 00:10:15.110 "uuid": "429d30a8-4498-fb57-aea6-7533025e153a", 00:10:15.110 "is_configured": true, 00:10:15.110 "data_offset": 2048, 00:10:15.110 "data_size": 63488 00:10:15.110 }, 00:10:15.110 { 00:10:15.110 "name": null, 00:10:15.110 "uuid": "5b9f927c-316e-b45c-abe1-17e2aea47851", 00:10:15.110 "is_configured": false, 00:10:15.110 "data_offset": 2048, 00:10:15.110 "data_size": 63488 00:10:15.110 }, 00:10:15.110 { 00:10:15.110 "name": null, 00:10:15.110 "uuid": "1906e7b4-6323-ad5f-bf2f-2798d163be79", 00:10:15.110 "is_configured": false, 00:10:15.110 "data_offset": 2048, 00:10:15.110 "data_size": 63488 00:10:15.110 } 00:10:15.110 ] 00:10:15.110 }' 00:10:15.110 19:10:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:15.110 19:10:52 -- common/autotest_common.sh@10 -- # set +x 00:10:15.369 19:10:52 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:10:15.369 19:10:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:10:15.369 19:10:52 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:15.369 19:10:52 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:10:15.369 19:10:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:10:15.369 19:10:52 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:15.936 19:10:53 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:10:15.936 19:10:53 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:10:15.936 19:10:53 -- bdev/bdev_raid.sh@489 -- # i=2 00:10:15.936 19:10:53 -- bdev/bdev_raid.sh@490 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:15.936 [2024-02-14 19:10:53.264327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:15.936 [2024-02-14 19:10:53.264385] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.936 [2024-02-14 19:10:53.264412] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d928780 00:10:15.936 [2024-02-14 19:10:53.264420] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.936 [2024-02-14 19:10:53.264514] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.937 [2024-02-14 19:10:53.264523] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:15.937 [2024-02-14 19:10:53.264559] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:10:15.937 [2024-02-14 19:10:53.264565] bdev_raid.c:3239:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:15.937 [2024-02-14 19:10:53.264569] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.937 [2024-02-14 19:10:53.264575] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d928c80 name raid_bdev1, state configuring 00:10:15.937 [2024-02-14 19:10:53.264587] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:15.937 pt3 00:10:15.937 19:10:53 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:15.937 19:10:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:15.937 19:10:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:15.937 19:10:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:15.937 19:10:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:15.937 19:10:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:15.937 19:10:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:15.937 19:10:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:15.937 19:10:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:15.937 19:10:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:15.937 19:10:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.937 19:10:53 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.196 19:10:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:16.196 "name": "raid_bdev1", 00:10:16.196 "uuid": "bf42cdc7-cb6c-11ee-af6b-4feeebbbadda", 00:10:16.196 "strip_size_kb": 0, 00:10:16.196 "state": "configuring", 00:10:16.196 "raid_level": "raid1", 00:10:16.196 "superblock": true, 00:10:16.196 "num_base_bdevs": 3, 00:10:16.196 "num_base_bdevs_discovered": 1, 00:10:16.196 "num_base_bdevs_operational": 2, 00:10:16.196 "base_bdevs_list": [ 00:10:16.196 { 00:10:16.196 "name": null, 00:10:16.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.196 "is_configured": false, 00:10:16.196 "data_offset": 2048, 00:10:16.196 "data_size": 63488 00:10:16.196 }, 00:10:16.196 { 00:10:16.196 "name": null, 00:10:16.196 "uuid": "5b9f927c-316e-b45c-abe1-17e2aea47851", 00:10:16.196 "is_configured": false, 00:10:16.196 "data_offset": 2048, 00:10:16.196 "data_size": 63488 00:10:16.196 }, 00:10:16.196 { 00:10:16.196 "name": "pt3", 00:10:16.196 "uuid": "1906e7b4-6323-ad5f-bf2f-2798d163be79", 00:10:16.196 "is_configured": true, 00:10:16.196 "data_offset": 2048, 00:10:16.196 "data_size": 63488 00:10:16.196 } 00:10:16.196 ] 00:10:16.196 }' 00:10:16.196 19:10:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:16.196 19:10:53 -- common/autotest_common.sh@10 -- # set +x 00:10:16.454 19:10:53 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:10:16.454 19:10:53 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:10:16.454 19:10:53 -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.713 [2024-02-14 19:10:54.028327] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.713 [2024-02-14 19:10:54.028377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.713 [2024-02-14 19:10:54.028404] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d929400 00:10:16.713 [2024-02-14 19:10:54.028422] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.713 [2024-02-14 19:10:54.028514] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.713 [2024-02-14 19:10:54.028523] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.713 [2024-02-14 19:10:54.028541] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:16.713 [2024-02-14 19:10:54.028547] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.713 [2024-02-14 19:10:54.028569] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d928c80 00:10:16.713 [2024-02-14 19:10:54.028573] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:16.713 [2024-02-14 19:10:54.028605] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d98be20 00:10:16.713 [2024-02-14 19:10:54.028638] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d928c80 00:10:16.713 [2024-02-14 19:10:54.028641] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d928c80 00:10:16.713 [2024-02-14 19:10:54.028666] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.713 pt2 00:10:16.713 19:10:54 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:10:16.713 19:10:54 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:10:16.713 19:10:54 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:16.713 19:10:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:16.714 19:10:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:16.714 19:10:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:16.714 19:10:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:16.714 19:10:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:16.714 19:10:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:16.714 19:10:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:16.714 19:10:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:16.714 19:10:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:16.714 19:10:54 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.714 19:10:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.972 19:10:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:16.972 "name": "raid_bdev1", 00:10:16.972 "uuid": "bf42cdc7-cb6c-11ee-af6b-4feeebbbadda", 00:10:16.972 "strip_size_kb": 0, 00:10:16.972 "state": "online", 00:10:16.972 "raid_level": "raid1", 00:10:16.972 "superblock": true, 00:10:16.972 "num_base_bdevs": 3, 00:10:16.972 "num_base_bdevs_discovered": 2, 00:10:16.972 "num_base_bdevs_operational": 2, 00:10:16.972 "base_bdevs_list": [ 00:10:16.972 { 00:10:16.972 "name": null, 00:10:16.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.972 "is_configured": false, 00:10:16.972 "data_offset": 2048, 00:10:16.972 "data_size": 63488 00:10:16.972 
}, 00:10:16.972 { 00:10:16.972 "name": "pt2", 00:10:16.972 "uuid": "5b9f927c-316e-b45c-abe1-17e2aea47851", 00:10:16.972 "is_configured": true, 00:10:16.972 "data_offset": 2048, 00:10:16.972 "data_size": 63488 00:10:16.972 }, 00:10:16.972 { 00:10:16.972 "name": "pt3", 00:10:16.972 "uuid": "1906e7b4-6323-ad5f-bf2f-2798d163be79", 00:10:16.972 "is_configured": true, 00:10:16.972 "data_offset": 2048, 00:10:16.972 "data_size": 63488 00:10:16.972 } 00:10:16.972 ] 00:10:16.972 }' 00:10:16.972 19:10:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:16.972 19:10:54 -- common/autotest_common.sh@10 -- # set +x 00:10:17.231 19:10:54 -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:17.231 19:10:54 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:10:17.490 [2024-02-14 19:10:54.688454] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.490 19:10:54 -- bdev/bdev_raid.sh@506 -- # '[' bf42cdc7-cb6c-11ee-af6b-4feeebbbadda '!=' bf42cdc7-cb6c-11ee-af6b-4feeebbbadda ']' 00:10:17.490 19:10:54 -- bdev/bdev_raid.sh@511 -- # killprocess 52194 00:10:17.490 19:10:54 -- common/autotest_common.sh@924 -- # '[' -z 52194 ']' 00:10:17.490 19:10:54 -- common/autotest_common.sh@928 -- # kill -0 52194 00:10:17.490 19:10:54 -- common/autotest_common.sh@929 -- # uname 00:10:17.490 19:10:54 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:10:17.490 19:10:54 -- common/autotest_common.sh@932 -- # ps -c -o command 52194 00:10:17.490 19:10:54 -- common/autotest_common.sh@932 -- # tail -1 00:10:17.490 19:10:54 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:10:17.490 19:10:54 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:10:17.490 killing process with pid 52194 00:10:17.490 19:10:54 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 52194' 00:10:17.490 19:10:54 -- common/autotest_common.sh@943 -- # kill 52194 00:10:17.490 [2024-02-14 19:10:54.714110] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.490 [2024-02-14 19:10:54.714145] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.490 [2024-02-14 19:10:54.714165] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.490 [2024-02-14 19:10:54.714170] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d928c80 name raid_bdev1, state offline 00:10:17.490 19:10:54 -- common/autotest_common.sh@948 -- # wait 52194 00:10:17.490 [2024-02-14 19:10:54.741612] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.749 19:10:54 -- bdev/bdev_raid.sh@513 -- # return 0 00:10:17.749 00:10:17.749 real 0m15.635s 00:10:17.749 user 0m27.301s 00:10:17.749 sys 0m3.141s 00:10:17.749 19:10:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:17.749 19:10:54 -- common/autotest_common.sh@10 -- # set +x 00:10:17.749 ************************************ 00:10:17.749 END TEST raid_superblock_test 00:10:17.749 ************************************ 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:17.749 19:10:55 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:10:17.749 19:10:55 -- common/autotest_common.sh@1081 -- # 
xtrace_disable 00:10:17.749 19:10:55 -- common/autotest_common.sh@10 -- # set +x 00:10:17.749 ************************************ 00:10:17.749 START TEST raid_state_function_test 00:10:17.749 ************************************ 00:10:17.749 19:10:55 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid0 4 false 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:10:17.749 19:10:55 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:10:17.750 19:10:55 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:10:17.750 19:10:55 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:10:17.750 19:10:55 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:10:17.750 19:10:55 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:10:17.750 19:10:55 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:10:17.750 19:10:55 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:10:17.750 19:10:55 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:10:17.750 19:10:55 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:10:17.750 19:10:55 -- bdev/bdev_raid.sh@226 -- # raid_pid=52576 00:10:17.750 Process raid pid: 52576 00:10:17.750 19:10:55 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52576' 00:10:17.750 19:10:55 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52576 /var/tmp/spdk-raid.sock 00:10:17.750 19:10:55 -- common/autotest_common.sh@817 -- # '[' -z 52576 ']' 00:10:17.750 19:10:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:17.750 19:10:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:17.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:17.750 19:10:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:10:17.750 19:10:55 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:17.750 19:10:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:17.750 19:10:55 -- common/autotest_common.sh@10 -- # set +x 00:10:17.750 [2024-02-14 19:10:55.044604] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:10:17.750 [2024-02-14 19:10:55.044893] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:18.320 EAL: TSC is not safe to use in SMP mode 00:10:18.320 EAL: TSC is not invariant 00:10:18.320 [2024-02-14 19:10:55.520070] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.320 [2024-02-14 19:10:55.636851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.320 [2024-02-14 19:10:55.637372] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.320 [2024-02-14 19:10:55.637383] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.578 19:10:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:18.578 19:10:55 -- common/autotest_common.sh@850 -- # return 0 00:10:18.578 19:10:55 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:18.836 [2024-02-14 19:10:56.084441] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.836 [2024-02-14 19:10:56.084510] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.836 [2024-02-14 19:10:56.084515] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.836 [2024-02-14 19:10:56.084524] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.836 [2024-02-14 19:10:56.084527] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:18.836 [2024-02-14 19:10:56.084534] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:18.836 [2024-02-14 19:10:56.084537] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:18.836 [2024-02-14 19:10:56.084545] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:18.836 19:10:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.836 19:10:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:18.836 19:10:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:18.836 19:10:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:18.836 19:10:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:18.836 19:10:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:18.836 19:10:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:18.836 19:10:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:18.836 19:10:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:18.836 19:10:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:18.836 19:10:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.836 19:10:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:19.095 19:10:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:19.095 "name": "Existed_Raid", 00:10:19.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.095 "strip_size_kb": 64, 00:10:19.095 "state": "configuring", 00:10:19.095 "raid_level": "raid0", 00:10:19.095 "superblock": false, 00:10:19.095 "num_base_bdevs": 4, 00:10:19.095 "num_base_bdevs_discovered": 0, 00:10:19.095 "num_base_bdevs_operational": 4, 00:10:19.095 "base_bdevs_list": [ 00:10:19.095 { 00:10:19.095 "name": "BaseBdev1", 00:10:19.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.095 "is_configured": false, 00:10:19.095 "data_offset": 0, 00:10:19.095 "data_size": 0 00:10:19.095 }, 00:10:19.095 { 00:10:19.095 "name": "BaseBdev2", 00:10:19.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.095 "is_configured": false, 00:10:19.095 "data_offset": 0, 00:10:19.095 "data_size": 0 00:10:19.095 }, 00:10:19.095 { 00:10:19.095 "name": "BaseBdev3", 00:10:19.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.095 "is_configured": false, 00:10:19.095 "data_offset": 0, 00:10:19.095 "data_size": 0 00:10:19.095 }, 00:10:19.095 { 00:10:19.095 "name": "BaseBdev4", 00:10:19.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.095 "is_configured": false, 00:10:19.095 "data_offset": 0, 00:10:19.095 "data_size": 0 00:10:19.095 } 00:10:19.095 ] 00:10:19.095 }' 00:10:19.095 19:10:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:19.095 19:10:56 -- common/autotest_common.sh@10 -- # set +x 00:10:19.353 19:10:56 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:19.612 [2024-02-14 19:10:56.868422] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.612 [2024-02-14 19:10:56.868450] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b688500 name Existed_Raid, state configuring 00:10:19.612 19:10:56 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:19.871 [2024-02-14 19:10:57.100477] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:19.871 [2024-02-14 19:10:57.100538] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:19.871 [2024-02-14 19:10:57.100543] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:19.871 [2024-02-14 19:10:57.100551] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:19.871 [2024-02-14 19:10:57.100555] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:19.871 [2024-02-14 19:10:57.100563] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:19.871 [2024-02-14 19:10:57.100566] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:19.871 [2024-02-14 19:10:57.100573] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:19.871 19:10:57 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:20.130 [2024-02-14 19:10:57.337789] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:20.130 
BaseBdev1 00:10:20.130 19:10:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:10:20.130 19:10:57 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:10:20.130 19:10:57 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:20.130 19:10:57 -- common/autotest_common.sh@887 -- # local i 00:10:20.130 19:10:57 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:20.130 19:10:57 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:20.130 19:10:57 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:20.389 19:10:57 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:20.389 [ 00:10:20.389 { 00:10:20.389 "name": "BaseBdev1", 00:10:20.389 "aliases": [ 00:10:20.389 "c85ace61-cb6c-11ee-af6b-4feeebbbadda" 00:10:20.389 ], 00:10:20.389 "product_name": "Malloc disk", 00:10:20.389 "block_size": 512, 00:10:20.389 "num_blocks": 65536, 00:10:20.389 "uuid": "c85ace61-cb6c-11ee-af6b-4feeebbbadda", 00:10:20.389 "assigned_rate_limits": { 00:10:20.389 "rw_ios_per_sec": 0, 00:10:20.389 "rw_mbytes_per_sec": 0, 00:10:20.389 "r_mbytes_per_sec": 0, 00:10:20.389 "w_mbytes_per_sec": 0 00:10:20.389 }, 00:10:20.389 "claimed": true, 00:10:20.389 "claim_type": "exclusive_write", 00:10:20.389 "zoned": false, 00:10:20.389 "supported_io_types": { 00:10:20.389 "read": true, 00:10:20.389 "write": true, 00:10:20.389 "unmap": true, 00:10:20.389 "write_zeroes": true, 00:10:20.389 "flush": true, 00:10:20.389 "reset": true, 00:10:20.389 "compare": false, 00:10:20.389 "compare_and_write": false, 00:10:20.389 "abort": true, 00:10:20.389 "nvme_admin": false, 00:10:20.389 "nvme_io": false 00:10:20.389 }, 00:10:20.389 "memory_domains": [ 00:10:20.389 { 00:10:20.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.389 "dma_device_type": 2 00:10:20.389 } 00:10:20.389 ], 00:10:20.389 "driver_specific": {} 00:10:20.389 } 00:10:20.389 ] 00:10:20.389 19:10:57 -- common/autotest_common.sh@893 -- # return 0 00:10:20.389 19:10:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:20.389 19:10:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:20.389 19:10:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:20.389 19:10:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:20.389 19:10:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:20.389 19:10:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:20.389 19:10:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:20.389 19:10:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:20.389 19:10:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:20.389 19:10:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:20.389 19:10:57 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.389 19:10:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.648 19:10:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:20.648 "name": "Existed_Raid", 00:10:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.648 "strip_size_kb": 64, 00:10:20.648 "state": "configuring", 00:10:20.648 "raid_level": "raid0", 00:10:20.648 "superblock": false, 00:10:20.648 "num_base_bdevs": 4, 00:10:20.648 
"num_base_bdevs_discovered": 1, 00:10:20.648 "num_base_bdevs_operational": 4, 00:10:20.648 "base_bdevs_list": [ 00:10:20.648 { 00:10:20.648 "name": "BaseBdev1", 00:10:20.648 "uuid": "c85ace61-cb6c-11ee-af6b-4feeebbbadda", 00:10:20.648 "is_configured": true, 00:10:20.648 "data_offset": 0, 00:10:20.648 "data_size": 65536 00:10:20.648 }, 00:10:20.648 { 00:10:20.648 "name": "BaseBdev2", 00:10:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.648 "is_configured": false, 00:10:20.648 "data_offset": 0, 00:10:20.648 "data_size": 0 00:10:20.648 }, 00:10:20.648 { 00:10:20.648 "name": "BaseBdev3", 00:10:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.648 "is_configured": false, 00:10:20.648 "data_offset": 0, 00:10:20.648 "data_size": 0 00:10:20.648 }, 00:10:20.648 { 00:10:20.648 "name": "BaseBdev4", 00:10:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.648 "is_configured": false, 00:10:20.648 "data_offset": 0, 00:10:20.648 "data_size": 0 00:10:20.648 } 00:10:20.648 ] 00:10:20.648 }' 00:10:20.648 19:10:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:20.648 19:10:57 -- common/autotest_common.sh@10 -- # set +x 00:10:20.906 19:10:58 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:21.163 [2024-02-14 19:10:58.436519] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.163 [2024-02-14 19:10:58.436566] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b688500 name Existed_Raid, state configuring 00:10:21.163 19:10:58 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:10:21.163 19:10:58 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:21.422 [2024-02-14 19:10:58.608559] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.422 [2024-02-14 19:10:58.609673] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.422 [2024-02-14 19:10:58.609723] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.422 [2024-02-14 19:10:58.609727] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:21.422 [2024-02-14 19:10:58.609736] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:21.422 [2024-02-14 19:10:58.609740] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:21.422 [2024-02-14 19:10:58.609747] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:21.422 19:10:58 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:21.422 "name": "Existed_Raid", 00:10:21.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.422 "strip_size_kb": 64, 00:10:21.422 "state": "configuring", 00:10:21.422 "raid_level": "raid0", 00:10:21.422 "superblock": false, 00:10:21.422 "num_base_bdevs": 4, 00:10:21.422 "num_base_bdevs_discovered": 1, 00:10:21.422 "num_base_bdevs_operational": 4, 00:10:21.422 "base_bdevs_list": [ 00:10:21.422 { 00:10:21.422 "name": "BaseBdev1", 00:10:21.422 "uuid": "c85ace61-cb6c-11ee-af6b-4feeebbbadda", 00:10:21.422 "is_configured": true, 00:10:21.422 "data_offset": 0, 00:10:21.422 "data_size": 65536 00:10:21.422 }, 00:10:21.422 { 00:10:21.422 "name": "BaseBdev2", 00:10:21.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.422 "is_configured": false, 00:10:21.422 "data_offset": 0, 00:10:21.422 "data_size": 0 00:10:21.422 }, 00:10:21.422 { 00:10:21.422 "name": "BaseBdev3", 00:10:21.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.422 "is_configured": false, 00:10:21.422 "data_offset": 0, 00:10:21.422 "data_size": 0 00:10:21.422 }, 00:10:21.422 { 00:10:21.422 "name": "BaseBdev4", 00:10:21.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.422 "is_configured": false, 00:10:21.422 "data_offset": 0, 00:10:21.422 "data_size": 0 00:10:21.422 } 00:10:21.422 ] 00:10:21.422 }' 00:10:21.422 19:10:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:21.422 19:10:58 -- common/autotest_common.sh@10 -- # set +x 00:10:21.680 19:10:59 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:21.938 [2024-02-14 19:10:59.240798] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.938 BaseBdev2 00:10:21.938 19:10:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:10:21.938 19:10:59 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:10:21.938 19:10:59 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:21.938 19:10:59 -- common/autotest_common.sh@887 -- # local i 00:10:21.938 19:10:59 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:21.938 19:10:59 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:21.938 19:10:59 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:22.195 19:10:59 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:22.453 [ 00:10:22.453 { 00:10:22.453 "name": "BaseBdev2", 00:10:22.453 "aliases": [ 00:10:22.453 "c97d5a23-cb6c-11ee-af6b-4feeebbbadda" 00:10:22.453 ], 00:10:22.453 "product_name": "Malloc disk", 00:10:22.453 "block_size": 512, 00:10:22.453 "num_blocks": 65536, 00:10:22.453 "uuid": "c97d5a23-cb6c-11ee-af6b-4feeebbbadda", 00:10:22.453 "assigned_rate_limits": { 00:10:22.453 "rw_ios_per_sec": 0, 00:10:22.453 "rw_mbytes_per_sec": 0, 00:10:22.453 "r_mbytes_per_sec": 0, 00:10:22.453 
"w_mbytes_per_sec": 0 00:10:22.453 }, 00:10:22.453 "claimed": true, 00:10:22.453 "claim_type": "exclusive_write", 00:10:22.453 "zoned": false, 00:10:22.453 "supported_io_types": { 00:10:22.453 "read": true, 00:10:22.453 "write": true, 00:10:22.453 "unmap": true, 00:10:22.453 "write_zeroes": true, 00:10:22.453 "flush": true, 00:10:22.453 "reset": true, 00:10:22.453 "compare": false, 00:10:22.453 "compare_and_write": false, 00:10:22.453 "abort": true, 00:10:22.453 "nvme_admin": false, 00:10:22.453 "nvme_io": false 00:10:22.453 }, 00:10:22.453 "memory_domains": [ 00:10:22.453 { 00:10:22.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.453 "dma_device_type": 2 00:10:22.453 } 00:10:22.453 ], 00:10:22.453 "driver_specific": {} 00:10:22.453 } 00:10:22.453 ] 00:10:22.453 19:10:59 -- common/autotest_common.sh@893 -- # return 0 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:22.453 19:10:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.454 19:10:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:22.454 "name": "Existed_Raid", 00:10:22.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.454 "strip_size_kb": 64, 00:10:22.454 "state": "configuring", 00:10:22.454 "raid_level": "raid0", 00:10:22.454 "superblock": false, 00:10:22.454 "num_base_bdevs": 4, 00:10:22.454 "num_base_bdevs_discovered": 2, 00:10:22.454 "num_base_bdevs_operational": 4, 00:10:22.454 "base_bdevs_list": [ 00:10:22.454 { 00:10:22.454 "name": "BaseBdev1", 00:10:22.454 "uuid": "c85ace61-cb6c-11ee-af6b-4feeebbbadda", 00:10:22.454 "is_configured": true, 00:10:22.454 "data_offset": 0, 00:10:22.454 "data_size": 65536 00:10:22.454 }, 00:10:22.454 { 00:10:22.454 "name": "BaseBdev2", 00:10:22.454 "uuid": "c97d5a23-cb6c-11ee-af6b-4feeebbbadda", 00:10:22.454 "is_configured": true, 00:10:22.454 "data_offset": 0, 00:10:22.454 "data_size": 65536 00:10:22.454 }, 00:10:22.454 { 00:10:22.454 "name": "BaseBdev3", 00:10:22.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.454 "is_configured": false, 00:10:22.454 "data_offset": 0, 00:10:22.454 "data_size": 0 00:10:22.454 }, 00:10:22.454 { 00:10:22.454 "name": "BaseBdev4", 00:10:22.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.454 "is_configured": false, 00:10:22.454 "data_offset": 0, 00:10:22.454 "data_size": 0 00:10:22.454 } 00:10:22.454 ] 00:10:22.454 }' 00:10:22.454 19:10:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:22.454 19:10:59 -- common/autotest_common.sh@10 -- # set 
+x 00:10:23.017 19:11:00 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:23.017 [2024-02-14 19:11:00.400853] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.017 BaseBdev3 00:10:23.017 19:11:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:10:23.017 19:11:00 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:10:23.017 19:11:00 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:23.018 19:11:00 -- common/autotest_common.sh@887 -- # local i 00:10:23.018 19:11:00 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:23.018 19:11:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:23.018 19:11:00 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:23.275 19:11:00 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:23.841 [ 00:10:23.841 { 00:10:23.841 "name": "BaseBdev3", 00:10:23.841 "aliases": [ 00:10:23.841 "ca2e5ecd-cb6c-11ee-af6b-4feeebbbadda" 00:10:23.841 ], 00:10:23.841 "product_name": "Malloc disk", 00:10:23.841 "block_size": 512, 00:10:23.841 "num_blocks": 65536, 00:10:23.841 "uuid": "ca2e5ecd-cb6c-11ee-af6b-4feeebbbadda", 00:10:23.841 "assigned_rate_limits": { 00:10:23.841 "rw_ios_per_sec": 0, 00:10:23.841 "rw_mbytes_per_sec": 0, 00:10:23.841 "r_mbytes_per_sec": 0, 00:10:23.841 "w_mbytes_per_sec": 0 00:10:23.841 }, 00:10:23.841 "claimed": true, 00:10:23.841 "claim_type": "exclusive_write", 00:10:23.841 "zoned": false, 00:10:23.841 "supported_io_types": { 00:10:23.841 "read": true, 00:10:23.841 "write": true, 00:10:23.841 "unmap": true, 00:10:23.841 "write_zeroes": true, 00:10:23.841 "flush": true, 00:10:23.841 "reset": true, 00:10:23.841 "compare": false, 00:10:23.841 "compare_and_write": false, 00:10:23.841 "abort": true, 00:10:23.841 "nvme_admin": false, 00:10:23.841 "nvme_io": false 00:10:23.841 }, 00:10:23.841 "memory_domains": [ 00:10:23.841 { 00:10:23.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.841 "dma_device_type": 2 00:10:23.841 } 00:10:23.841 ], 00:10:23.841 "driver_specific": {} 00:10:23.841 } 00:10:23.841 ] 00:10:23.841 19:11:00 -- common/autotest_common.sh@893 -- # return 0 00:10:23.841 19:11:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:23.841 19:11:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:23.841 19:11:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:23.841 19:11:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:23.841 19:11:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:23.841 19:11:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:23.841 19:11:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:23.841 19:11:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:23.841 19:11:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:23.841 19:11:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:23.841 19:11:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:23.841 19:11:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:23.841 19:11:00 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.841 19:11:00 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.841 19:11:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:23.841 "name": "Existed_Raid", 00:10:23.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.841 "strip_size_kb": 64, 00:10:23.841 "state": "configuring", 00:10:23.841 "raid_level": "raid0", 00:10:23.841 "superblock": false, 00:10:23.841 "num_base_bdevs": 4, 00:10:23.841 "num_base_bdevs_discovered": 3, 00:10:23.841 "num_base_bdevs_operational": 4, 00:10:23.841 "base_bdevs_list": [ 00:10:23.841 { 00:10:23.841 "name": "BaseBdev1", 00:10:23.841 "uuid": "c85ace61-cb6c-11ee-af6b-4feeebbbadda", 00:10:23.841 "is_configured": true, 00:10:23.841 "data_offset": 0, 00:10:23.841 "data_size": 65536 00:10:23.841 }, 00:10:23.841 { 00:10:23.841 "name": "BaseBdev2", 00:10:23.841 "uuid": "c97d5a23-cb6c-11ee-af6b-4feeebbbadda", 00:10:23.841 "is_configured": true, 00:10:23.841 "data_offset": 0, 00:10:23.841 "data_size": 65536 00:10:23.841 }, 00:10:23.841 { 00:10:23.841 "name": "BaseBdev3", 00:10:23.841 "uuid": "ca2e5ecd-cb6c-11ee-af6b-4feeebbbadda", 00:10:23.841 "is_configured": true, 00:10:23.841 "data_offset": 0, 00:10:23.841 "data_size": 65536 00:10:23.841 }, 00:10:23.841 { 00:10:23.841 "name": "BaseBdev4", 00:10:23.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.841 "is_configured": false, 00:10:23.841 "data_offset": 0, 00:10:23.841 "data_size": 0 00:10:23.841 } 00:10:23.841 ] 00:10:23.841 }' 00:10:23.841 19:11:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:23.842 19:11:01 -- common/autotest_common.sh@10 -- # set +x 00:10:24.099 19:11:01 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:10:24.357 [2024-02-14 19:11:01.624947] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:24.357 [2024-02-14 19:11:01.624980] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b688a00 00:10:24.357 [2024-02-14 19:11:01.624985] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:24.357 [2024-02-14 19:11:01.625009] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b6ebec0 00:10:24.357 [2024-02-14 19:11:01.625155] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b688a00 00:10:24.357 [2024-02-14 19:11:01.625162] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b688a00 00:10:24.357 [2024-02-14 19:11:01.625213] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.357 BaseBdev4 00:10:24.357 19:11:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:10:24.357 19:11:01 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:10:24.357 19:11:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:24.357 19:11:01 -- common/autotest_common.sh@887 -- # local i 00:10:24.357 19:11:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:24.357 19:11:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:24.357 19:11:01 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:24.615 19:11:01 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:24.872 [ 00:10:24.872 { 00:10:24.872 "name": "BaseBdev4", 00:10:24.872 
"aliases": [ 00:10:24.872 "cae92659-cb6c-11ee-af6b-4feeebbbadda" 00:10:24.872 ], 00:10:24.872 "product_name": "Malloc disk", 00:10:24.872 "block_size": 512, 00:10:24.872 "num_blocks": 65536, 00:10:24.872 "uuid": "cae92659-cb6c-11ee-af6b-4feeebbbadda", 00:10:24.872 "assigned_rate_limits": { 00:10:24.872 "rw_ios_per_sec": 0, 00:10:24.872 "rw_mbytes_per_sec": 0, 00:10:24.872 "r_mbytes_per_sec": 0, 00:10:24.872 "w_mbytes_per_sec": 0 00:10:24.872 }, 00:10:24.872 "claimed": true, 00:10:24.872 "claim_type": "exclusive_write", 00:10:24.872 "zoned": false, 00:10:24.872 "supported_io_types": { 00:10:24.872 "read": true, 00:10:24.872 "write": true, 00:10:24.872 "unmap": true, 00:10:24.872 "write_zeroes": true, 00:10:24.872 "flush": true, 00:10:24.872 "reset": true, 00:10:24.872 "compare": false, 00:10:24.872 "compare_and_write": false, 00:10:24.872 "abort": true, 00:10:24.872 "nvme_admin": false, 00:10:24.872 "nvme_io": false 00:10:24.872 }, 00:10:24.872 "memory_domains": [ 00:10:24.872 { 00:10:24.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.872 "dma_device_type": 2 00:10:24.872 } 00:10:24.872 ], 00:10:24.872 "driver_specific": {} 00:10:24.872 } 00:10:24.872 ] 00:10:24.872 19:11:02 -- common/autotest_common.sh@893 -- # return 0 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.872 19:11:02 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.129 19:11:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:25.129 "name": "Existed_Raid", 00:10:25.129 "uuid": "cae92e3b-cb6c-11ee-af6b-4feeebbbadda", 00:10:25.129 "strip_size_kb": 64, 00:10:25.129 "state": "online", 00:10:25.129 "raid_level": "raid0", 00:10:25.129 "superblock": false, 00:10:25.129 "num_base_bdevs": 4, 00:10:25.129 "num_base_bdevs_discovered": 4, 00:10:25.129 "num_base_bdevs_operational": 4, 00:10:25.129 "base_bdevs_list": [ 00:10:25.129 { 00:10:25.129 "name": "BaseBdev1", 00:10:25.130 "uuid": "c85ace61-cb6c-11ee-af6b-4feeebbbadda", 00:10:25.130 "is_configured": true, 00:10:25.130 "data_offset": 0, 00:10:25.130 "data_size": 65536 00:10:25.130 }, 00:10:25.130 { 00:10:25.130 "name": "BaseBdev2", 00:10:25.130 "uuid": "c97d5a23-cb6c-11ee-af6b-4feeebbbadda", 00:10:25.130 "is_configured": true, 00:10:25.130 "data_offset": 0, 00:10:25.130 "data_size": 65536 00:10:25.130 }, 00:10:25.130 { 00:10:25.130 "name": "BaseBdev3", 00:10:25.130 "uuid": "ca2e5ecd-cb6c-11ee-af6b-4feeebbbadda", 00:10:25.130 "is_configured": true, 00:10:25.130 "data_offset": 0, 00:10:25.130 "data_size": 65536 
00:10:25.130 }, 00:10:25.130 { 00:10:25.130 "name": "BaseBdev4", 00:10:25.130 "uuid": "cae92659-cb6c-11ee-af6b-4feeebbbadda", 00:10:25.130 "is_configured": true, 00:10:25.130 "data_offset": 0, 00:10:25.130 "data_size": 65536 00:10:25.130 } 00:10:25.130 ] 00:10:25.130 }' 00:10:25.130 19:11:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:25.130 19:11:02 -- common/autotest_common.sh@10 -- # set +x 00:10:25.388 19:11:02 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:25.645 [2024-02-14 19:11:02.812789] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.645 [2024-02-14 19:11:02.812814] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.645 [2024-02-14 19:11:02.812829] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.645 19:11:02 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:25.645 19:11:02 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:10:25.645 19:11:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:25.645 19:11:02 -- bdev/bdev_raid.sh@197 -- # return 1 00:10:25.645 19:11:02 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:10:25.646 19:11:02 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:25.646 19:11:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:25.646 19:11:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:10:25.646 19:11:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:25.646 19:11:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:25.646 19:11:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:25.646 19:11:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:25.646 19:11:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:25.646 19:11:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:25.646 19:11:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:25.646 19:11:02 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.646 19:11:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.903 19:11:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:25.903 "name": "Existed_Raid", 00:10:25.903 "uuid": "cae92e3b-cb6c-11ee-af6b-4feeebbbadda", 00:10:25.903 "strip_size_kb": 64, 00:10:25.903 "state": "offline", 00:10:25.903 "raid_level": "raid0", 00:10:25.903 "superblock": false, 00:10:25.903 "num_base_bdevs": 4, 00:10:25.903 "num_base_bdevs_discovered": 3, 00:10:25.903 "num_base_bdevs_operational": 3, 00:10:25.903 "base_bdevs_list": [ 00:10:25.903 { 00:10:25.903 "name": null, 00:10:25.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.903 "is_configured": false, 00:10:25.903 "data_offset": 0, 00:10:25.903 "data_size": 65536 00:10:25.903 }, 00:10:25.903 { 00:10:25.903 "name": "BaseBdev2", 00:10:25.903 "uuid": "c97d5a23-cb6c-11ee-af6b-4feeebbbadda", 00:10:25.903 "is_configured": true, 00:10:25.903 "data_offset": 0, 00:10:25.903 "data_size": 65536 00:10:25.903 }, 00:10:25.903 { 00:10:25.904 "name": "BaseBdev3", 00:10:25.904 "uuid": "ca2e5ecd-cb6c-11ee-af6b-4feeebbbadda", 00:10:25.904 "is_configured": true, 00:10:25.904 "data_offset": 0, 00:10:25.904 "data_size": 65536 00:10:25.904 }, 00:10:25.904 { 00:10:25.904 "name": "BaseBdev4", 00:10:25.904 "uuid": "cae92659-cb6c-11ee-af6b-4feeebbbadda", 
00:10:25.904 "is_configured": true, 00:10:25.904 "data_offset": 0, 00:10:25.904 "data_size": 65536 00:10:25.904 } 00:10:25.904 ] 00:10:25.904 }' 00:10:25.904 19:11:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:25.904 19:11:03 -- common/autotest_common.sh@10 -- # set +x 00:10:26.162 19:11:03 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:10:26.162 19:11:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:26.162 19:11:03 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:26.162 19:11:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:26.420 19:11:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:26.420 19:11:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.420 19:11:03 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:26.678 [2024-02-14 19:11:03.978000] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:26.678 19:11:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:26.678 19:11:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:26.678 19:11:04 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:26.678 19:11:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:26.935 19:11:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:26.935 19:11:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:26.935 19:11:04 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:27.192 [2024-02-14 19:11:04.455336] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:27.193 19:11:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:27.193 19:11:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:27.193 19:11:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:27.193 19:11:04 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.450 19:11:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:27.450 19:11:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:27.450 19:11:04 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:10:27.450 [2024-02-14 19:11:04.864638] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:27.450 [2024-02-14 19:11:04.864668] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b688a00 name Existed_Raid, state offline 00:10:27.708 19:11:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:27.708 19:11:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:27.708 19:11:04 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:10:27.708 19:11:04 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.967 19:11:05 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:10:27.967 19:11:05 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:10:27.967 19:11:05 -- bdev/bdev_raid.sh@287 -- # killprocess 52576 00:10:27.967 19:11:05 -- common/autotest_common.sh@924 -- # '[' -z 52576 ']' 00:10:27.967 19:11:05 -- common/autotest_common.sh@928 -- # kill -0 52576 00:10:27.967 19:11:05 
-- common/autotest_common.sh@929 -- # uname 00:10:27.967 19:11:05 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:10:27.967 19:11:05 -- common/autotest_common.sh@932 -- # ps -c -o command 52576 00:10:27.967 19:11:05 -- common/autotest_common.sh@932 -- # tail -1 00:10:27.967 19:11:05 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:10:27.967 19:11:05 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:10:27.967 killing process with pid 52576 00:10:27.967 19:11:05 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 52576' 00:10:27.967 19:11:05 -- common/autotest_common.sh@943 -- # kill 52576 00:10:27.967 [2024-02-14 19:11:05.174577] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.967 [2024-02-14 19:11:05.174651] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.967 19:11:05 -- common/autotest_common.sh@948 -- # wait 52576 00:10:28.225 19:11:05 -- bdev/bdev_raid.sh@289 -- # return 0 00:10:28.225 00:10:28.226 real 0m10.410s 00:10:28.226 user 0m18.287s 00:10:28.226 sys 0m1.670s 00:10:28.226 19:11:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:28.226 ************************************ 00:10:28.226 END TEST raid_state_function_test 00:10:28.226 ************************************ 00:10:28.226 19:11:05 -- common/autotest_common.sh@10 -- # set +x 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:10:28.226 19:11:05 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:10:28.226 19:11:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:28.226 19:11:05 -- common/autotest_common.sh@10 -- # set +x 00:10:28.226 ************************************ 00:10:28.226 START TEST raid_state_function_test_sb 00:10:28.226 ************************************ 00:10:28.226 19:11:05 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid0 4 true 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:10:28.226 19:11:05 -- 
bdev/bdev_raid.sh@208 -- # local strip_size 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@226 -- # raid_pid=52846 00:10:28.226 Process raid pid: 52846 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52846' 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52846 /var/tmp/spdk-raid.sock 00:10:28.226 19:11:05 -- common/autotest_common.sh@817 -- # '[' -z 52846 ']' 00:10:28.226 19:11:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:28.226 19:11:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:28.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:28.226 19:11:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:28.226 19:11:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:28.226 19:11:05 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:28.226 19:11:05 -- common/autotest_common.sh@10 -- # set +x 00:10:28.226 [2024-02-14 19:11:05.501755] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
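For readers reconstructing the flow from the trace, the superblock variant started above drives the same RPC sequence as the plain state-function test, only with -s passed to bdev_raid_create. A minimal sketch of that sequence, assuming the stock SPDK rpc.py and bdev_svc binaries shown in this log (the loop and backgrounding are a condensation of what the real script does with waitforlisten and per-bdev helpers, not its exact order):

  # start the lightweight bdev service on a private RPC socket, with raid debug logging
  test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  # create four 32 MiB malloc base bdevs with 512-byte blocks
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "$b"
  done
  # assemble them into a raid0 array with a 64 KiB strip and an on-disk superblock (-s)
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # the array reports "configuring" until every base bdev is claimed, then "online"
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")'
  # raid0 has no redundancy, so deleting any base bdev drops the state to "offline"
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1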
00:10:28.226 [2024-02-14 19:11:05.502052] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:28.793 EAL: TSC is not safe to use in SMP mode 00:10:28.793 EAL: TSC is not invariant 00:10:28.793 [2024-02-14 19:11:05.960131] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.793 [2024-02-14 19:11:06.088677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.793 [2024-02-14 19:11:06.089341] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.793 [2024-02-14 19:11:06.089354] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.361 19:11:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:29.361 19:11:06 -- common/autotest_common.sh@850 -- # return 0 00:10:29.361 19:11:06 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:29.361 [2024-02-14 19:11:06.745891] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:29.361 [2024-02-14 19:11:06.745952] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:29.361 [2024-02-14 19:11:06.745957] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:29.361 [2024-02-14 19:11:06.745966] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:29.361 [2024-02-14 19:11:06.745969] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:29.361 [2024-02-14 19:11:06.745976] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:29.361 [2024-02-14 19:11:06.745979] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:29.361 [2024-02-14 19:11:06.745986] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:29.361 19:11:06 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:29.361 19:11:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:29.361 19:11:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:29.361 19:11:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:29.361 19:11:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:29.361 19:11:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:29.361 19:11:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:29.361 19:11:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:29.361 19:11:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:29.361 19:11:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:29.361 19:11:06 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.361 19:11:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.619 19:11:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:29.619 "name": "Existed_Raid", 00:10:29.619 "uuid": "cdf6910e-cb6c-11ee-af6b-4feeebbbadda", 00:10:29.619 "strip_size_kb": 64, 00:10:29.619 "state": "configuring", 00:10:29.619 "raid_level": "raid0", 00:10:29.619 "superblock": true, 00:10:29.619 "num_base_bdevs": 4, 00:10:29.619 "num_base_bdevs_discovered": 
0, 00:10:29.619 "num_base_bdevs_operational": 4, 00:10:29.619 "base_bdevs_list": [ 00:10:29.619 { 00:10:29.619 "name": "BaseBdev1", 00:10:29.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.619 "is_configured": false, 00:10:29.619 "data_offset": 0, 00:10:29.619 "data_size": 0 00:10:29.619 }, 00:10:29.619 { 00:10:29.619 "name": "BaseBdev2", 00:10:29.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.619 "is_configured": false, 00:10:29.619 "data_offset": 0, 00:10:29.619 "data_size": 0 00:10:29.619 }, 00:10:29.619 { 00:10:29.619 "name": "BaseBdev3", 00:10:29.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.619 "is_configured": false, 00:10:29.619 "data_offset": 0, 00:10:29.619 "data_size": 0 00:10:29.619 }, 00:10:29.619 { 00:10:29.619 "name": "BaseBdev4", 00:10:29.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.619 "is_configured": false, 00:10:29.619 "data_offset": 0, 00:10:29.619 "data_size": 0 00:10:29.619 } 00:10:29.619 ] 00:10:29.619 }' 00:10:29.619 19:11:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:29.620 19:11:06 -- common/autotest_common.sh@10 -- # set +x 00:10:29.888 19:11:07 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:30.161 [2024-02-14 19:11:07.397875] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.161 [2024-02-14 19:11:07.397903] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be28500 name Existed_Raid, state configuring 00:10:30.161 19:11:07 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:30.419 [2024-02-14 19:11:07.637901] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.420 [2024-02-14 19:11:07.637952] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.420 [2024-02-14 19:11:07.637956] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.420 [2024-02-14 19:11:07.637964] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.420 [2024-02-14 19:11:07.637968] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:30.420 [2024-02-14 19:11:07.637975] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:30.420 [2024-02-14 19:11:07.637978] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:30.420 [2024-02-14 19:11:07.637985] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:30.420 19:11:07 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.420 [2024-02-14 19:11:07.815208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.420 BaseBdev1 00:10:30.420 19:11:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:10:30.420 19:11:07 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:10:30.420 19:11:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:30.420 19:11:07 -- common/autotest_common.sh@887 -- # local i 00:10:30.420 19:11:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:30.420 19:11:07 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:30.420 19:11:07 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:30.679 19:11:08 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.937 [ 00:10:30.937 { 00:10:30.937 "name": "BaseBdev1", 00:10:30.937 "aliases": [ 00:10:30.937 "ce998827-cb6c-11ee-af6b-4feeebbbadda" 00:10:30.937 ], 00:10:30.937 "product_name": "Malloc disk", 00:10:30.937 "block_size": 512, 00:10:30.937 "num_blocks": 65536, 00:10:30.937 "uuid": "ce998827-cb6c-11ee-af6b-4feeebbbadda", 00:10:30.937 "assigned_rate_limits": { 00:10:30.937 "rw_ios_per_sec": 0, 00:10:30.937 "rw_mbytes_per_sec": 0, 00:10:30.937 "r_mbytes_per_sec": 0, 00:10:30.937 "w_mbytes_per_sec": 0 00:10:30.937 }, 00:10:30.937 "claimed": true, 00:10:30.937 "claim_type": "exclusive_write", 00:10:30.937 "zoned": false, 00:10:30.937 "supported_io_types": { 00:10:30.937 "read": true, 00:10:30.937 "write": true, 00:10:30.937 "unmap": true, 00:10:30.937 "write_zeroes": true, 00:10:30.937 "flush": true, 00:10:30.937 "reset": true, 00:10:30.937 "compare": false, 00:10:30.937 "compare_and_write": false, 00:10:30.937 "abort": true, 00:10:30.937 "nvme_admin": false, 00:10:30.937 "nvme_io": false 00:10:30.937 }, 00:10:30.937 "memory_domains": [ 00:10:30.937 { 00:10:30.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.937 "dma_device_type": 2 00:10:30.937 } 00:10:30.937 ], 00:10:30.937 "driver_specific": {} 00:10:30.937 } 00:10:30.937 ] 00:10:30.937 19:11:08 -- common/autotest_common.sh@893 -- # return 0 00:10:30.937 19:11:08 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.937 19:11:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:30.937 19:11:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:30.937 19:11:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:30.937 19:11:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:30.938 19:11:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:30.938 19:11:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:30.938 19:11:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:30.938 19:11:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:30.938 19:11:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:30.938 19:11:08 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.938 19:11:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.197 19:11:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:31.197 "name": "Existed_Raid", 00:10:31.197 "uuid": "ce7ead4b-cb6c-11ee-af6b-4feeebbbadda", 00:10:31.197 "strip_size_kb": 64, 00:10:31.197 "state": "configuring", 00:10:31.197 "raid_level": "raid0", 00:10:31.197 "superblock": true, 00:10:31.197 "num_base_bdevs": 4, 00:10:31.197 "num_base_bdevs_discovered": 1, 00:10:31.197 "num_base_bdevs_operational": 4, 00:10:31.197 "base_bdevs_list": [ 00:10:31.197 { 00:10:31.197 "name": "BaseBdev1", 00:10:31.197 "uuid": "ce998827-cb6c-11ee-af6b-4feeebbbadda", 00:10:31.197 "is_configured": true, 00:10:31.197 "data_offset": 2048, 00:10:31.197 "data_size": 63488 00:10:31.197 }, 00:10:31.197 { 00:10:31.197 "name": "BaseBdev2", 00:10:31.197 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:31.197 "is_configured": false, 00:10:31.197 "data_offset": 0, 00:10:31.197 "data_size": 0 00:10:31.197 }, 00:10:31.197 { 00:10:31.197 "name": "BaseBdev3", 00:10:31.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.197 "is_configured": false, 00:10:31.197 "data_offset": 0, 00:10:31.197 "data_size": 0 00:10:31.197 }, 00:10:31.197 { 00:10:31.197 "name": "BaseBdev4", 00:10:31.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.197 "is_configured": false, 00:10:31.197 "data_offset": 0, 00:10:31.197 "data_size": 0 00:10:31.197 } 00:10:31.197 ] 00:10:31.197 }' 00:10:31.197 19:11:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:31.197 19:11:08 -- common/autotest_common.sh@10 -- # set +x 00:10:31.455 19:11:08 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:31.714 [2024-02-14 19:11:08.993943] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.714 [2024-02-14 19:11:08.993983] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be28500 name Existed_Raid, state configuring 00:10:31.714 19:11:09 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:10:31.714 19:11:09 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:31.973 19:11:09 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:32.231 BaseBdev1 00:10:32.231 19:11:09 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:10:32.231 19:11:09 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:10:32.231 19:11:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:32.231 19:11:09 -- common/autotest_common.sh@887 -- # local i 00:10:32.231 19:11:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:32.231 19:11:09 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:32.231 19:11:09 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:32.489 19:11:09 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.747 [ 00:10:32.747 { 00:10:32.747 "name": "BaseBdev1", 00:10:32.747 "aliases": [ 00:10:32.747 "cf9796cf-cb6c-11ee-af6b-4feeebbbadda" 00:10:32.747 ], 00:10:32.747 "product_name": "Malloc disk", 00:10:32.747 "block_size": 512, 00:10:32.747 "num_blocks": 65536, 00:10:32.747 "uuid": "cf9796cf-cb6c-11ee-af6b-4feeebbbadda", 00:10:32.747 "assigned_rate_limits": { 00:10:32.747 "rw_ios_per_sec": 0, 00:10:32.747 "rw_mbytes_per_sec": 0, 00:10:32.748 "r_mbytes_per_sec": 0, 00:10:32.748 "w_mbytes_per_sec": 0 00:10:32.748 }, 00:10:32.748 "claimed": false, 00:10:32.748 "zoned": false, 00:10:32.748 "supported_io_types": { 00:10:32.748 "read": true, 00:10:32.748 "write": true, 00:10:32.748 "unmap": true, 00:10:32.748 "write_zeroes": true, 00:10:32.748 "flush": true, 00:10:32.748 "reset": true, 00:10:32.748 "compare": false, 00:10:32.748 "compare_and_write": false, 00:10:32.748 "abort": true, 00:10:32.748 "nvme_admin": false, 00:10:32.748 "nvme_io": false 00:10:32.748 }, 00:10:32.748 "memory_domains": [ 00:10:32.748 { 00:10:32.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.748 "dma_device_type": 2 00:10:32.748 } 00:10:32.748 ], 00:10:32.748 
"driver_specific": {} 00:10:32.748 } 00:10:32.748 ] 00:10:32.748 19:11:09 -- common/autotest_common.sh@893 -- # return 0 00:10:32.748 19:11:09 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:33.006 [2024-02-14 19:11:10.202966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.006 [2024-02-14 19:11:10.203727] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.006 [2024-02-14 19:11:10.203775] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.006 [2024-02-14 19:11:10.203781] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:33.006 [2024-02-14 19:11:10.203789] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.006 [2024-02-14 19:11:10.203793] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:33.006 [2024-02-14 19:11:10.203800] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:33.006 19:11:10 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:10:33.006 19:11:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:33.006 19:11:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.006 19:11:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:33.006 19:11:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:33.007 19:11:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:33.007 19:11:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:33.007 19:11:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:33.007 19:11:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:33.007 19:11:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:33.007 19:11:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:33.007 19:11:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:33.007 19:11:10 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.007 19:11:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.007 19:11:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:33.007 "name": "Existed_Raid", 00:10:33.007 "uuid": "d00612da-cb6c-11ee-af6b-4feeebbbadda", 00:10:33.007 "strip_size_kb": 64, 00:10:33.007 "state": "configuring", 00:10:33.007 "raid_level": "raid0", 00:10:33.007 "superblock": true, 00:10:33.007 "num_base_bdevs": 4, 00:10:33.007 "num_base_bdevs_discovered": 1, 00:10:33.007 "num_base_bdevs_operational": 4, 00:10:33.007 "base_bdevs_list": [ 00:10:33.007 { 00:10:33.007 "name": "BaseBdev1", 00:10:33.007 "uuid": "cf9796cf-cb6c-11ee-af6b-4feeebbbadda", 00:10:33.007 "is_configured": true, 00:10:33.007 "data_offset": 2048, 00:10:33.007 "data_size": 63488 00:10:33.007 }, 00:10:33.007 { 00:10:33.007 "name": "BaseBdev2", 00:10:33.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.007 "is_configured": false, 00:10:33.007 "data_offset": 0, 00:10:33.007 "data_size": 0 00:10:33.007 }, 00:10:33.007 { 00:10:33.007 "name": "BaseBdev3", 00:10:33.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.007 "is_configured": false, 00:10:33.007 "data_offset": 0, 
00:10:33.007 "data_size": 0 00:10:33.007 }, 00:10:33.007 { 00:10:33.007 "name": "BaseBdev4", 00:10:33.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.007 "is_configured": false, 00:10:33.007 "data_offset": 0, 00:10:33.007 "data_size": 0 00:10:33.007 } 00:10:33.007 ] 00:10:33.007 }' 00:10:33.007 19:11:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:33.007 19:11:10 -- common/autotest_common.sh@10 -- # set +x 00:10:33.575 19:11:10 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:33.575 [2024-02-14 19:11:10.959143] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.575 BaseBdev2 00:10:33.575 19:11:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:10:33.575 19:11:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:10:33.575 19:11:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:33.575 19:11:10 -- common/autotest_common.sh@887 -- # local i 00:10:33.575 19:11:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:33.575 19:11:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:33.575 19:11:10 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:33.836 19:11:11 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:34.094 [ 00:10:34.094 { 00:10:34.094 "name": "BaseBdev2", 00:10:34.094 "aliases": [ 00:10:34.094 "d0796fde-cb6c-11ee-af6b-4feeebbbadda" 00:10:34.094 ], 00:10:34.094 "product_name": "Malloc disk", 00:10:34.094 "block_size": 512, 00:10:34.094 "num_blocks": 65536, 00:10:34.094 "uuid": "d0796fde-cb6c-11ee-af6b-4feeebbbadda", 00:10:34.094 "assigned_rate_limits": { 00:10:34.094 "rw_ios_per_sec": 0, 00:10:34.094 "rw_mbytes_per_sec": 0, 00:10:34.094 "r_mbytes_per_sec": 0, 00:10:34.094 "w_mbytes_per_sec": 0 00:10:34.094 }, 00:10:34.094 "claimed": true, 00:10:34.094 "claim_type": "exclusive_write", 00:10:34.094 "zoned": false, 00:10:34.094 "supported_io_types": { 00:10:34.094 "read": true, 00:10:34.094 "write": true, 00:10:34.094 "unmap": true, 00:10:34.094 "write_zeroes": true, 00:10:34.094 "flush": true, 00:10:34.094 "reset": true, 00:10:34.094 "compare": false, 00:10:34.094 "compare_and_write": false, 00:10:34.094 "abort": true, 00:10:34.094 "nvme_admin": false, 00:10:34.094 "nvme_io": false 00:10:34.094 }, 00:10:34.094 "memory_domains": [ 00:10:34.094 { 00:10:34.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.094 "dma_device_type": 2 00:10:34.094 } 00:10:34.094 ], 00:10:34.094 "driver_specific": {} 00:10:34.094 } 00:10:34.094 ] 00:10:34.094 19:11:11 -- common/autotest_common.sh@893 -- # return 0 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@122 
-- # local raid_bdev_info 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.094 19:11:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.353 19:11:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:34.353 "name": "Existed_Raid", 00:10:34.353 "uuid": "d00612da-cb6c-11ee-af6b-4feeebbbadda", 00:10:34.353 "strip_size_kb": 64, 00:10:34.353 "state": "configuring", 00:10:34.353 "raid_level": "raid0", 00:10:34.353 "superblock": true, 00:10:34.353 "num_base_bdevs": 4, 00:10:34.353 "num_base_bdevs_discovered": 2, 00:10:34.353 "num_base_bdevs_operational": 4, 00:10:34.353 "base_bdevs_list": [ 00:10:34.353 { 00:10:34.353 "name": "BaseBdev1", 00:10:34.353 "uuid": "cf9796cf-cb6c-11ee-af6b-4feeebbbadda", 00:10:34.353 "is_configured": true, 00:10:34.353 "data_offset": 2048, 00:10:34.353 "data_size": 63488 00:10:34.353 }, 00:10:34.353 { 00:10:34.353 "name": "BaseBdev2", 00:10:34.353 "uuid": "d0796fde-cb6c-11ee-af6b-4feeebbbadda", 00:10:34.353 "is_configured": true, 00:10:34.353 "data_offset": 2048, 00:10:34.353 "data_size": 63488 00:10:34.353 }, 00:10:34.353 { 00:10:34.353 "name": "BaseBdev3", 00:10:34.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.353 "is_configured": false, 00:10:34.353 "data_offset": 0, 00:10:34.353 "data_size": 0 00:10:34.353 }, 00:10:34.353 { 00:10:34.353 "name": "BaseBdev4", 00:10:34.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.353 "is_configured": false, 00:10:34.353 "data_offset": 0, 00:10:34.353 "data_size": 0 00:10:34.353 } 00:10:34.353 ] 00:10:34.353 }' 00:10:34.353 19:11:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:34.353 19:11:11 -- common/autotest_common.sh@10 -- # set +x 00:10:34.920 19:11:12 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:34.920 [2024-02-14 19:11:12.267252] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.920 BaseBdev3 00:10:34.920 19:11:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:10:34.920 19:11:12 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:10:34.920 19:11:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:34.920 19:11:12 -- common/autotest_common.sh@887 -- # local i 00:10:34.920 19:11:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:34.920 19:11:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:34.920 19:11:12 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:35.179 19:11:12 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.438 [ 00:10:35.438 { 00:10:35.438 "name": "BaseBdev3", 00:10:35.438 "aliases": [ 00:10:35.438 "d1410a6c-cb6c-11ee-af6b-4feeebbbadda" 00:10:35.438 ], 00:10:35.438 "product_name": "Malloc disk", 00:10:35.438 "block_size": 512, 00:10:35.438 "num_blocks": 65536, 00:10:35.438 "uuid": "d1410a6c-cb6c-11ee-af6b-4feeebbbadda", 00:10:35.438 "assigned_rate_limits": { 00:10:35.438 "rw_ios_per_sec": 0, 00:10:35.438 
"rw_mbytes_per_sec": 0, 00:10:35.438 "r_mbytes_per_sec": 0, 00:10:35.438 "w_mbytes_per_sec": 0 00:10:35.438 }, 00:10:35.438 "claimed": true, 00:10:35.438 "claim_type": "exclusive_write", 00:10:35.438 "zoned": false, 00:10:35.438 "supported_io_types": { 00:10:35.438 "read": true, 00:10:35.438 "write": true, 00:10:35.438 "unmap": true, 00:10:35.438 "write_zeroes": true, 00:10:35.438 "flush": true, 00:10:35.438 "reset": true, 00:10:35.438 "compare": false, 00:10:35.438 "compare_and_write": false, 00:10:35.438 "abort": true, 00:10:35.438 "nvme_admin": false, 00:10:35.438 "nvme_io": false 00:10:35.438 }, 00:10:35.438 "memory_domains": [ 00:10:35.438 { 00:10:35.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.438 "dma_device_type": 2 00:10:35.438 } 00:10:35.438 ], 00:10:35.438 "driver_specific": {} 00:10:35.438 } 00:10:35.438 ] 00:10:35.438 19:11:12 -- common/autotest_common.sh@893 -- # return 0 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:35.438 "name": "Existed_Raid", 00:10:35.438 "uuid": "d00612da-cb6c-11ee-af6b-4feeebbbadda", 00:10:35.438 "strip_size_kb": 64, 00:10:35.438 "state": "configuring", 00:10:35.438 "raid_level": "raid0", 00:10:35.438 "superblock": true, 00:10:35.438 "num_base_bdevs": 4, 00:10:35.438 "num_base_bdevs_discovered": 3, 00:10:35.438 "num_base_bdevs_operational": 4, 00:10:35.438 "base_bdevs_list": [ 00:10:35.438 { 00:10:35.438 "name": "BaseBdev1", 00:10:35.438 "uuid": "cf9796cf-cb6c-11ee-af6b-4feeebbbadda", 00:10:35.438 "is_configured": true, 00:10:35.438 "data_offset": 2048, 00:10:35.438 "data_size": 63488 00:10:35.438 }, 00:10:35.438 { 00:10:35.438 "name": "BaseBdev2", 00:10:35.438 "uuid": "d0796fde-cb6c-11ee-af6b-4feeebbbadda", 00:10:35.438 "is_configured": true, 00:10:35.438 "data_offset": 2048, 00:10:35.438 "data_size": 63488 00:10:35.438 }, 00:10:35.438 { 00:10:35.438 "name": "BaseBdev3", 00:10:35.438 "uuid": "d1410a6c-cb6c-11ee-af6b-4feeebbbadda", 00:10:35.438 "is_configured": true, 00:10:35.438 "data_offset": 2048, 00:10:35.438 "data_size": 63488 00:10:35.438 }, 00:10:35.438 { 00:10:35.438 "name": "BaseBdev4", 00:10:35.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.438 "is_configured": false, 00:10:35.438 "data_offset": 0, 00:10:35.438 "data_size": 0 00:10:35.438 } 00:10:35.438 ] 00:10:35.438 }' 00:10:35.438 19:11:12 -- bdev/bdev_raid.sh@129 
-- # xtrace_disable 00:10:35.438 19:11:12 -- common/autotest_common.sh@10 -- # set +x 00:10:36.005 19:11:13 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:10:36.005 [2024-02-14 19:11:13.307301] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.005 [2024-02-14 19:11:13.307365] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82be28a00 00:10:36.005 [2024-02-14 19:11:13.307370] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:36.005 [2024-02-14 19:11:13.307389] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82be8bec0 00:10:36.005 [2024-02-14 19:11:13.307429] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82be28a00 00:10:36.005 [2024-02-14 19:11:13.307433] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82be28a00 00:10:36.005 [2024-02-14 19:11:13.307470] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.005 BaseBdev4 00:10:36.005 19:11:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:10:36.005 19:11:13 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:10:36.005 19:11:13 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:36.005 19:11:13 -- common/autotest_common.sh@887 -- # local i 00:10:36.005 19:11:13 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:36.005 19:11:13 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:36.005 19:11:13 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:36.264 19:11:13 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:36.522 [ 00:10:36.522 { 00:10:36.522 "name": "BaseBdev4", 00:10:36.522 "aliases": [ 00:10:36.522 "d1dfbea6-cb6c-11ee-af6b-4feeebbbadda" 00:10:36.522 ], 00:10:36.522 "product_name": "Malloc disk", 00:10:36.522 "block_size": 512, 00:10:36.522 "num_blocks": 65536, 00:10:36.522 "uuid": "d1dfbea6-cb6c-11ee-af6b-4feeebbbadda", 00:10:36.522 "assigned_rate_limits": { 00:10:36.522 "rw_ios_per_sec": 0, 00:10:36.522 "rw_mbytes_per_sec": 0, 00:10:36.522 "r_mbytes_per_sec": 0, 00:10:36.522 "w_mbytes_per_sec": 0 00:10:36.522 }, 00:10:36.522 "claimed": true, 00:10:36.522 "claim_type": "exclusive_write", 00:10:36.522 "zoned": false, 00:10:36.522 "supported_io_types": { 00:10:36.522 "read": true, 00:10:36.522 "write": true, 00:10:36.522 "unmap": true, 00:10:36.522 "write_zeroes": true, 00:10:36.522 "flush": true, 00:10:36.522 "reset": true, 00:10:36.522 "compare": false, 00:10:36.522 "compare_and_write": false, 00:10:36.522 "abort": true, 00:10:36.522 "nvme_admin": false, 00:10:36.522 "nvme_io": false 00:10:36.522 }, 00:10:36.522 "memory_domains": [ 00:10:36.522 { 00:10:36.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.523 "dma_device_type": 2 00:10:36.523 } 00:10:36.523 ], 00:10:36.523 "driver_specific": {} 00:10:36.523 } 00:10:36.523 ] 00:10:36.523 19:11:13 -- common/autotest_common.sh@893 -- # return 0 00:10:36.523 19:11:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:36.523 19:11:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:36.523 19:11:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:36.523 19:11:13 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:36.523 19:11:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:36.523 19:11:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:36.523 19:11:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:36.523 19:11:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:36.523 19:11:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:36.523 19:11:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:36.523 19:11:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:36.523 19:11:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:36.523 19:11:13 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.523 19:11:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.781 19:11:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:36.781 "name": "Existed_Raid", 00:10:36.781 "uuid": "d00612da-cb6c-11ee-af6b-4feeebbbadda", 00:10:36.781 "strip_size_kb": 64, 00:10:36.781 "state": "online", 00:10:36.781 "raid_level": "raid0", 00:10:36.781 "superblock": true, 00:10:36.781 "num_base_bdevs": 4, 00:10:36.781 "num_base_bdevs_discovered": 4, 00:10:36.781 "num_base_bdevs_operational": 4, 00:10:36.781 "base_bdevs_list": [ 00:10:36.781 { 00:10:36.781 "name": "BaseBdev1", 00:10:36.781 "uuid": "cf9796cf-cb6c-11ee-af6b-4feeebbbadda", 00:10:36.781 "is_configured": true, 00:10:36.781 "data_offset": 2048, 00:10:36.781 "data_size": 63488 00:10:36.781 }, 00:10:36.781 { 00:10:36.781 "name": "BaseBdev2", 00:10:36.781 "uuid": "d0796fde-cb6c-11ee-af6b-4feeebbbadda", 00:10:36.781 "is_configured": true, 00:10:36.781 "data_offset": 2048, 00:10:36.781 "data_size": 63488 00:10:36.781 }, 00:10:36.781 { 00:10:36.781 "name": "BaseBdev3", 00:10:36.781 "uuid": "d1410a6c-cb6c-11ee-af6b-4feeebbbadda", 00:10:36.781 "is_configured": true, 00:10:36.781 "data_offset": 2048, 00:10:36.781 "data_size": 63488 00:10:36.781 }, 00:10:36.781 { 00:10:36.781 "name": "BaseBdev4", 00:10:36.781 "uuid": "d1dfbea6-cb6c-11ee-af6b-4feeebbbadda", 00:10:36.781 "is_configured": true, 00:10:36.781 "data_offset": 2048, 00:10:36.781 "data_size": 63488 00:10:36.781 } 00:10:36.781 ] 00:10:36.781 }' 00:10:36.781 19:11:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:36.781 19:11:14 -- common/autotest_common.sh@10 -- # set +x 00:10:37.040 19:11:14 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:37.363 [2024-02-14 19:11:14.503283] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.363 [2024-02-14 19:11:14.503306] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.363 [2024-02-14 19:11:14.503317] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@197 -- # return 1 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=offline 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.363 19:11:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.627 19:11:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:37.627 "name": "Existed_Raid", 00:10:37.627 "uuid": "d00612da-cb6c-11ee-af6b-4feeebbbadda", 00:10:37.627 "strip_size_kb": 64, 00:10:37.627 "state": "offline", 00:10:37.627 "raid_level": "raid0", 00:10:37.627 "superblock": true, 00:10:37.627 "num_base_bdevs": 4, 00:10:37.627 "num_base_bdevs_discovered": 3, 00:10:37.627 "num_base_bdevs_operational": 3, 00:10:37.627 "base_bdevs_list": [ 00:10:37.627 { 00:10:37.628 "name": null, 00:10:37.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.628 "is_configured": false, 00:10:37.628 "data_offset": 2048, 00:10:37.628 "data_size": 63488 00:10:37.628 }, 00:10:37.628 { 00:10:37.628 "name": "BaseBdev2", 00:10:37.628 "uuid": "d0796fde-cb6c-11ee-af6b-4feeebbbadda", 00:10:37.628 "is_configured": true, 00:10:37.628 "data_offset": 2048, 00:10:37.628 "data_size": 63488 00:10:37.628 }, 00:10:37.628 { 00:10:37.628 "name": "BaseBdev3", 00:10:37.628 "uuid": "d1410a6c-cb6c-11ee-af6b-4feeebbbadda", 00:10:37.628 "is_configured": true, 00:10:37.628 "data_offset": 2048, 00:10:37.628 "data_size": 63488 00:10:37.628 }, 00:10:37.628 { 00:10:37.628 "name": "BaseBdev4", 00:10:37.628 "uuid": "d1dfbea6-cb6c-11ee-af6b-4feeebbbadda", 00:10:37.628 "is_configured": true, 00:10:37.628 "data_offset": 2048, 00:10:37.628 "data_size": 63488 00:10:37.628 } 00:10:37.628 ] 00:10:37.628 }' 00:10:37.628 19:11:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:37.628 19:11:14 -- common/autotest_common.sh@10 -- # set +x 00:10:37.886 19:11:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:10:37.886 19:11:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:37.886 19:11:15 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.886 19:11:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:38.144 19:11:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:38.144 19:11:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.144 19:11:15 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:38.403 [2024-02-14 19:11:15.584673] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:38.403 19:11:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:38.403 19:11:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:38.403 19:11:15 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.403 19:11:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:38.662 19:11:15 -- bdev/bdev_raid.sh@274 -- # 
raid_bdev=Existed_Raid 00:10:38.662 19:11:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.662 19:11:15 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:38.662 [2024-02-14 19:11:16.033984] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:38.662 19:11:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:38.662 19:11:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:38.662 19:11:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:38.662 19:11:16 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.920 19:11:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:38.920 19:11:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.920 19:11:16 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:10:39.179 [2024-02-14 19:11:16.559234] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:39.179 [2024-02-14 19:11:16.559253] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82be28a00 name Existed_Raid, state offline 00:10:39.179 19:11:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:39.179 19:11:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:39.179 19:11:16 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.179 19:11:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:10:39.438 19:11:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:10:39.438 19:11:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:10:39.438 19:11:16 -- bdev/bdev_raid.sh@287 -- # killprocess 52846 00:10:39.438 19:11:16 -- common/autotest_common.sh@924 -- # '[' -z 52846 ']' 00:10:39.438 19:11:16 -- common/autotest_common.sh@928 -- # kill -0 52846 00:10:39.438 19:11:16 -- common/autotest_common.sh@929 -- # uname 00:10:39.697 19:11:16 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:10:39.697 19:11:16 -- common/autotest_common.sh@932 -- # ps -c -o command 52846 00:10:39.697 19:11:16 -- common/autotest_common.sh@932 -- # tail -1 00:10:39.697 19:11:16 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:10:39.697 19:11:16 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:10:39.697 killing process with pid 52846 00:10:39.697 19:11:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 52846' 00:10:39.697 19:11:16 -- common/autotest_common.sh@943 -- # kill 52846 00:10:39.697 [2024-02-14 19:11:16.865807] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.697 19:11:16 -- common/autotest_common.sh@948 -- # wait 52846 00:10:39.697 [2024-02-14 19:11:16.865854] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.697 19:11:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:10:39.697 00:10:39.697 real 0m11.612s 00:10:39.697 user 0m20.527s 00:10:39.697 sys 0m1.803s 00:10:39.697 19:11:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:39.697 19:11:17 -- common/autotest_common.sh@10 -- # set +x 00:10:39.697 ************************************ 00:10:39.697 END TEST raid_state_function_test_sb 00:10:39.697 ************************************ 00:10:39.956 19:11:17 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test 
raid_superblock_test raid0 4 00:10:39.956 19:11:17 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:10:39.956 19:11:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:39.956 19:11:17 -- common/autotest_common.sh@10 -- # set +x 00:10:39.956 ************************************ 00:10:39.956 START TEST raid_superblock_test 00:10:39.956 ************************************ 00:10:39.956 19:11:17 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid0 4 00:10:39.956 19:11:17 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@357 -- # raid_pid=53119 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@358 -- # waitforlisten 53119 /var/tmp/spdk-raid.sock 00:10:39.957 19:11:17 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:39.957 19:11:17 -- common/autotest_common.sh@817 -- # '[' -z 53119 ']' 00:10:39.957 19:11:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:39.957 19:11:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:39.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:39.957 19:11:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:39.957 19:11:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:39.957 19:11:17 -- common/autotest_common.sh@10 -- # set +x 00:10:39.957 [2024-02-14 19:11:17.155313] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
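The raid_superblock_test run that begins here builds its base devices differently: each malloc bdev is wrapped in a passthru bdev created with a fixed UUID (pt1..pt4), presumably so the raid superblock can reference stable base-bdev identities. A minimal sketch of the setup the following trace performs, assuming the same rpc.py socket (UUIDs copied from the log; the loop condenses the per-index variables pt1..pt4 and 00000000-...-000000000001..4):

  # one passthru bdev per malloc device, each with a fixed UUID
  i=1
  for m in malloc1 malloc2 malloc3 malloc4; do
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "$m"
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b "$m" -p "pt$i" \
      -u "00000000-0000-0000-0000-00000000000$i"
    i=$((i + 1))
  done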
00:10:39.957 [2024-02-14 19:11:17.155492] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:40.524 EAL: TSC is not safe to use in SMP mode 00:10:40.524 EAL: TSC is not invariant 00:10:40.524 [2024-02-14 19:11:17.929602] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.783 [2024-02-14 19:11:18.047978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.783 [2024-02-14 19:11:18.048510] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.783 [2024-02-14 19:11:18.048515] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.043 19:11:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:41.043 19:11:18 -- common/autotest_common.sh@850 -- # return 0 00:10:41.043 19:11:18 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:10:41.043 19:11:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:41.043 19:11:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:10:41.043 19:11:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:10:41.043 19:11:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:41.043 19:11:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.043 19:11:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.043 19:11:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.043 19:11:18 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:41.043 malloc1 00:10:41.043 19:11:18 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:41.301 [2024-02-14 19:11:18.664131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:41.301 [2024-02-14 19:11:18.664215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.301 [2024-02-14 19:11:18.664900] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d2b8780 00:10:41.301 [2024-02-14 19:11:18.664929] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.301 [2024-02-14 19:11:18.665995] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.301 [2024-02-14 19:11:18.666024] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:41.301 pt1 00:10:41.301 19:11:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:41.301 19:11:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:41.301 19:11:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:10:41.301 19:11:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:10:41.301 19:11:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:41.301 19:11:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.301 19:11:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.301 19:11:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.301 19:11:18 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:41.559 malloc2 00:10:41.559 19:11:18 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:41.818 [2024-02-14 19:11:19.112174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:41.818 [2024-02-14 19:11:19.112237] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:41.818 [2024-02-14 19:11:19.112272] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d2b8c80 00:10:41.818 [2024-02-14 19:11:19.112280] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:41.818 [2024-02-14 19:11:19.113087] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:41.818 [2024-02-14 19:11:19.113152] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:41.818 pt2 00:10:41.818 19:11:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:41.818 19:11:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:41.818 19:11:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:10:41.818 19:11:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:10:41.818 19:11:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:41.818 19:11:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:41.818 19:11:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:41.818 19:11:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:41.818 19:11:19 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:10:42.078 malloc3 00:10:42.078 19:11:19 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:42.336 [2024-02-14 19:11:19.608216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:42.336 [2024-02-14 19:11:19.608293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.336 [2024-02-14 19:11:19.608331] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d2b9180 00:10:42.336 [2024-02-14 19:11:19.608340] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.336 [2024-02-14 19:11:19.609169] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.336 [2024-02-14 19:11:19.609192] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:42.336 pt3 00:10:42.336 19:11:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:42.336 19:11:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:42.336 19:11:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:10:42.336 19:11:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:10:42.336 19:11:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:42.336 19:11:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.336 19:11:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.336 19:11:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.336 19:11:19 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:10:42.595 malloc4 00:10:42.595 19:11:19 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:42.853 [2024-02-14 19:11:20.092251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:42.853 [2024-02-14 19:11:20.092352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.854 [2024-02-14 19:11:20.092410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d2b9680 00:10:42.854 [2024-02-14 19:11:20.092430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.854 [2024-02-14 19:11:20.093519] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.854 [2024-02-14 19:11:20.093572] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:42.854 pt4 00:10:42.854 19:11:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:42.854 19:11:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:42.854 19:11:20 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:10:43.113 [2024-02-14 19:11:20.356315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:43.113 [2024-02-14 19:11:20.357066] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:43.113 [2024-02-14 19:11:20.357092] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:43.113 [2024-02-14 19:11:20.357103] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:43.113 [2024-02-14 19:11:20.357163] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d2b9900 00:10:43.113 [2024-02-14 19:11:20.357169] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:43.113 [2024-02-14 19:11:20.357208] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d31be20 00:10:43.113 [2024-02-14 19:11:20.357290] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d2b9900 00:10:43.113 [2024-02-14 19:11:20.357294] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d2b9900 00:10:43.113 [2024-02-14 19:11:20.357322] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.113 19:11:20 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:43.113 19:11:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:43.113 19:11:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:43.113 19:11:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:43.113 19:11:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:43.113 19:11:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:43.113 19:11:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:43.113 19:11:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:43.113 19:11:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:43.113 19:11:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:43.113 19:11:20 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.113 19:11:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.372 19:11:20 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:43.372 "name": "raid_bdev1", 00:10:43.372 "uuid": "d61359fe-cb6c-11ee-af6b-4feeebbbadda", 00:10:43.372 "strip_size_kb": 64, 00:10:43.372 "state": "online", 00:10:43.372 "raid_level": "raid0", 00:10:43.372 "superblock": true, 00:10:43.372 "num_base_bdevs": 4, 00:10:43.372 "num_base_bdevs_discovered": 4, 00:10:43.372 "num_base_bdevs_operational": 4, 00:10:43.372 "base_bdevs_list": [ 00:10:43.372 { 00:10:43.372 "name": "pt1", 00:10:43.372 "uuid": "a201d5a3-7f95-f35b-8d96-4856203f5e09", 00:10:43.372 "is_configured": true, 00:10:43.372 "data_offset": 2048, 00:10:43.372 "data_size": 63488 00:10:43.372 }, 00:10:43.372 { 00:10:43.372 "name": "pt2", 00:10:43.372 "uuid": "21f8a21b-4670-585f-b138-2e3e0bd120d7", 00:10:43.372 "is_configured": true, 00:10:43.372 "data_offset": 2048, 00:10:43.372 "data_size": 63488 00:10:43.372 }, 00:10:43.372 { 00:10:43.372 "name": "pt3", 00:10:43.372 "uuid": "d1f5cc3f-0034-3a50-8ac7-bc54fac6712f", 00:10:43.372 "is_configured": true, 00:10:43.372 "data_offset": 2048, 00:10:43.372 "data_size": 63488 00:10:43.372 }, 00:10:43.372 { 00:10:43.372 "name": "pt4", 00:10:43.372 "uuid": "d7612863-d412-4451-a477-b569db81df80", 00:10:43.372 "is_configured": true, 00:10:43.372 "data_offset": 2048, 00:10:43.372 "data_size": 63488 00:10:43.372 } 00:10:43.372 ] 00:10:43.372 }' 00:10:43.372 19:11:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:43.372 19:11:20 -- common/autotest_common.sh@10 -- # set +x 00:10:43.630 19:11:20 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:43.630 19:11:20 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:10:43.630 [2024-02-14 19:11:21.016366] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.630 19:11:21 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d61359fe-cb6c-11ee-af6b-4feeebbbadda 00:10:43.630 19:11:21 -- bdev/bdev_raid.sh@380 -- # '[' -z d61359fe-cb6c-11ee-af6b-4feeebbbadda ']' 00:10:43.630 19:11:21 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:43.889 [2024-02-14 19:11:21.272309] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:43.889 [2024-02-14 19:11:21.272328] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.889 [2024-02-14 19:11:21.272349] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.889 [2024-02-14 19:11:21.272368] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.889 [2024-02-14 19:11:21.272372] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d2b9900 name raid_bdev1, state offline 00:10:43.889 19:11:21 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:10:43.889 19:11:21 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:44.149 19:11:21 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:10:44.149 19:11:21 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:10:44.149 19:11:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.149 19:11:21 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:44.408 19:11:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.408 19:11:21 -- 
bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:44.666 19:11:21 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.666 19:11:21 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:44.925 19:11:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.925 19:11:22 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:10:45.202 19:11:22 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:45.202 19:11:22 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:45.202 19:11:22 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:10:45.202 19:11:22 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:10:45.202 19:11:22 -- common/autotest_common.sh@638 -- # local es=0 00:10:45.202 19:11:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:10:45.202 19:11:22 -- common/autotest_common.sh@626 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.202 19:11:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:45.202 19:11:22 -- common/autotest_common.sh@630 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.202 19:11:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:45.202 19:11:22 -- common/autotest_common.sh@632 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.202 19:11:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:45.202 19:11:22 -- common/autotest_common.sh@632 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.202 19:11:22 -- common/autotest_common.sh@632 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:45.202 19:11:22 -- common/autotest_common.sh@641 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:10:45.469 [2024-02-14 19:11:22.776436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:45.469 [2024-02-14 19:11:22.777198] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:45.469 [2024-02-14 19:11:22.777217] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:45.469 [2024-02-14 19:11:22.777227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:45.469 [2024-02-14 19:11:22.777240] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:10:45.469 [2024-02-14 19:11:22.777280] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:10:45.469 [2024-02-14 19:11:22.777290] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:10:45.469 [2024-02-14 19:11:22.777300] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:10:45.469 [2024-02-14 19:11:22.777308] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.469 [2024-02-14 19:11:22.777313] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d2b9680 name raid_bdev1, state configuring 00:10:45.469 request: 00:10:45.469 { 00:10:45.469 "name": "raid_bdev1", 00:10:45.469 "raid_level": "raid0", 00:10:45.469 "base_bdevs": [ 00:10:45.469 "malloc1", 00:10:45.469 "malloc2", 00:10:45.469 "malloc3", 00:10:45.469 "malloc4" 00:10:45.469 ], 00:10:45.469 "superblock": false, 00:10:45.469 "strip_size_kb": 64, 00:10:45.469 "method": "bdev_raid_create", 00:10:45.469 "req_id": 1 00:10:45.469 } 00:10:45.470 Got JSON-RPC error response 00:10:45.470 response: 00:10:45.470 { 00:10:45.470 "code": -17, 00:10:45.470 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:45.470 } 00:10:45.470 19:11:22 -- common/autotest_common.sh@641 -- # es=1 00:10:45.470 19:11:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:45.470 19:11:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:45.470 19:11:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:45.470 19:11:22 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.470 19:11:22 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:10:45.728 19:11:23 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:10:45.728 19:11:23 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:10:45.728 19:11:23 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:45.986 [2024-02-14 19:11:23.240471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:45.986 [2024-02-14 19:11:23.240506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.986 [2024-02-14 19:11:23.240539] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d2b9180 00:10:45.986 [2024-02-14 19:11:23.240547] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.986 [2024-02-14 19:11:23.241345] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.986 [2024-02-14 19:11:23.241373] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:45.986 [2024-02-14 19:11:23.241409] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:10:45.986 [2024-02-14 19:11:23.241433] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:45.986 pt1 00:10:45.986 19:11:23 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:45.986 19:11:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:45.986 19:11:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:45.986 19:11:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:45.986 19:11:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:45.986 19:11:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:45.986 19:11:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:45.986 19:11:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:45.986 19:11:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:10:45.986 19:11:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:45.986 19:11:23 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.986 19:11:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.245 19:11:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:46.245 "name": "raid_bdev1", 00:10:46.245 "uuid": "d61359fe-cb6c-11ee-af6b-4feeebbbadda", 00:10:46.245 "strip_size_kb": 64, 00:10:46.245 "state": "configuring", 00:10:46.245 "raid_level": "raid0", 00:10:46.245 "superblock": true, 00:10:46.245 "num_base_bdevs": 4, 00:10:46.245 "num_base_bdevs_discovered": 1, 00:10:46.245 "num_base_bdevs_operational": 4, 00:10:46.245 "base_bdevs_list": [ 00:10:46.245 { 00:10:46.245 "name": "pt1", 00:10:46.245 "uuid": "a201d5a3-7f95-f35b-8d96-4856203f5e09", 00:10:46.245 "is_configured": true, 00:10:46.245 "data_offset": 2048, 00:10:46.245 "data_size": 63488 00:10:46.245 }, 00:10:46.245 { 00:10:46.245 "name": null, 00:10:46.245 "uuid": "21f8a21b-4670-585f-b138-2e3e0bd120d7", 00:10:46.245 "is_configured": false, 00:10:46.245 "data_offset": 2048, 00:10:46.245 "data_size": 63488 00:10:46.245 }, 00:10:46.245 { 00:10:46.245 "name": null, 00:10:46.245 "uuid": "d1f5cc3f-0034-3a50-8ac7-bc54fac6712f", 00:10:46.245 "is_configured": false, 00:10:46.245 "data_offset": 2048, 00:10:46.245 "data_size": 63488 00:10:46.245 }, 00:10:46.245 { 00:10:46.245 "name": null, 00:10:46.245 "uuid": "d7612863-d412-4451-a477-b569db81df80", 00:10:46.245 "is_configured": false, 00:10:46.245 "data_offset": 2048, 00:10:46.245 "data_size": 63488 00:10:46.245 } 00:10:46.245 ] 00:10:46.245 }' 00:10:46.245 19:11:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:46.245 19:11:23 -- common/autotest_common.sh@10 -- # set +x 00:10:46.504 19:11:23 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:10:46.504 19:11:23 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:46.762 [2024-02-14 19:11:24.012514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:46.762 [2024-02-14 19:11:24.012552] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.762 [2024-02-14 19:11:24.012585] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d2b8780 00:10:46.762 [2024-02-14 19:11:24.012593] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.762 [2024-02-14 19:11:24.012710] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.762 [2024-02-14 19:11:24.012719] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:46.762 [2024-02-14 19:11:24.012736] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:46.762 [2024-02-14 19:11:24.012744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:46.762 pt2 00:10:46.762 19:11:24 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:47.019 [2024-02-14 19:11:24.188540] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:47.019 19:11:24 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:47.019 19:11:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:47.019 
19:11:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:47.019 19:11:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:47.019 19:11:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:47.019 19:11:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:47.019 19:11:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:47.019 19:11:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:47.019 19:11:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:47.019 19:11:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:47.019 19:11:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.019 19:11:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.019 19:11:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:47.019 "name": "raid_bdev1", 00:10:47.019 "uuid": "d61359fe-cb6c-11ee-af6b-4feeebbbadda", 00:10:47.019 "strip_size_kb": 64, 00:10:47.019 "state": "configuring", 00:10:47.019 "raid_level": "raid0", 00:10:47.019 "superblock": true, 00:10:47.019 "num_base_bdevs": 4, 00:10:47.019 "num_base_bdevs_discovered": 1, 00:10:47.019 "num_base_bdevs_operational": 4, 00:10:47.019 "base_bdevs_list": [ 00:10:47.019 { 00:10:47.019 "name": "pt1", 00:10:47.019 "uuid": "a201d5a3-7f95-f35b-8d96-4856203f5e09", 00:10:47.019 "is_configured": true, 00:10:47.019 "data_offset": 2048, 00:10:47.019 "data_size": 63488 00:10:47.019 }, 00:10:47.019 { 00:10:47.019 "name": null, 00:10:47.019 "uuid": "21f8a21b-4670-585f-b138-2e3e0bd120d7", 00:10:47.019 "is_configured": false, 00:10:47.019 "data_offset": 2048, 00:10:47.019 "data_size": 63488 00:10:47.019 }, 00:10:47.019 { 00:10:47.019 "name": null, 00:10:47.019 "uuid": "d1f5cc3f-0034-3a50-8ac7-bc54fac6712f", 00:10:47.019 "is_configured": false, 00:10:47.019 "data_offset": 2048, 00:10:47.019 "data_size": 63488 00:10:47.019 }, 00:10:47.019 { 00:10:47.019 "name": null, 00:10:47.019 "uuid": "d7612863-d412-4451-a477-b569db81df80", 00:10:47.019 "is_configured": false, 00:10:47.019 "data_offset": 2048, 00:10:47.019 "data_size": 63488 00:10:47.019 } 00:10:47.019 ] 00:10:47.019 }' 00:10:47.019 19:11:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:47.019 19:11:24 -- common/autotest_common.sh@10 -- # set +x 00:10:47.278 19:11:24 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:10:47.278 19:11:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:47.278 19:11:24 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:47.536 [2024-02-14 19:11:24.916628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:47.536 [2024-02-14 19:11:24.916666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.536 [2024-02-14 19:11:24.916693] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d2b8780 00:10:47.536 [2024-02-14 19:11:24.916701] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.536 [2024-02-14 19:11:24.916813] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.536 [2024-02-14 19:11:24.916827] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:47.536 [2024-02-14 19:11:24.916843] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt2 00:10:47.536 [2024-02-14 19:11:24.916850] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:47.536 pt2 00:10:47.536 19:11:24 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:47.536 19:11:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:47.536 19:11:24 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:47.795 [2024-02-14 19:11:25.152632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:47.795 [2024-02-14 19:11:25.152660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.795 [2024-02-14 19:11:25.152678] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d2b9b80 00:10:47.795 [2024-02-14 19:11:25.152686] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.795 [2024-02-14 19:11:25.152753] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.795 [2024-02-14 19:11:25.152762] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:47.795 [2024-02-14 19:11:25.152776] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:10:47.795 [2024-02-14 19:11:25.152782] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:47.795 pt3 00:10:47.795 19:11:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:47.795 19:11:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:47.795 19:11:25 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:48.054 [2024-02-14 19:11:25.380681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:48.054 [2024-02-14 19:11:25.380711] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.054 [2024-02-14 19:11:25.380727] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d2b9900 00:10:48.054 [2024-02-14 19:11:25.380734] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.054 [2024-02-14 19:11:25.380805] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.054 [2024-02-14 19:11:25.380814] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:48.054 [2024-02-14 19:11:25.380829] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:10:48.054 [2024-02-14 19:11:25.380838] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:48.054 [2024-02-14 19:11:25.380862] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d2b8c80 00:10:48.054 [2024-02-14 19:11:25.380865] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:48.054 [2024-02-14 19:11:25.380885] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d31be20 00:10:48.054 [2024-02-14 19:11:25.380935] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d2b8c80 00:10:48.054 [2024-02-14 19:11:25.380939] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82d2b8c80 00:10:48.054 [2024-02-14 19:11:25.380956] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:10:48.054 pt4 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.054 19:11:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.313 19:11:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:48.313 "name": "raid_bdev1", 00:10:48.313 "uuid": "d61359fe-cb6c-11ee-af6b-4feeebbbadda", 00:10:48.313 "strip_size_kb": 64, 00:10:48.313 "state": "online", 00:10:48.313 "raid_level": "raid0", 00:10:48.313 "superblock": true, 00:10:48.313 "num_base_bdevs": 4, 00:10:48.313 "num_base_bdevs_discovered": 4, 00:10:48.313 "num_base_bdevs_operational": 4, 00:10:48.313 "base_bdevs_list": [ 00:10:48.313 { 00:10:48.313 "name": "pt1", 00:10:48.313 "uuid": "a201d5a3-7f95-f35b-8d96-4856203f5e09", 00:10:48.313 "is_configured": true, 00:10:48.313 "data_offset": 2048, 00:10:48.313 "data_size": 63488 00:10:48.313 }, 00:10:48.313 { 00:10:48.313 "name": "pt2", 00:10:48.313 "uuid": "21f8a21b-4670-585f-b138-2e3e0bd120d7", 00:10:48.313 "is_configured": true, 00:10:48.313 "data_offset": 2048, 00:10:48.313 "data_size": 63488 00:10:48.313 }, 00:10:48.313 { 00:10:48.313 "name": "pt3", 00:10:48.313 "uuid": "d1f5cc3f-0034-3a50-8ac7-bc54fac6712f", 00:10:48.313 "is_configured": true, 00:10:48.313 "data_offset": 2048, 00:10:48.313 "data_size": 63488 00:10:48.313 }, 00:10:48.313 { 00:10:48.313 "name": "pt4", 00:10:48.313 "uuid": "d7612863-d412-4451-a477-b569db81df80", 00:10:48.313 "is_configured": true, 00:10:48.313 "data_offset": 2048, 00:10:48.313 "data_size": 63488 00:10:48.313 } 00:10:48.313 ] 00:10:48.313 }' 00:10:48.313 19:11:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:48.313 19:11:25 -- common/autotest_common.sh@10 -- # set +x 00:10:48.571 19:11:25 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:48.571 19:11:25 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:10:48.830 [2024-02-14 19:11:26.056763] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.830 19:11:26 -- bdev/bdev_raid.sh@430 -- # '[' d61359fe-cb6c-11ee-af6b-4feeebbbadda '!=' d61359fe-cb6c-11ee-af6b-4feeebbbadda ']' 00:10:48.830 19:11:26 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:10:48.830 19:11:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:48.830 19:11:26 -- bdev/bdev_raid.sh@197 -- # return 1 00:10:48.830 19:11:26 -- bdev/bdev_raid.sh@511 -- # killprocess 53119 00:10:48.830 19:11:26 -- common/autotest_common.sh@924 -- # '[' -z 
53119 ']' 00:10:48.830 19:11:26 -- common/autotest_common.sh@928 -- # kill -0 53119 00:10:48.830 19:11:26 -- common/autotest_common.sh@929 -- # uname 00:10:48.830 19:11:26 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:10:48.830 19:11:26 -- common/autotest_common.sh@932 -- # ps -c -o command 53119 00:10:48.830 19:11:26 -- common/autotest_common.sh@932 -- # tail -1 00:10:48.830 19:11:26 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:10:48.830 19:11:26 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:10:48.830 killing process with pid 53119 00:10:48.830 19:11:26 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 53119' 00:10:48.830 19:11:26 -- common/autotest_common.sh@943 -- # kill 53119 00:10:48.830 [2024-02-14 19:11:26.085320] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:48.830 [2024-02-14 19:11:26.085366] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.830 [2024-02-14 19:11:26.085384] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.830 [2024-02-14 19:11:26.085388] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d2b8c80 name raid_bdev1, state offline 00:10:48.830 19:11:26 -- common/autotest_common.sh@948 -- # wait 53119 00:10:48.830 [2024-02-14 19:11:26.121643] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:49.089 19:11:26 -- bdev/bdev_raid.sh@513 -- # return 0 00:10:49.089 00:10:49.089 real 0m9.217s 00:10:49.089 user 0m15.656s 00:10:49.089 sys 0m1.875s 00:10:49.089 19:11:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:49.089 19:11:26 -- common/autotest_common.sh@10 -- # set +x 00:10:49.089 ************************************ 00:10:49.089 END TEST raid_superblock_test 00:10:49.089 ************************************ 00:10:49.089 19:11:26 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:10:49.089 19:11:26 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:49.089 19:11:26 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:10:49.089 19:11:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:49.089 19:11:26 -- common/autotest_common.sh@10 -- # set +x 00:10:49.089 ************************************ 00:10:49.089 START TEST raid_state_function_test 00:10:49.089 ************************************ 00:10:49.089 19:11:26 -- common/autotest_common.sh@1102 -- # raid_state_function_test concat 4 false 00:10:49.089 19:11:26 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:10:49.089 19:11:26 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:10:49.089 19:11:26 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:10:49.089 19:11:26 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:10:49.089 19:11:26 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:10:49.089 19:11:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@206 -- # (( 
i++ )) 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@226 -- # raid_pid=53304 00:10:49.090 Process raid pid: 53304 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 53304' 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@228 -- # waitforlisten 53304 /var/tmp/spdk-raid.sock 00:10:49.090 19:11:26 -- common/autotest_common.sh@817 -- # '[' -z 53304 ']' 00:10:49.090 19:11:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:49.090 19:11:26 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:49.090 19:11:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:49.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:49.090 19:11:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:49.090 19:11:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:49.090 19:11:26 -- common/autotest_common.sh@10 -- # set +x 00:10:49.090 [2024-02-14 19:11:26.420988] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:10:49.090 [2024-02-14 19:11:26.421274] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:49.656 EAL: TSC is not safe to use in SMP mode 00:10:49.656 EAL: TSC is not invariant 00:10:49.656 [2024-02-14 19:11:26.883185] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.656 [2024-02-14 19:11:27.002174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.656 [2024-02-14 19:11:27.002700] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.656 [2024-02-14 19:11:27.002715] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.223 19:11:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:50.223 19:11:27 -- common/autotest_common.sh@850 -- # return 0 00:10:50.223 19:11:27 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:50.481 [2024-02-14 19:11:27.654365] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:50.481 [2024-02-14 19:11:27.654449] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:50.482 [2024-02-14 19:11:27.654454] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:50.482 [2024-02-14 19:11:27.654463] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:50.482 [2024-02-14 19:11:27.654467] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:50.482 [2024-02-14 19:11:27.654474] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:50.482 [2024-02-14 19:11:27.654477] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:50.482 [2024-02-14 19:11:27.654483] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:50.482 "name": "Existed_Raid", 00:10:50.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.482 "strip_size_kb": 64, 00:10:50.482 "state": "configuring", 00:10:50.482 "raid_level": "concat", 00:10:50.482 "superblock": false, 00:10:50.482 "num_base_bdevs": 4, 00:10:50.482 
"num_base_bdevs_discovered": 0, 00:10:50.482 "num_base_bdevs_operational": 4, 00:10:50.482 "base_bdevs_list": [ 00:10:50.482 { 00:10:50.482 "name": "BaseBdev1", 00:10:50.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.482 "is_configured": false, 00:10:50.482 "data_offset": 0, 00:10:50.482 "data_size": 0 00:10:50.482 }, 00:10:50.482 { 00:10:50.482 "name": "BaseBdev2", 00:10:50.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.482 "is_configured": false, 00:10:50.482 "data_offset": 0, 00:10:50.482 "data_size": 0 00:10:50.482 }, 00:10:50.482 { 00:10:50.482 "name": "BaseBdev3", 00:10:50.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.482 "is_configured": false, 00:10:50.482 "data_offset": 0, 00:10:50.482 "data_size": 0 00:10:50.482 }, 00:10:50.482 { 00:10:50.482 "name": "BaseBdev4", 00:10:50.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.482 "is_configured": false, 00:10:50.482 "data_offset": 0, 00:10:50.482 "data_size": 0 00:10:50.482 } 00:10:50.482 ] 00:10:50.482 }' 00:10:50.482 19:11:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:50.482 19:11:27 -- common/autotest_common.sh@10 -- # set +x 00:10:50.740 19:11:28 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:50.999 [2024-02-14 19:11:28.350362] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:50.999 [2024-02-14 19:11:28.350391] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d7e2500 name Existed_Raid, state configuring 00:10:50.999 19:11:28 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:51.258 [2024-02-14 19:11:28.586391] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.258 [2024-02-14 19:11:28.586450] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.258 [2024-02-14 19:11:28.586455] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:51.258 [2024-02-14 19:11:28.586464] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:51.258 [2024-02-14 19:11:28.586467] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:51.258 [2024-02-14 19:11:28.586475] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:51.258 [2024-02-14 19:11:28.586478] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:51.258 [2024-02-14 19:11:28.586485] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:51.258 19:11:28 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:51.517 [2024-02-14 19:11:28.855737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.517 BaseBdev1 00:10:51.517 19:11:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:10:51.517 19:11:28 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:10:51.517 19:11:28 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:51.517 19:11:28 -- common/autotest_common.sh@887 -- # local i 00:10:51.517 19:11:28 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 
00:10:51.517 19:11:28 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:51.517 19:11:28 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:51.775 19:11:29 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:52.034 [ 00:10:52.034 { 00:10:52.034 "name": "BaseBdev1", 00:10:52.034 "aliases": [ 00:10:52.034 "db240f7f-cb6c-11ee-af6b-4feeebbbadda" 00:10:52.034 ], 00:10:52.034 "product_name": "Malloc disk", 00:10:52.034 "block_size": 512, 00:10:52.034 "num_blocks": 65536, 00:10:52.034 "uuid": "db240f7f-cb6c-11ee-af6b-4feeebbbadda", 00:10:52.034 "assigned_rate_limits": { 00:10:52.034 "rw_ios_per_sec": 0, 00:10:52.034 "rw_mbytes_per_sec": 0, 00:10:52.034 "r_mbytes_per_sec": 0, 00:10:52.034 "w_mbytes_per_sec": 0 00:10:52.034 }, 00:10:52.034 "claimed": true, 00:10:52.034 "claim_type": "exclusive_write", 00:10:52.034 "zoned": false, 00:10:52.034 "supported_io_types": { 00:10:52.034 "read": true, 00:10:52.034 "write": true, 00:10:52.034 "unmap": true, 00:10:52.034 "write_zeroes": true, 00:10:52.034 "flush": true, 00:10:52.034 "reset": true, 00:10:52.034 "compare": false, 00:10:52.034 "compare_and_write": false, 00:10:52.034 "abort": true, 00:10:52.034 "nvme_admin": false, 00:10:52.034 "nvme_io": false 00:10:52.034 }, 00:10:52.034 "memory_domains": [ 00:10:52.034 { 00:10:52.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.034 "dma_device_type": 2 00:10:52.034 } 00:10:52.034 ], 00:10:52.034 "driver_specific": {} 00:10:52.034 } 00:10:52.034 ] 00:10:52.034 19:11:29 -- common/autotest_common.sh@893 -- # return 0 00:10:52.034 19:11:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:52.034 19:11:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:52.034 19:11:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:52.034 19:11:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:52.034 19:11:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:52.034 19:11:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:52.034 19:11:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:52.034 19:11:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:52.034 19:11:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:52.034 19:11:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:52.034 19:11:29 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.034 19:11:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.292 19:11:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:52.292 "name": "Existed_Raid", 00:10:52.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.292 "strip_size_kb": 64, 00:10:52.292 "state": "configuring", 00:10:52.292 "raid_level": "concat", 00:10:52.292 "superblock": false, 00:10:52.292 "num_base_bdevs": 4, 00:10:52.292 "num_base_bdevs_discovered": 1, 00:10:52.292 "num_base_bdevs_operational": 4, 00:10:52.292 "base_bdevs_list": [ 00:10:52.292 { 00:10:52.292 "name": "BaseBdev1", 00:10:52.292 "uuid": "db240f7f-cb6c-11ee-af6b-4feeebbbadda", 00:10:52.292 "is_configured": true, 00:10:52.292 "data_offset": 0, 00:10:52.292 "data_size": 65536 00:10:52.292 }, 00:10:52.292 { 00:10:52.292 "name": "BaseBdev2", 00:10:52.292 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:52.292 "is_configured": false, 00:10:52.292 "data_offset": 0, 00:10:52.292 "data_size": 0 00:10:52.292 }, 00:10:52.292 { 00:10:52.292 "name": "BaseBdev3", 00:10:52.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.292 "is_configured": false, 00:10:52.292 "data_offset": 0, 00:10:52.292 "data_size": 0 00:10:52.292 }, 00:10:52.292 { 00:10:52.292 "name": "BaseBdev4", 00:10:52.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.292 "is_configured": false, 00:10:52.292 "data_offset": 0, 00:10:52.292 "data_size": 0 00:10:52.292 } 00:10:52.292 ] 00:10:52.292 }' 00:10:52.292 19:11:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:52.292 19:11:29 -- common/autotest_common.sh@10 -- # set +x 00:10:52.550 19:11:29 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:52.809 [2024-02-14 19:11:29.998506] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.809 [2024-02-14 19:11:29.998544] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d7e2500 name Existed_Raid, state configuring 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:52.809 [2024-02-14 19:11:30.182538] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.809 [2024-02-14 19:11:30.183632] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.809 [2024-02-14 19:11:30.183681] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.809 [2024-02-14 19:11:30.183687] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.809 [2024-02-14 19:11:30.183695] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.809 [2024-02-14 19:11:30.183699] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:52.809 [2024-02-14 19:11:30.183706] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.809 19:11:30 -- bdev/bdev_raid.sh@127 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:10:53.068 19:11:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:53.068 "name": "Existed_Raid", 00:10:53.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.068 "strip_size_kb": 64, 00:10:53.068 "state": "configuring", 00:10:53.068 "raid_level": "concat", 00:10:53.068 "superblock": false, 00:10:53.068 "num_base_bdevs": 4, 00:10:53.068 "num_base_bdevs_discovered": 1, 00:10:53.068 "num_base_bdevs_operational": 4, 00:10:53.068 "base_bdevs_list": [ 00:10:53.068 { 00:10:53.068 "name": "BaseBdev1", 00:10:53.068 "uuid": "db240f7f-cb6c-11ee-af6b-4feeebbbadda", 00:10:53.068 "is_configured": true, 00:10:53.068 "data_offset": 0, 00:10:53.068 "data_size": 65536 00:10:53.068 }, 00:10:53.068 { 00:10:53.068 "name": "BaseBdev2", 00:10:53.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.068 "is_configured": false, 00:10:53.068 "data_offset": 0, 00:10:53.068 "data_size": 0 00:10:53.068 }, 00:10:53.068 { 00:10:53.068 "name": "BaseBdev3", 00:10:53.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.068 "is_configured": false, 00:10:53.068 "data_offset": 0, 00:10:53.068 "data_size": 0 00:10:53.068 }, 00:10:53.068 { 00:10:53.068 "name": "BaseBdev4", 00:10:53.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.068 "is_configured": false, 00:10:53.068 "data_offset": 0, 00:10:53.068 "data_size": 0 00:10:53.068 } 00:10:53.068 ] 00:10:53.068 }' 00:10:53.068 19:11:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:53.068 19:11:30 -- common/autotest_common.sh@10 -- # set +x 00:10:53.327 19:11:30 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:53.585 [2024-02-14 19:11:30.838810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.585 BaseBdev2 00:10:53.585 19:11:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:10:53.585 19:11:30 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:10:53.585 19:11:30 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:53.585 19:11:30 -- common/autotest_common.sh@887 -- # local i 00:10:53.585 19:11:30 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:53.585 19:11:30 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:53.585 19:11:30 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:53.844 19:11:31 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:53.844 [ 00:10:53.844 { 00:10:53.844 "name": "BaseBdev2", 00:10:53.844 "aliases": [ 00:10:53.844 "dc52d4c1-cb6c-11ee-af6b-4feeebbbadda" 00:10:53.844 ], 00:10:53.844 "product_name": "Malloc disk", 00:10:53.844 "block_size": 512, 00:10:53.844 "num_blocks": 65536, 00:10:53.844 "uuid": "dc52d4c1-cb6c-11ee-af6b-4feeebbbadda", 00:10:53.844 "assigned_rate_limits": { 00:10:53.844 "rw_ios_per_sec": 0, 00:10:53.844 "rw_mbytes_per_sec": 0, 00:10:53.844 "r_mbytes_per_sec": 0, 00:10:53.844 "w_mbytes_per_sec": 0 00:10:53.844 }, 00:10:53.844 "claimed": true, 00:10:53.844 "claim_type": "exclusive_write", 00:10:53.844 "zoned": false, 00:10:53.844 "supported_io_types": { 00:10:53.844 "read": true, 00:10:53.844 "write": true, 00:10:53.844 "unmap": true, 00:10:53.844 "write_zeroes": true, 00:10:53.844 "flush": true, 00:10:53.844 "reset": true, 00:10:53.844 "compare": false, 00:10:53.844 
"compare_and_write": false, 00:10:53.844 "abort": true, 00:10:53.844 "nvme_admin": false, 00:10:53.844 "nvme_io": false 00:10:53.844 }, 00:10:53.844 "memory_domains": [ 00:10:53.844 { 00:10:53.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.844 "dma_device_type": 2 00:10:53.844 } 00:10:53.844 ], 00:10:53.844 "driver_specific": {} 00:10:53.844 } 00:10:53.844 ] 00:10:53.844 19:11:31 -- common/autotest_common.sh@893 -- # return 0 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.844 19:11:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.102 19:11:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:54.102 "name": "Existed_Raid", 00:10:54.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.102 "strip_size_kb": 64, 00:10:54.102 "state": "configuring", 00:10:54.102 "raid_level": "concat", 00:10:54.102 "superblock": false, 00:10:54.102 "num_base_bdevs": 4, 00:10:54.102 "num_base_bdevs_discovered": 2, 00:10:54.102 "num_base_bdevs_operational": 4, 00:10:54.102 "base_bdevs_list": [ 00:10:54.102 { 00:10:54.102 "name": "BaseBdev1", 00:10:54.102 "uuid": "db240f7f-cb6c-11ee-af6b-4feeebbbadda", 00:10:54.102 "is_configured": true, 00:10:54.102 "data_offset": 0, 00:10:54.102 "data_size": 65536 00:10:54.102 }, 00:10:54.102 { 00:10:54.102 "name": "BaseBdev2", 00:10:54.102 "uuid": "dc52d4c1-cb6c-11ee-af6b-4feeebbbadda", 00:10:54.102 "is_configured": true, 00:10:54.102 "data_offset": 0, 00:10:54.102 "data_size": 65536 00:10:54.102 }, 00:10:54.103 { 00:10:54.103 "name": "BaseBdev3", 00:10:54.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.103 "is_configured": false, 00:10:54.103 "data_offset": 0, 00:10:54.103 "data_size": 0 00:10:54.103 }, 00:10:54.103 { 00:10:54.103 "name": "BaseBdev4", 00:10:54.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.103 "is_configured": false, 00:10:54.103 "data_offset": 0, 00:10:54.103 "data_size": 0 00:10:54.103 } 00:10:54.103 ] 00:10:54.103 }' 00:10:54.103 19:11:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:54.103 19:11:31 -- common/autotest_common.sh@10 -- # set +x 00:10:54.361 19:11:31 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:54.620 [2024-02-14 19:11:31.922938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.620 BaseBdev3 00:10:54.620 19:11:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 
00:10:54.620 19:11:31 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:10:54.620 19:11:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:54.620 19:11:31 -- common/autotest_common.sh@887 -- # local i 00:10:54.620 19:11:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:54.620 19:11:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:54.620 19:11:31 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:54.878 19:11:32 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:55.137 [ 00:10:55.137 { 00:10:55.137 "name": "BaseBdev3", 00:10:55.137 "aliases": [ 00:10:55.137 "dcf8418c-cb6c-11ee-af6b-4feeebbbadda" 00:10:55.137 ], 00:10:55.137 "product_name": "Malloc disk", 00:10:55.137 "block_size": 512, 00:10:55.137 "num_blocks": 65536, 00:10:55.137 "uuid": "dcf8418c-cb6c-11ee-af6b-4feeebbbadda", 00:10:55.137 "assigned_rate_limits": { 00:10:55.137 "rw_ios_per_sec": 0, 00:10:55.137 "rw_mbytes_per_sec": 0, 00:10:55.137 "r_mbytes_per_sec": 0, 00:10:55.137 "w_mbytes_per_sec": 0 00:10:55.137 }, 00:10:55.137 "claimed": true, 00:10:55.137 "claim_type": "exclusive_write", 00:10:55.137 "zoned": false, 00:10:55.137 "supported_io_types": { 00:10:55.137 "read": true, 00:10:55.137 "write": true, 00:10:55.137 "unmap": true, 00:10:55.137 "write_zeroes": true, 00:10:55.137 "flush": true, 00:10:55.137 "reset": true, 00:10:55.137 "compare": false, 00:10:55.137 "compare_and_write": false, 00:10:55.137 "abort": true, 00:10:55.137 "nvme_admin": false, 00:10:55.137 "nvme_io": false 00:10:55.137 }, 00:10:55.137 "memory_domains": [ 00:10:55.137 { 00:10:55.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.137 "dma_device_type": 2 00:10:55.137 } 00:10:55.137 ], 00:10:55.137 "driver_specific": {} 00:10:55.137 } 00:10:55.137 ] 00:10:55.137 19:11:32 -- common/autotest_common.sh@893 -- # return 0 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:55.137 "name": "Existed_Raid", 00:10:55.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.137 "strip_size_kb": 64, 00:10:55.137 "state": "configuring", 00:10:55.137 "raid_level": "concat", 00:10:55.137 "superblock": false, 
00:10:55.137 "num_base_bdevs": 4, 00:10:55.137 "num_base_bdevs_discovered": 3, 00:10:55.137 "num_base_bdevs_operational": 4, 00:10:55.137 "base_bdevs_list": [ 00:10:55.137 { 00:10:55.137 "name": "BaseBdev1", 00:10:55.137 "uuid": "db240f7f-cb6c-11ee-af6b-4feeebbbadda", 00:10:55.137 "is_configured": true, 00:10:55.137 "data_offset": 0, 00:10:55.137 "data_size": 65536 00:10:55.137 }, 00:10:55.137 { 00:10:55.137 "name": "BaseBdev2", 00:10:55.137 "uuid": "dc52d4c1-cb6c-11ee-af6b-4feeebbbadda", 00:10:55.137 "is_configured": true, 00:10:55.137 "data_offset": 0, 00:10:55.137 "data_size": 65536 00:10:55.137 }, 00:10:55.137 { 00:10:55.137 "name": "BaseBdev3", 00:10:55.137 "uuid": "dcf8418c-cb6c-11ee-af6b-4feeebbbadda", 00:10:55.137 "is_configured": true, 00:10:55.137 "data_offset": 0, 00:10:55.137 "data_size": 65536 00:10:55.137 }, 00:10:55.137 { 00:10:55.137 "name": "BaseBdev4", 00:10:55.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.137 "is_configured": false, 00:10:55.137 "data_offset": 0, 00:10:55.137 "data_size": 0 00:10:55.137 } 00:10:55.137 ] 00:10:55.137 }' 00:10:55.137 19:11:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:55.137 19:11:32 -- common/autotest_common.sh@10 -- # set +x 00:10:55.705 19:11:32 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:10:55.705 [2024-02-14 19:11:32.983084] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.705 [2024-02-14 19:11:32.983126] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82d7e2a00 00:10:55.705 [2024-02-14 19:11:32.983130] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:55.705 [2024-02-14 19:11:32.983153] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82d845ec0 00:10:55.705 [2024-02-14 19:11:32.983274] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82d7e2a00 00:10:55.705 [2024-02-14 19:11:32.983278] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82d7e2a00 00:10:55.705 [2024-02-14 19:11:32.983311] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.705 BaseBdev4 00:10:55.705 19:11:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:10:55.705 19:11:32 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:10:55.705 19:11:32 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:10:55.705 19:11:32 -- common/autotest_common.sh@887 -- # local i 00:10:55.705 19:11:32 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:10:55.705 19:11:32 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:10:55.705 19:11:32 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:55.964 19:11:33 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:56.222 [ 00:10:56.222 { 00:10:56.222 "name": "BaseBdev4", 00:10:56.222 "aliases": [ 00:10:56.222 "dd9a05b0-cb6c-11ee-af6b-4feeebbbadda" 00:10:56.222 ], 00:10:56.222 "product_name": "Malloc disk", 00:10:56.222 "block_size": 512, 00:10:56.222 "num_blocks": 65536, 00:10:56.222 "uuid": "dd9a05b0-cb6c-11ee-af6b-4feeebbbadda", 00:10:56.222 "assigned_rate_limits": { 00:10:56.222 "rw_ios_per_sec": 0, 00:10:56.222 "rw_mbytes_per_sec": 0, 00:10:56.222 
"r_mbytes_per_sec": 0, 00:10:56.222 "w_mbytes_per_sec": 0 00:10:56.222 }, 00:10:56.222 "claimed": true, 00:10:56.222 "claim_type": "exclusive_write", 00:10:56.222 "zoned": false, 00:10:56.222 "supported_io_types": { 00:10:56.222 "read": true, 00:10:56.222 "write": true, 00:10:56.222 "unmap": true, 00:10:56.222 "write_zeroes": true, 00:10:56.222 "flush": true, 00:10:56.222 "reset": true, 00:10:56.222 "compare": false, 00:10:56.222 "compare_and_write": false, 00:10:56.222 "abort": true, 00:10:56.222 "nvme_admin": false, 00:10:56.222 "nvme_io": false 00:10:56.222 }, 00:10:56.222 "memory_domains": [ 00:10:56.222 { 00:10:56.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.222 "dma_device_type": 2 00:10:56.222 } 00:10:56.222 ], 00:10:56.222 "driver_specific": {} 00:10:56.222 } 00:10:56.222 ] 00:10:56.222 19:11:33 -- common/autotest_common.sh@893 -- # return 0 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.222 19:11:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.480 19:11:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:56.480 "name": "Existed_Raid", 00:10:56.480 "uuid": "dd9a0dc7-cb6c-11ee-af6b-4feeebbbadda", 00:10:56.480 "strip_size_kb": 64, 00:10:56.480 "state": "online", 00:10:56.480 "raid_level": "concat", 00:10:56.480 "superblock": false, 00:10:56.480 "num_base_bdevs": 4, 00:10:56.480 "num_base_bdevs_discovered": 4, 00:10:56.480 "num_base_bdevs_operational": 4, 00:10:56.480 "base_bdevs_list": [ 00:10:56.480 { 00:10:56.480 "name": "BaseBdev1", 00:10:56.480 "uuid": "db240f7f-cb6c-11ee-af6b-4feeebbbadda", 00:10:56.480 "is_configured": true, 00:10:56.480 "data_offset": 0, 00:10:56.480 "data_size": 65536 00:10:56.480 }, 00:10:56.480 { 00:10:56.480 "name": "BaseBdev2", 00:10:56.480 "uuid": "dc52d4c1-cb6c-11ee-af6b-4feeebbbadda", 00:10:56.480 "is_configured": true, 00:10:56.480 "data_offset": 0, 00:10:56.480 "data_size": 65536 00:10:56.480 }, 00:10:56.480 { 00:10:56.480 "name": "BaseBdev3", 00:10:56.480 "uuid": "dcf8418c-cb6c-11ee-af6b-4feeebbbadda", 00:10:56.481 "is_configured": true, 00:10:56.481 "data_offset": 0, 00:10:56.481 "data_size": 65536 00:10:56.481 }, 00:10:56.481 { 00:10:56.481 "name": "BaseBdev4", 00:10:56.481 "uuid": "dd9a05b0-cb6c-11ee-af6b-4feeebbbadda", 00:10:56.481 "is_configured": true, 00:10:56.481 "data_offset": 0, 00:10:56.481 "data_size": 65536 00:10:56.481 } 00:10:56.481 ] 00:10:56.481 }' 00:10:56.481 19:11:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:56.481 19:11:33 -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.738 19:11:33 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:56.738 [2024-02-14 19:11:34.090959] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.738 [2024-02-14 19:11:34.090983] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.738 [2024-02-14 19:11:34.090993] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@197 -- # return 1 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.738 19:11:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.996 19:11:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:56.996 "name": "Existed_Raid", 00:10:56.996 "uuid": "dd9a0dc7-cb6c-11ee-af6b-4feeebbbadda", 00:10:56.996 "strip_size_kb": 64, 00:10:56.996 "state": "offline", 00:10:56.996 "raid_level": "concat", 00:10:56.996 "superblock": false, 00:10:56.996 "num_base_bdevs": 4, 00:10:56.996 "num_base_bdevs_discovered": 3, 00:10:56.996 "num_base_bdevs_operational": 3, 00:10:56.996 "base_bdevs_list": [ 00:10:56.996 { 00:10:56.996 "name": null, 00:10:56.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.996 "is_configured": false, 00:10:56.996 "data_offset": 0, 00:10:56.996 "data_size": 65536 00:10:56.996 }, 00:10:56.996 { 00:10:56.996 "name": "BaseBdev2", 00:10:56.996 "uuid": "dc52d4c1-cb6c-11ee-af6b-4feeebbbadda", 00:10:56.996 "is_configured": true, 00:10:56.996 "data_offset": 0, 00:10:56.996 "data_size": 65536 00:10:56.996 }, 00:10:56.996 { 00:10:56.997 "name": "BaseBdev3", 00:10:56.997 "uuid": "dcf8418c-cb6c-11ee-af6b-4feeebbbadda", 00:10:56.997 "is_configured": true, 00:10:56.997 "data_offset": 0, 00:10:56.997 "data_size": 65536 00:10:56.997 }, 00:10:56.997 { 00:10:56.997 "name": "BaseBdev4", 00:10:56.997 "uuid": "dd9a05b0-cb6c-11ee-af6b-4feeebbbadda", 00:10:56.997 "is_configured": true, 00:10:56.997 "data_offset": 0, 00:10:56.997 "data_size": 65536 00:10:56.997 } 00:10:56.997 ] 00:10:56.997 }' 00:10:56.997 19:11:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:56.997 19:11:34 -- common/autotest_common.sh@10 -- # set +x 00:10:57.254 19:11:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:10:57.254 
19:11:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:57.254 19:11:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:57.254 19:11:34 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.513 19:11:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:57.513 19:11:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.513 19:11:34 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:57.771 [2024-02-14 19:11:35.123591] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.771 19:11:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:57.771 19:11:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:57.771 19:11:35 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.771 19:11:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:58.049 19:11:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:58.049 19:11:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.049 19:11:35 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:58.352 [2024-02-14 19:11:35.532165] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:58.352 19:11:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:58.352 19:11:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:58.352 19:11:35 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.352 19:11:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:58.352 19:11:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:58.352 19:11:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:58.352 19:11:35 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:10:58.611 [2024-02-14 19:11:35.920817] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:58.611 [2024-02-14 19:11:35.920837] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82d7e2a00 name Existed_Raid, state offline 00:10:58.611 19:11:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:58.611 19:11:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:58.611 19:11:35 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.611 19:11:35 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:10:58.869 19:11:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:10:58.869 19:11:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:10:58.869 19:11:36 -- bdev/bdev_raid.sh@287 -- # killprocess 53304 00:10:58.869 19:11:36 -- common/autotest_common.sh@924 -- # '[' -z 53304 ']' 00:10:58.869 19:11:36 -- common/autotest_common.sh@928 -- # kill -0 53304 00:10:58.869 19:11:36 -- common/autotest_common.sh@929 -- # uname 00:10:58.869 19:11:36 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:10:58.869 19:11:36 -- common/autotest_common.sh@932 -- # ps -c -o command 53304 00:10:58.869 19:11:36 -- common/autotest_common.sh@932 -- # tail -1 00:10:58.869 19:11:36 -- common/autotest_common.sh@932 -- # 
process_name=bdev_svc 00:10:58.869 killing process with pid 53304 00:10:58.869 19:11:36 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:10:58.870 19:11:36 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 53304' 00:10:58.870 19:11:36 -- common/autotest_common.sh@943 -- # kill 53304 00:10:58.870 [2024-02-14 19:11:36.178204] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:58.870 [2024-02-14 19:11:36.178238] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.870 19:11:36 -- common/autotest_common.sh@948 -- # wait 53304 00:10:59.128 19:11:36 -- bdev/bdev_raid.sh@289 -- # return 0 00:10:59.128 00:10:59.128 real 0m9.907s 00:10:59.128 user 0m17.418s 00:10:59.128 sys 0m1.688s 00:10:59.128 ************************************ 00:10:59.128 END TEST raid_state_function_test 00:10:59.128 ************************************ 00:10:59.128 19:11:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:59.128 19:11:36 -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 19:11:36 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:59.128 19:11:36 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:10:59.128 19:11:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:59.128 19:11:36 -- common/autotest_common.sh@10 -- # set +x 00:10:59.128 ************************************ 00:10:59.128 START TEST raid_state_function_test_sb 00:10:59.128 ************************************ 00:10:59.128 19:11:36 -- common/autotest_common.sh@1102 -- # raid_state_function_test concat 4 true 00:10:59.128 19:11:36 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:10:59.128 19:11:36 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:10:59.128 19:11:36 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:10:59.128 19:11:36 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:10:59.128 19:11:36 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:10:59.128 19:11:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:59.128 19:11:36 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:10:59.128 19:11:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@213 -- # strip_size=64 
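The variables being set up here drive the superblock variant of the same state-machine test: the level is concat rather than raid1, so the strip size stays at 64 KiB, and superblock=true turns into the -s flag on the create call, which reserves the first 2048 of each base bdev's 65536 blocks for an on-disk superblock (hence data_offset 2048 / data_size 63488 in the bdev listings further down, versus 0 / 65536 in the run above). A sketch of the create call these arguments expand to, using the script and socket paths from this trace:

# Sketch of how the -z / -s arguments assembled here are consumed; the
# bdev_raid_create flags appear verbatim later in this trace.
rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
strip_size_create_arg='-z 64'   # strip size in KiB, only set for non-raid1 levels
superblock_create_arg='-s'      # write an SPDK superblock onto every base bdev
$rpc bdev_raid_create $strip_size_create_arg $superblock_create_arg \
    -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid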
00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@226 -- # raid_pid=53574 00:10:59.129 Process raid pid: 53574 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 53574' 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@228 -- # waitforlisten 53574 /var/tmp/spdk-raid.sock 00:10:59.129 19:11:36 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:59.129 19:11:36 -- common/autotest_common.sh@817 -- # '[' -z 53574 ']' 00:10:59.129 19:11:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:59.129 19:11:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:59.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:59.129 19:11:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:59.129 19:11:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:59.129 19:11:36 -- common/autotest_common.sh@10 -- # set +x 00:10:59.129 [2024-02-14 19:11:36.368489] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:10:59.129 [2024-02-14 19:11:36.368714] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:59.696 EAL: TSC is not safe to use in SMP mode 00:10:59.696 EAL: TSC is not invariant 00:10:59.696 [2024-02-14 19:11:36.816791] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.696 [2024-02-14 19:11:36.893734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.696 [2024-02-14 19:11:36.894165] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.696 [2024-02-14 19:11:36.894174] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.955 19:11:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:59.955 19:11:37 -- common/autotest_common.sh@850 -- # return 0 00:10:59.955 19:11:37 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:00.214 [2024-02-14 19:11:37.532378] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:00.214 [2024-02-14 19:11:37.532422] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:00.214 [2024-02-14 19:11:37.532426] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:00.214 [2024-02-14 19:11:37.532433] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:00.214 [2024-02-14 19:11:37.532436] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:00.214 [2024-02-14 19:11:37.532442] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:00.214 [2024-02-14 19:11:37.532445] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:00.214 [2024-02-14 19:11:37.532451] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 
doesn't exist now 00:11:00.214 19:11:37 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:00.214 19:11:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:00.214 19:11:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:00.214 19:11:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:00.214 19:11:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:00.214 19:11:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:00.214 19:11:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:00.214 19:11:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:00.214 19:11:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:00.214 19:11:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:00.214 19:11:37 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.214 19:11:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.472 19:11:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:00.472 "name": "Existed_Raid", 00:11:00.472 "uuid": "e0503656-cb6c-11ee-af6b-4feeebbbadda", 00:11:00.472 "strip_size_kb": 64, 00:11:00.472 "state": "configuring", 00:11:00.472 "raid_level": "concat", 00:11:00.472 "superblock": true, 00:11:00.472 "num_base_bdevs": 4, 00:11:00.472 "num_base_bdevs_discovered": 0, 00:11:00.472 "num_base_bdevs_operational": 4, 00:11:00.472 "base_bdevs_list": [ 00:11:00.472 { 00:11:00.472 "name": "BaseBdev1", 00:11:00.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.472 "is_configured": false, 00:11:00.472 "data_offset": 0, 00:11:00.472 "data_size": 0 00:11:00.472 }, 00:11:00.472 { 00:11:00.472 "name": "BaseBdev2", 00:11:00.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.472 "is_configured": false, 00:11:00.472 "data_offset": 0, 00:11:00.472 "data_size": 0 00:11:00.472 }, 00:11:00.472 { 00:11:00.472 "name": "BaseBdev3", 00:11:00.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.472 "is_configured": false, 00:11:00.472 "data_offset": 0, 00:11:00.472 "data_size": 0 00:11:00.472 }, 00:11:00.472 { 00:11:00.472 "name": "BaseBdev4", 00:11:00.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.472 "is_configured": false, 00:11:00.472 "data_offset": 0, 00:11:00.473 "data_size": 0 00:11:00.473 } 00:11:00.473 ] 00:11:00.473 }' 00:11:00.473 19:11:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:00.473 19:11:37 -- common/autotest_common.sh@10 -- # set +x 00:11:00.732 19:11:38 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:00.990 [2024-02-14 19:11:38.236365] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.990 [2024-02-14 19:11:38.236381] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5b9500 name Existed_Raid, state configuring 00:11:00.990 19:11:38 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:01.249 [2024-02-14 19:11:38.484379] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.249 [2024-02-14 19:11:38.484414] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.249 [2024-02-14 19:11:38.484417] 
bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.249 [2024-02-14 19:11:38.484424] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.249 [2024-02-14 19:11:38.484426] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:01.249 [2024-02-14 19:11:38.484448] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:01.249 [2024-02-14 19:11:38.484451] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:01.249 [2024-02-14 19:11:38.484457] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:01.249 19:11:38 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:01.249 [2024-02-14 19:11:38.665201] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.508 BaseBdev1 00:11:01.508 19:11:38 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:01.508 19:11:38 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:11:01.508 19:11:38 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:01.508 19:11:38 -- common/autotest_common.sh@887 -- # local i 00:11:01.508 19:11:38 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:01.508 19:11:38 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:01.508 19:11:38 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:01.767 19:11:38 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:01.767 [ 00:11:01.767 { 00:11:01.767 "name": "BaseBdev1", 00:11:01.767 "aliases": [ 00:11:01.767 "e0fcf143-cb6c-11ee-af6b-4feeebbbadda" 00:11:01.767 ], 00:11:01.767 "product_name": "Malloc disk", 00:11:01.767 "block_size": 512, 00:11:01.767 "num_blocks": 65536, 00:11:01.767 "uuid": "e0fcf143-cb6c-11ee-af6b-4feeebbbadda", 00:11:01.767 "assigned_rate_limits": { 00:11:01.767 "rw_ios_per_sec": 0, 00:11:01.767 "rw_mbytes_per_sec": 0, 00:11:01.767 "r_mbytes_per_sec": 0, 00:11:01.767 "w_mbytes_per_sec": 0 00:11:01.767 }, 00:11:01.767 "claimed": true, 00:11:01.767 "claim_type": "exclusive_write", 00:11:01.767 "zoned": false, 00:11:01.767 "supported_io_types": { 00:11:01.767 "read": true, 00:11:01.767 "write": true, 00:11:01.767 "unmap": true, 00:11:01.767 "write_zeroes": true, 00:11:01.767 "flush": true, 00:11:01.767 "reset": true, 00:11:01.767 "compare": false, 00:11:01.767 "compare_and_write": false, 00:11:01.767 "abort": true, 00:11:01.767 "nvme_admin": false, 00:11:01.767 "nvme_io": false 00:11:01.767 }, 00:11:01.767 "memory_domains": [ 00:11:01.767 { 00:11:01.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.767 "dma_device_type": 2 00:11:01.767 } 00:11:01.767 ], 00:11:01.767 "driver_specific": {} 00:11:01.767 } 00:11:01.767 ] 00:11:01.767 19:11:39 -- common/autotest_common.sh@893 -- # return 0 00:11:01.767 19:11:39 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.767 19:11:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:01.767 19:11:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:01.767 19:11:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:01.767 19:11:39 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:01.767 19:11:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:01.767 19:11:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:01.767 19:11:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:01.767 19:11:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:01.767 19:11:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:01.767 19:11:39 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:01.767 19:11:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.025 19:11:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:02.025 "name": "Existed_Raid", 00:11:02.025 "uuid": "e0e179fd-cb6c-11ee-af6b-4feeebbbadda", 00:11:02.025 "strip_size_kb": 64, 00:11:02.025 "state": "configuring", 00:11:02.025 "raid_level": "concat", 00:11:02.025 "superblock": true, 00:11:02.025 "num_base_bdevs": 4, 00:11:02.025 "num_base_bdevs_discovered": 1, 00:11:02.025 "num_base_bdevs_operational": 4, 00:11:02.025 "base_bdevs_list": [ 00:11:02.025 { 00:11:02.025 "name": "BaseBdev1", 00:11:02.025 "uuid": "e0fcf143-cb6c-11ee-af6b-4feeebbbadda", 00:11:02.025 "is_configured": true, 00:11:02.025 "data_offset": 2048, 00:11:02.025 "data_size": 63488 00:11:02.025 }, 00:11:02.025 { 00:11:02.025 "name": "BaseBdev2", 00:11:02.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.025 "is_configured": false, 00:11:02.025 "data_offset": 0, 00:11:02.025 "data_size": 0 00:11:02.025 }, 00:11:02.025 { 00:11:02.025 "name": "BaseBdev3", 00:11:02.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.025 "is_configured": false, 00:11:02.025 "data_offset": 0, 00:11:02.025 "data_size": 0 00:11:02.025 }, 00:11:02.025 { 00:11:02.025 "name": "BaseBdev4", 00:11:02.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.025 "is_configured": false, 00:11:02.025 "data_offset": 0, 00:11:02.025 "data_size": 0 00:11:02.025 } 00:11:02.025 ] 00:11:02.025 }' 00:11:02.025 19:11:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:02.025 19:11:39 -- common/autotest_common.sh@10 -- # set +x 00:11:02.284 19:11:39 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:02.543 [2024-02-14 19:11:39.872402] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.543 [2024-02-14 19:11:39.872423] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5b9500 name Existed_Raid, state configuring 00:11:02.543 19:11:39 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:11:02.543 19:11:39 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:02.802 19:11:40 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:03.060 BaseBdev1 00:11:03.060 19:11:40 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:11:03.060 19:11:40 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:11:03.060 19:11:40 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:03.060 19:11:40 -- common/autotest_common.sh@887 -- # local i 00:11:03.060 19:11:40 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:03.060 19:11:40 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:03.060 19:11:40 -- 
common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:03.318 19:11:40 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:03.318 [ 00:11:03.318 { 00:11:03.318 "name": "BaseBdev1", 00:11:03.318 "aliases": [ 00:11:03.318 "e1f74748-cb6c-11ee-af6b-4feeebbbadda" 00:11:03.318 ], 00:11:03.318 "product_name": "Malloc disk", 00:11:03.318 "block_size": 512, 00:11:03.318 "num_blocks": 65536, 00:11:03.318 "uuid": "e1f74748-cb6c-11ee-af6b-4feeebbbadda", 00:11:03.318 "assigned_rate_limits": { 00:11:03.318 "rw_ios_per_sec": 0, 00:11:03.318 "rw_mbytes_per_sec": 0, 00:11:03.318 "r_mbytes_per_sec": 0, 00:11:03.318 "w_mbytes_per_sec": 0 00:11:03.318 }, 00:11:03.318 "claimed": false, 00:11:03.318 "zoned": false, 00:11:03.318 "supported_io_types": { 00:11:03.318 "read": true, 00:11:03.318 "write": true, 00:11:03.318 "unmap": true, 00:11:03.318 "write_zeroes": true, 00:11:03.318 "flush": true, 00:11:03.318 "reset": true, 00:11:03.318 "compare": false, 00:11:03.318 "compare_and_write": false, 00:11:03.318 "abort": true, 00:11:03.318 "nvme_admin": false, 00:11:03.318 "nvme_io": false 00:11:03.318 }, 00:11:03.318 "memory_domains": [ 00:11:03.318 { 00:11:03.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.318 "dma_device_type": 2 00:11:03.318 } 00:11:03.318 ], 00:11:03.318 "driver_specific": {} 00:11:03.318 } 00:11:03.318 ] 00:11:03.319 19:11:40 -- common/autotest_common.sh@893 -- # return 0 00:11:03.319 19:11:40 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:03.577 [2024-02-14 19:11:40.924966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.577 [2024-02-14 19:11:40.925367] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.577 [2024-02-14 19:11:40.925416] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.577 [2024-02-14 19:11:40.925421] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:03.577 [2024-02-14 19:11:40.925428] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:03.577 [2024-02-14 19:11:40.925431] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:03.577 [2024-02-14 19:11:40.925437] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:03.577 19:11:40 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:03.577 19:11:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:03.577 19:11:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:03.577 19:11:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:03.577 19:11:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:03.577 19:11:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:03.577 19:11:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:03.577 19:11:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:03.577 19:11:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:03.577 19:11:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:03.577 19:11:40 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:03.577 19:11:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:03.577 19:11:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.577 19:11:40 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.835 19:11:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:03.835 "name": "Existed_Raid", 00:11:03.835 "uuid": "e255e163-cb6c-11ee-af6b-4feeebbbadda", 00:11:03.835 "strip_size_kb": 64, 00:11:03.835 "state": "configuring", 00:11:03.835 "raid_level": "concat", 00:11:03.835 "superblock": true, 00:11:03.835 "num_base_bdevs": 4, 00:11:03.835 "num_base_bdevs_discovered": 1, 00:11:03.835 "num_base_bdevs_operational": 4, 00:11:03.835 "base_bdevs_list": [ 00:11:03.835 { 00:11:03.835 "name": "BaseBdev1", 00:11:03.835 "uuid": "e1f74748-cb6c-11ee-af6b-4feeebbbadda", 00:11:03.835 "is_configured": true, 00:11:03.835 "data_offset": 2048, 00:11:03.835 "data_size": 63488 00:11:03.835 }, 00:11:03.835 { 00:11:03.835 "name": "BaseBdev2", 00:11:03.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.835 "is_configured": false, 00:11:03.835 "data_offset": 0, 00:11:03.835 "data_size": 0 00:11:03.835 }, 00:11:03.835 { 00:11:03.835 "name": "BaseBdev3", 00:11:03.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.835 "is_configured": false, 00:11:03.835 "data_offset": 0, 00:11:03.835 "data_size": 0 00:11:03.835 }, 00:11:03.835 { 00:11:03.835 "name": "BaseBdev4", 00:11:03.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.835 "is_configured": false, 00:11:03.835 "data_offset": 0, 00:11:03.835 "data_size": 0 00:11:03.835 } 00:11:03.835 ] 00:11:03.835 }' 00:11:03.835 19:11:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:03.835 19:11:41 -- common/autotest_common.sh@10 -- # set +x 00:11:04.094 19:11:41 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:04.352 [2024-02-14 19:11:41.613084] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.352 BaseBdev2 00:11:04.352 19:11:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:04.352 19:11:41 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:11:04.352 19:11:41 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:04.352 19:11:41 -- common/autotest_common.sh@887 -- # local i 00:11:04.352 19:11:41 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:04.352 19:11:41 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:04.352 19:11:41 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:04.611 19:11:41 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:04.869 [ 00:11:04.869 { 00:11:04.869 "name": "BaseBdev2", 00:11:04.869 "aliases": [ 00:11:04.869 "e2bedddb-cb6c-11ee-af6b-4feeebbbadda" 00:11:04.869 ], 00:11:04.869 "product_name": "Malloc disk", 00:11:04.869 "block_size": 512, 00:11:04.869 "num_blocks": 65536, 00:11:04.869 "uuid": "e2bedddb-cb6c-11ee-af6b-4feeebbbadda", 00:11:04.869 "assigned_rate_limits": { 00:11:04.869 "rw_ios_per_sec": 0, 00:11:04.869 "rw_mbytes_per_sec": 0, 00:11:04.869 "r_mbytes_per_sec": 0, 00:11:04.869 "w_mbytes_per_sec": 0 00:11:04.869 }, 00:11:04.869 "claimed": true, 
00:11:04.869 "claim_type": "exclusive_write", 00:11:04.869 "zoned": false, 00:11:04.869 "supported_io_types": { 00:11:04.869 "read": true, 00:11:04.869 "write": true, 00:11:04.869 "unmap": true, 00:11:04.869 "write_zeroes": true, 00:11:04.869 "flush": true, 00:11:04.869 "reset": true, 00:11:04.869 "compare": false, 00:11:04.869 "compare_and_write": false, 00:11:04.869 "abort": true, 00:11:04.869 "nvme_admin": false, 00:11:04.869 "nvme_io": false 00:11:04.869 }, 00:11:04.869 "memory_domains": [ 00:11:04.869 { 00:11:04.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.869 "dma_device_type": 2 00:11:04.869 } 00:11:04.869 ], 00:11:04.869 "driver_specific": {} 00:11:04.869 } 00:11:04.869 ] 00:11:04.869 19:11:42 -- common/autotest_common.sh@893 -- # return 0 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.869 19:11:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.128 19:11:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:05.128 "name": "Existed_Raid", 00:11:05.128 "uuid": "e255e163-cb6c-11ee-af6b-4feeebbbadda", 00:11:05.128 "strip_size_kb": 64, 00:11:05.128 "state": "configuring", 00:11:05.128 "raid_level": "concat", 00:11:05.128 "superblock": true, 00:11:05.128 "num_base_bdevs": 4, 00:11:05.128 "num_base_bdevs_discovered": 2, 00:11:05.128 "num_base_bdevs_operational": 4, 00:11:05.128 "base_bdevs_list": [ 00:11:05.128 { 00:11:05.128 "name": "BaseBdev1", 00:11:05.128 "uuid": "e1f74748-cb6c-11ee-af6b-4feeebbbadda", 00:11:05.128 "is_configured": true, 00:11:05.128 "data_offset": 2048, 00:11:05.128 "data_size": 63488 00:11:05.128 }, 00:11:05.128 { 00:11:05.128 "name": "BaseBdev2", 00:11:05.128 "uuid": "e2bedddb-cb6c-11ee-af6b-4feeebbbadda", 00:11:05.128 "is_configured": true, 00:11:05.128 "data_offset": 2048, 00:11:05.128 "data_size": 63488 00:11:05.128 }, 00:11:05.128 { 00:11:05.128 "name": "BaseBdev3", 00:11:05.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.128 "is_configured": false, 00:11:05.128 "data_offset": 0, 00:11:05.128 "data_size": 0 00:11:05.128 }, 00:11:05.128 { 00:11:05.128 "name": "BaseBdev4", 00:11:05.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.128 "is_configured": false, 00:11:05.128 "data_offset": 0, 00:11:05.128 "data_size": 0 00:11:05.128 } 00:11:05.128 ] 00:11:05.128 }' 00:11:05.128 19:11:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:05.128 19:11:42 -- common/autotest_common.sh@10 -- # set +x 00:11:05.386 19:11:42 -- bdev/bdev_raid.sh@256 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:05.644 [2024-02-14 19:11:42.853099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.644 BaseBdev3 00:11:05.644 19:11:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:11:05.644 19:11:42 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:11:05.644 19:11:42 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:05.644 19:11:42 -- common/autotest_common.sh@887 -- # local i 00:11:05.644 19:11:42 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:05.644 19:11:42 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:05.644 19:11:42 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:05.902 19:11:43 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.161 [ 00:11:06.161 { 00:11:06.161 "name": "BaseBdev3", 00:11:06.161 "aliases": [ 00:11:06.161 "e37c148b-cb6c-11ee-af6b-4feeebbbadda" 00:11:06.161 ], 00:11:06.161 "product_name": "Malloc disk", 00:11:06.161 "block_size": 512, 00:11:06.161 "num_blocks": 65536, 00:11:06.161 "uuid": "e37c148b-cb6c-11ee-af6b-4feeebbbadda", 00:11:06.161 "assigned_rate_limits": { 00:11:06.161 "rw_ios_per_sec": 0, 00:11:06.161 "rw_mbytes_per_sec": 0, 00:11:06.161 "r_mbytes_per_sec": 0, 00:11:06.161 "w_mbytes_per_sec": 0 00:11:06.161 }, 00:11:06.161 "claimed": true, 00:11:06.161 "claim_type": "exclusive_write", 00:11:06.161 "zoned": false, 00:11:06.161 "supported_io_types": { 00:11:06.161 "read": true, 00:11:06.161 "write": true, 00:11:06.161 "unmap": true, 00:11:06.161 "write_zeroes": true, 00:11:06.161 "flush": true, 00:11:06.161 "reset": true, 00:11:06.161 "compare": false, 00:11:06.161 "compare_and_write": false, 00:11:06.161 "abort": true, 00:11:06.161 "nvme_admin": false, 00:11:06.161 "nvme_io": false 00:11:06.161 }, 00:11:06.161 "memory_domains": [ 00:11:06.161 { 00:11:06.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.161 "dma_device_type": 2 00:11:06.161 } 00:11:06.161 ], 00:11:06.161 "driver_specific": {} 00:11:06.161 } 00:11:06.161 ] 00:11:06.161 19:11:43 -- common/autotest_common.sh@893 -- # return 0 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:06.161 "name": "Existed_Raid", 00:11:06.161 "uuid": "e255e163-cb6c-11ee-af6b-4feeebbbadda", 00:11:06.161 "strip_size_kb": 64, 00:11:06.161 "state": "configuring", 00:11:06.161 "raid_level": "concat", 00:11:06.161 "superblock": true, 00:11:06.161 "num_base_bdevs": 4, 00:11:06.161 "num_base_bdevs_discovered": 3, 00:11:06.161 "num_base_bdevs_operational": 4, 00:11:06.161 "base_bdevs_list": [ 00:11:06.161 { 00:11:06.161 "name": "BaseBdev1", 00:11:06.161 "uuid": "e1f74748-cb6c-11ee-af6b-4feeebbbadda", 00:11:06.161 "is_configured": true, 00:11:06.161 "data_offset": 2048, 00:11:06.161 "data_size": 63488 00:11:06.161 }, 00:11:06.161 { 00:11:06.161 "name": "BaseBdev2", 00:11:06.161 "uuid": "e2bedddb-cb6c-11ee-af6b-4feeebbbadda", 00:11:06.161 "is_configured": true, 00:11:06.161 "data_offset": 2048, 00:11:06.161 "data_size": 63488 00:11:06.161 }, 00:11:06.161 { 00:11:06.161 "name": "BaseBdev3", 00:11:06.161 "uuid": "e37c148b-cb6c-11ee-af6b-4feeebbbadda", 00:11:06.161 "is_configured": true, 00:11:06.161 "data_offset": 2048, 00:11:06.161 "data_size": 63488 00:11:06.161 }, 00:11:06.161 { 00:11:06.161 "name": "BaseBdev4", 00:11:06.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.161 "is_configured": false, 00:11:06.161 "data_offset": 0, 00:11:06.161 "data_size": 0 00:11:06.161 } 00:11:06.161 ] 00:11:06.161 }' 00:11:06.161 19:11:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:06.161 19:11:43 -- common/autotest_common.sh@10 -- # set +x 00:11:06.419 19:11:43 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:11:06.677 [2024-02-14 19:11:44.065136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:06.677 [2024-02-14 19:11:44.065186] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b5b9a00 00:11:06.677 [2024-02-14 19:11:44.065190] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:06.677 [2024-02-14 19:11:44.065206] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b61cec0 00:11:06.677 [2024-02-14 19:11:44.065236] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b5b9a00 00:11:06.677 [2024-02-14 19:11:44.065239] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b5b9a00 00:11:06.677 [2024-02-14 19:11:44.065253] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.677 BaseBdev4 00:11:06.677 19:11:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:11:06.677 19:11:44 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:11:06.677 19:11:44 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:06.677 19:11:44 -- common/autotest_common.sh@887 -- # local i 00:11:06.677 19:11:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:06.677 19:11:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:06.677 19:11:44 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:06.936 19:11:44 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:07.195 [ 00:11:07.195 { 00:11:07.195 "name": "BaseBdev4", 00:11:07.195 "aliases": [ 00:11:07.195 
"e43505b7-cb6c-11ee-af6b-4feeebbbadda" 00:11:07.195 ], 00:11:07.195 "product_name": "Malloc disk", 00:11:07.195 "block_size": 512, 00:11:07.195 "num_blocks": 65536, 00:11:07.195 "uuid": "e43505b7-cb6c-11ee-af6b-4feeebbbadda", 00:11:07.195 "assigned_rate_limits": { 00:11:07.195 "rw_ios_per_sec": 0, 00:11:07.195 "rw_mbytes_per_sec": 0, 00:11:07.195 "r_mbytes_per_sec": 0, 00:11:07.195 "w_mbytes_per_sec": 0 00:11:07.195 }, 00:11:07.195 "claimed": true, 00:11:07.195 "claim_type": "exclusive_write", 00:11:07.195 "zoned": false, 00:11:07.195 "supported_io_types": { 00:11:07.195 "read": true, 00:11:07.195 "write": true, 00:11:07.195 "unmap": true, 00:11:07.195 "write_zeroes": true, 00:11:07.195 "flush": true, 00:11:07.195 "reset": true, 00:11:07.195 "compare": false, 00:11:07.195 "compare_and_write": false, 00:11:07.195 "abort": true, 00:11:07.195 "nvme_admin": false, 00:11:07.195 "nvme_io": false 00:11:07.195 }, 00:11:07.195 "memory_domains": [ 00:11:07.195 { 00:11:07.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.195 "dma_device_type": 2 00:11:07.195 } 00:11:07.195 ], 00:11:07.195 "driver_specific": {} 00:11:07.195 } 00:11:07.195 ] 00:11:07.195 19:11:44 -- common/autotest_common.sh@893 -- # return 0 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.195 19:11:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.454 19:11:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:07.454 "name": "Existed_Raid", 00:11:07.454 "uuid": "e255e163-cb6c-11ee-af6b-4feeebbbadda", 00:11:07.454 "strip_size_kb": 64, 00:11:07.454 "state": "online", 00:11:07.454 "raid_level": "concat", 00:11:07.454 "superblock": true, 00:11:07.454 "num_base_bdevs": 4, 00:11:07.454 "num_base_bdevs_discovered": 4, 00:11:07.454 "num_base_bdevs_operational": 4, 00:11:07.454 "base_bdevs_list": [ 00:11:07.454 { 00:11:07.454 "name": "BaseBdev1", 00:11:07.454 "uuid": "e1f74748-cb6c-11ee-af6b-4feeebbbadda", 00:11:07.454 "is_configured": true, 00:11:07.454 "data_offset": 2048, 00:11:07.454 "data_size": 63488 00:11:07.454 }, 00:11:07.454 { 00:11:07.454 "name": "BaseBdev2", 00:11:07.454 "uuid": "e2bedddb-cb6c-11ee-af6b-4feeebbbadda", 00:11:07.454 "is_configured": true, 00:11:07.454 "data_offset": 2048, 00:11:07.454 "data_size": 63488 00:11:07.454 }, 00:11:07.454 { 00:11:07.454 "name": "BaseBdev3", 00:11:07.454 "uuid": "e37c148b-cb6c-11ee-af6b-4feeebbbadda", 00:11:07.454 "is_configured": true, 00:11:07.454 "data_offset": 2048, 00:11:07.454 "data_size": 63488 00:11:07.454 
}, 00:11:07.454 { 00:11:07.454 "name": "BaseBdev4", 00:11:07.454 "uuid": "e43505b7-cb6c-11ee-af6b-4feeebbbadda", 00:11:07.454 "is_configured": true, 00:11:07.454 "data_offset": 2048, 00:11:07.454 "data_size": 63488 00:11:07.454 } 00:11:07.454 ] 00:11:07.454 }' 00:11:07.454 19:11:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:07.454 19:11:44 -- common/autotest_common.sh@10 -- # set +x 00:11:07.713 19:11:44 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:07.972 [2024-02-14 19:11:45.165092] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:07.972 [2024-02-14 19:11:45.165111] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.972 [2024-02-14 19:11:45.165119] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:07.972 "name": "Existed_Raid", 00:11:07.972 "uuid": "e255e163-cb6c-11ee-af6b-4feeebbbadda", 00:11:07.972 "strip_size_kb": 64, 00:11:07.972 "state": "offline", 00:11:07.972 "raid_level": "concat", 00:11:07.972 "superblock": true, 00:11:07.972 "num_base_bdevs": 4, 00:11:07.972 "num_base_bdevs_discovered": 3, 00:11:07.972 "num_base_bdevs_operational": 3, 00:11:07.972 "base_bdevs_list": [ 00:11:07.972 { 00:11:07.972 "name": null, 00:11:07.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.972 "is_configured": false, 00:11:07.972 "data_offset": 2048, 00:11:07.972 "data_size": 63488 00:11:07.972 }, 00:11:07.972 { 00:11:07.972 "name": "BaseBdev2", 00:11:07.972 "uuid": "e2bedddb-cb6c-11ee-af6b-4feeebbbadda", 00:11:07.972 "is_configured": true, 00:11:07.972 "data_offset": 2048, 00:11:07.972 "data_size": 63488 00:11:07.972 }, 00:11:07.972 { 00:11:07.972 "name": "BaseBdev3", 00:11:07.972 "uuid": "e37c148b-cb6c-11ee-af6b-4feeebbbadda", 00:11:07.972 "is_configured": true, 00:11:07.972 "data_offset": 2048, 00:11:07.972 "data_size": 63488 00:11:07.972 }, 00:11:07.972 { 00:11:07.972 "name": "BaseBdev4", 00:11:07.972 "uuid": "e43505b7-cb6c-11ee-af6b-4feeebbbadda", 
00:11:07.972 "is_configured": true, 00:11:07.972 "data_offset": 2048, 00:11:07.972 "data_size": 63488 00:11:07.972 } 00:11:07.972 ] 00:11:07.972 }' 00:11:07.972 19:11:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:07.972 19:11:45 -- common/autotest_common.sh@10 -- # set +x 00:11:08.231 19:11:45 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:08.231 19:11:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:08.231 19:11:45 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.231 19:11:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:08.490 19:11:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:08.490 19:11:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.490 19:11:45 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:08.750 [2024-02-14 19:11:45.921755] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.750 19:11:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:08.750 19:11:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:08.750 19:11:45 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.750 19:11:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:08.750 19:11:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:08.750 19:11:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.750 19:11:46 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:09.009 [2024-02-14 19:11:46.274334] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:09.009 19:11:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:09.009 19:11:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:09.009 19:11:46 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.009 19:11:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:09.268 19:11:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:09.268 19:11:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:09.268 19:11:46 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:11:09.268 [2024-02-14 19:11:46.678957] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:09.268 [2024-02-14 19:11:46.678976] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b5b9a00 name Existed_Raid, state offline 00:11:09.527 19:11:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:09.527 19:11:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:09.527 19:11:46 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.527 19:11:46 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:09.527 19:11:46 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:09.527 19:11:46 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:09.527 19:11:46 -- bdev/bdev_raid.sh@287 -- # killprocess 53574 00:11:09.527 19:11:46 -- common/autotest_common.sh@924 -- # '[' -z 53574 ']' 00:11:09.527 19:11:46 -- common/autotest_common.sh@928 -- # kill -0 53574 00:11:09.527 
19:11:46 -- common/autotest_common.sh@929 -- # uname 00:11:09.527 19:11:46 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:11:09.527 19:11:46 -- common/autotest_common.sh@932 -- # ps -c -o command 53574 00:11:09.527 19:11:46 -- common/autotest_common.sh@932 -- # tail -1 00:11:09.527 19:11:46 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:11:09.527 19:11:46 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:11:09.527 19:11:46 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 53574' 00:11:09.527 killing process with pid 53574 00:11:09.527 19:11:46 -- common/autotest_common.sh@943 -- # kill 53574 00:11:09.527 [2024-02-14 19:11:46.871551] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.527 [2024-02-14 19:11:46.871584] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.527 19:11:46 -- common/autotest_common.sh@948 -- # wait 53574 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:09.786 00:11:09.786 real 0m10.649s 00:11:09.786 user 0m18.807s 00:11:09.786 sys 0m1.752s 00:11:09.786 19:11:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:09.786 19:11:47 -- common/autotest_common.sh@10 -- # set +x 00:11:09.786 ************************************ 00:11:09.786 END TEST raid_state_function_test_sb 00:11:09.786 ************************************ 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:09.786 19:11:47 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:11:09.786 19:11:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:09.786 19:11:47 -- common/autotest_common.sh@10 -- # set +x 00:11:09.786 ************************************ 00:11:09.786 START TEST raid_superblock_test 00:11:09.786 ************************************ 00:11:09.786 19:11:47 -- common/autotest_common.sh@1102 -- # raid_superblock_test concat 4 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@357 -- # raid_pid=53847 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@358 -- # waitforlisten 53847 /var/tmp/spdk-raid.sock 00:11:09.786 19:11:47 -- common/autotest_common.sh@817 -- # '[' -z 53847 ']' 00:11:09.786 19:11:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:09.786 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:09.786 19:11:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:09.786 19:11:47 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:09.786 19:11:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:09.786 19:11:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:09.786 19:11:47 -- common/autotest_common.sh@10 -- # set +x 00:11:09.786 [2024-02-14 19:11:47.060088] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:11:09.786 [2024-02-14 19:11:47.060249] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:10.410 EAL: TSC is not safe to use in SMP mode 00:11:10.410 EAL: TSC is not invariant 00:11:10.410 [2024-02-14 19:11:47.512849] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.410 [2024-02-14 19:11:47.589926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.410 [2024-02-14 19:11:47.590383] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.410 [2024-02-14 19:11:47.590388] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.670 19:11:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:10.670 19:11:47 -- common/autotest_common.sh@850 -- # return 0 00:11:10.670 19:11:47 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:11:10.670 19:11:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:10.670 19:11:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:11:10.670 19:11:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:11:10.670 19:11:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:10.670 19:11:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:10.670 19:11:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:10.670 19:11:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:10.670 19:11:47 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:10.929 malloc1 00:11:10.929 19:11:48 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:11.187 [2024-02-14 19:11:48.416690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:11.187 [2024-02-14 19:11:48.416750] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.187 [2024-02-14 19:11:48.417231] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8295c5780 00:11:11.187 [2024-02-14 19:11:48.417253] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.187 [2024-02-14 19:11:48.417937] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.187 [2024-02-14 19:11:48.417967] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:11.187 pt1 00:11:11.187 19:11:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:11.187 19:11:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:11.187 19:11:48 -- bdev/bdev_raid.sh@362 -- # local 
bdev_malloc=malloc2 00:11:11.187 19:11:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:11:11.187 19:11:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:11.187 19:11:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:11.187 19:11:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:11.187 19:11:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:11.187 19:11:48 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:11.187 malloc2 00:11:11.446 19:11:48 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:11.446 [2024-02-14 19:11:48.816697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:11.446 [2024-02-14 19:11:48.816736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.446 [2024-02-14 19:11:48.816757] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8295c5c80 00:11:11.446 [2024-02-14 19:11:48.816764] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.446 [2024-02-14 19:11:48.817190] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.446 [2024-02-14 19:11:48.817210] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:11.446 pt2 00:11:11.446 19:11:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:11.446 19:11:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:11.446 19:11:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:11:11.446 19:11:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:11:11.446 19:11:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:11.446 19:11:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:11.446 19:11:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:11.446 19:11:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:11.446 19:11:48 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:11.705 malloc3 00:11:11.705 19:11:49 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:11.965 [2024-02-14 19:11:49.184702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:11.965 [2024-02-14 19:11:49.184737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.965 [2024-02-14 19:11:49.184759] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8295c6180 00:11:11.965 [2024-02-14 19:11:49.184765] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.965 [2024-02-14 19:11:49.185154] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.965 [2024-02-14 19:11:49.185173] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:11.965 pt3 00:11:11.965 19:11:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:11.965 19:11:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:11.965 19:11:49 -- bdev/bdev_raid.sh@362 -- # local 
bdev_malloc=malloc4 00:11:11.965 19:11:49 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:11:11.965 19:11:49 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:11.965 19:11:49 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:11.965 19:11:49 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:11.965 19:11:49 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:11.965 19:11:49 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:11:11.965 malloc4 00:11:11.965 19:11:49 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:12.224 [2024-02-14 19:11:49.520710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:12.224 [2024-02-14 19:11:49.520748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.224 [2024-02-14 19:11:49.520770] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8295c6680 00:11:12.224 [2024-02-14 19:11:49.520777] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.224 [2024-02-14 19:11:49.521179] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.224 [2024-02-14 19:11:49.521210] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:12.224 pt4 00:11:12.224 19:11:49 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:12.224 19:11:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:12.224 19:11:49 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:11:12.483 [2024-02-14 19:11:49.700719] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:12.483 [2024-02-14 19:11:49.701120] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:12.483 [2024-02-14 19:11:49.701132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:12.483 [2024-02-14 19:11:49.701141] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:12.483 [2024-02-14 19:11:49.701182] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x8295c6900 00:11:12.483 [2024-02-14 19:11:49.701186] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:12.483 [2024-02-14 19:11:49.701212] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829628e20 00:11:12.483 [2024-02-14 19:11:49.701262] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x8295c6900 00:11:12.483 [2024-02-14 19:11:49.701265] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x8295c6900 00:11:12.483 [2024-02-14 19:11:49.701284] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.483 19:11:49 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:12.483 19:11:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:12.483 19:11:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:12.483 19:11:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:12.483 19:11:49 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:11:12.483 19:11:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:12.483 19:11:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:12.483 19:11:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:12.483 19:11:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:12.483 19:11:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:12.483 19:11:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.483 19:11:49 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.742 19:11:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:12.742 "name": "raid_bdev1", 00:11:12.742 "uuid": "e790f434-cb6c-11ee-af6b-4feeebbbadda", 00:11:12.742 "strip_size_kb": 64, 00:11:12.742 "state": "online", 00:11:12.742 "raid_level": "concat", 00:11:12.742 "superblock": true, 00:11:12.742 "num_base_bdevs": 4, 00:11:12.742 "num_base_bdevs_discovered": 4, 00:11:12.743 "num_base_bdevs_operational": 4, 00:11:12.743 "base_bdevs_list": [ 00:11:12.743 { 00:11:12.743 "name": "pt1", 00:11:12.743 "uuid": "e52f1bfb-6370-f050-b674-c8210aaf0aff", 00:11:12.743 "is_configured": true, 00:11:12.743 "data_offset": 2048, 00:11:12.743 "data_size": 63488 00:11:12.743 }, 00:11:12.743 { 00:11:12.743 "name": "pt2", 00:11:12.743 "uuid": "5ba2fb80-fab9-615c-b5fe-e644a58b1d3d", 00:11:12.743 "is_configured": true, 00:11:12.743 "data_offset": 2048, 00:11:12.743 "data_size": 63488 00:11:12.743 }, 00:11:12.743 { 00:11:12.743 "name": "pt3", 00:11:12.743 "uuid": "22e314b4-f4a4-6652-872e-2798f8349deb", 00:11:12.743 "is_configured": true, 00:11:12.743 "data_offset": 2048, 00:11:12.743 "data_size": 63488 00:11:12.743 }, 00:11:12.743 { 00:11:12.743 "name": "pt4", 00:11:12.743 "uuid": "b203d427-3ee3-c952-b241-207e258fca9a", 00:11:12.743 "is_configured": true, 00:11:12.743 "data_offset": 2048, 00:11:12.743 "data_size": 63488 00:11:12.743 } 00:11:12.743 ] 00:11:12.743 }' 00:11:12.743 19:11:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:12.743 19:11:49 -- common/autotest_common.sh@10 -- # set +x 00:11:13.001 19:11:50 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:13.001 19:11:50 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:11:13.259 [2024-02-14 19:11:50.464743] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.259 19:11:50 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e790f434-cb6c-11ee-af6b-4feeebbbadda 00:11:13.260 19:11:50 -- bdev/bdev_raid.sh@380 -- # '[' -z e790f434-cb6c-11ee-af6b-4feeebbbadda ']' 00:11:13.260 19:11:50 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:13.260 [2024-02-14 19:11:50.636718] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:13.260 [2024-02-14 19:11:50.636730] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.260 [2024-02-14 19:11:50.636740] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.260 [2024-02-14 19:11:50.636765] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.260 [2024-02-14 19:11:50.636768] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8295c6900 name raid_bdev1, state offline 00:11:13.260 
19:11:50 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.260 19:11:50 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:11:13.519 19:11:50 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:11:13.519 19:11:50 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:11:13.519 19:11:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:13.519 19:11:50 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:13.779 19:11:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:13.779 19:11:51 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:14.038 19:11:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.038 19:11:51 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:14.038 19:11:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.038 19:11:51 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:11:14.297 19:11:51 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:14.297 19:11:51 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:14.297 19:11:51 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:11:14.297 19:11:51 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:11:14.297 19:11:51 -- common/autotest_common.sh@638 -- # local es=0 00:11:14.297 19:11:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:11:14.297 19:11:51 -- common/autotest_common.sh@626 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:14.297 19:11:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:14.297 19:11:51 -- common/autotest_common.sh@630 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:14.297 19:11:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:14.297 19:11:51 -- common/autotest_common.sh@632 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:14.297 19:11:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:14.297 19:11:51 -- common/autotest_common.sh@632 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:14.297 19:11:51 -- common/autotest_common.sh@632 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:14.297 19:11:51 -- common/autotest_common.sh@641 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:11:14.556 [2024-02-14 19:11:51.924752] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:14.556 [2024-02-14 19:11:51.925203] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:14.556 [2024-02-14 19:11:51.925212] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 
00:11:14.556 [2024-02-14 19:11:51.925218] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:14.556 [2024-02-14 19:11:51.925228] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:11:14.556 [2024-02-14 19:11:51.925255] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:11:14.556 [2024-02-14 19:11:51.925264] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:11:14.556 [2024-02-14 19:11:51.925271] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:11:14.556 [2024-02-14 19:11:51.925277] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:14.556 [2024-02-14 19:11:51.925281] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8295c6680 name raid_bdev1, state configuring 00:11:14.556 request: 00:11:14.556 { 00:11:14.556 "name": "raid_bdev1", 00:11:14.556 "raid_level": "concat", 00:11:14.556 "base_bdevs": [ 00:11:14.556 "malloc1", 00:11:14.556 "malloc2", 00:11:14.556 "malloc3", 00:11:14.556 "malloc4" 00:11:14.556 ], 00:11:14.556 "superblock": false, 00:11:14.556 "strip_size_kb": 64, 00:11:14.556 "method": "bdev_raid_create", 00:11:14.556 "req_id": 1 00:11:14.556 } 00:11:14.556 Got JSON-RPC error response 00:11:14.556 response: 00:11:14.556 { 00:11:14.556 "code": -17, 00:11:14.556 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:14.557 } 00:11:14.557 19:11:51 -- common/autotest_common.sh@641 -- # es=1 00:11:14.557 19:11:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:14.557 19:11:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:14.557 19:11:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:14.557 19:11:51 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:14.557 19:11:51 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:11:14.815 19:11:52 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:11:14.815 19:11:52 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:11:14.815 19:11:52 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:15.074 [2024-02-14 19:11:52.252754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:15.074 [2024-02-14 19:11:52.252786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.074 [2024-02-14 19:11:52.252824] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8295c6180 00:11:15.074 [2024-02-14 19:11:52.252830] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.074 [2024-02-14 19:11:52.253301] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.074 [2024-02-14 19:11:52.253321] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:15.074 [2024-02-14 19:11:52.253337] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:11:15.074 [2024-02-14 19:11:52.253346] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:15.074 pt1 00:11:15.074 19:11:52 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 
4 00:11:15.074 19:11:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:15.074 19:11:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:15.074 19:11:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:15.074 19:11:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:15.074 19:11:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:15.074 19:11:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:15.074 19:11:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:15.074 19:11:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:15.074 19:11:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:15.074 19:11:52 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.074 19:11:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.333 19:11:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:15.333 "name": "raid_bdev1", 00:11:15.333 "uuid": "e790f434-cb6c-11ee-af6b-4feeebbbadda", 00:11:15.333 "strip_size_kb": 64, 00:11:15.333 "state": "configuring", 00:11:15.333 "raid_level": "concat", 00:11:15.333 "superblock": true, 00:11:15.333 "num_base_bdevs": 4, 00:11:15.333 "num_base_bdevs_discovered": 1, 00:11:15.333 "num_base_bdevs_operational": 4, 00:11:15.333 "base_bdevs_list": [ 00:11:15.333 { 00:11:15.333 "name": "pt1", 00:11:15.333 "uuid": "e52f1bfb-6370-f050-b674-c8210aaf0aff", 00:11:15.333 "is_configured": true, 00:11:15.333 "data_offset": 2048, 00:11:15.333 "data_size": 63488 00:11:15.333 }, 00:11:15.333 { 00:11:15.333 "name": null, 00:11:15.333 "uuid": "5ba2fb80-fab9-615c-b5fe-e644a58b1d3d", 00:11:15.333 "is_configured": false, 00:11:15.333 "data_offset": 2048, 00:11:15.333 "data_size": 63488 00:11:15.333 }, 00:11:15.333 { 00:11:15.333 "name": null, 00:11:15.333 "uuid": "22e314b4-f4a4-6652-872e-2798f8349deb", 00:11:15.333 "is_configured": false, 00:11:15.333 "data_offset": 2048, 00:11:15.333 "data_size": 63488 00:11:15.333 }, 00:11:15.333 { 00:11:15.333 "name": null, 00:11:15.333 "uuid": "b203d427-3ee3-c952-b241-207e258fca9a", 00:11:15.333 "is_configured": false, 00:11:15.333 "data_offset": 2048, 00:11:15.333 "data_size": 63488 00:11:15.333 } 00:11:15.333 ] 00:11:15.333 }' 00:11:15.333 19:11:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:15.333 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:11:15.592 19:11:52 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:11:15.592 19:11:52 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:15.592 [2024-02-14 19:11:52.964774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:15.592 [2024-02-14 19:11:52.964804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.592 [2024-02-14 19:11:52.964840] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8295c5780 00:11:15.592 [2024-02-14 19:11:52.964846] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.592 [2024-02-14 19:11:52.964908] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.592 [2024-02-14 19:11:52.964915] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:15.592 [2024-02-14 19:11:52.964927] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev pt2 00:11:15.592 [2024-02-14 19:11:52.964933] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:15.592 pt2 00:11:15.592 19:11:52 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:15.851 [2024-02-14 19:11:53.140778] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:15.851 19:11:53 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:15.851 19:11:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:15.851 19:11:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:15.851 19:11:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:15.851 19:11:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:15.851 19:11:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:15.851 19:11:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:15.851 19:11:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:15.851 19:11:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:15.851 19:11:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:15.851 19:11:53 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.851 19:11:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.109 19:11:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:16.109 "name": "raid_bdev1", 00:11:16.109 "uuid": "e790f434-cb6c-11ee-af6b-4feeebbbadda", 00:11:16.109 "strip_size_kb": 64, 00:11:16.109 "state": "configuring", 00:11:16.109 "raid_level": "concat", 00:11:16.109 "superblock": true, 00:11:16.109 "num_base_bdevs": 4, 00:11:16.109 "num_base_bdevs_discovered": 1, 00:11:16.109 "num_base_bdevs_operational": 4, 00:11:16.109 "base_bdevs_list": [ 00:11:16.109 { 00:11:16.109 "name": "pt1", 00:11:16.109 "uuid": "e52f1bfb-6370-f050-b674-c8210aaf0aff", 00:11:16.109 "is_configured": true, 00:11:16.109 "data_offset": 2048, 00:11:16.109 "data_size": 63488 00:11:16.109 }, 00:11:16.109 { 00:11:16.109 "name": null, 00:11:16.109 "uuid": "5ba2fb80-fab9-615c-b5fe-e644a58b1d3d", 00:11:16.109 "is_configured": false, 00:11:16.109 "data_offset": 2048, 00:11:16.109 "data_size": 63488 00:11:16.109 }, 00:11:16.109 { 00:11:16.109 "name": null, 00:11:16.109 "uuid": "22e314b4-f4a4-6652-872e-2798f8349deb", 00:11:16.109 "is_configured": false, 00:11:16.109 "data_offset": 2048, 00:11:16.109 "data_size": 63488 00:11:16.109 }, 00:11:16.109 { 00:11:16.109 "name": null, 00:11:16.109 "uuid": "b203d427-3ee3-c952-b241-207e258fca9a", 00:11:16.109 "is_configured": false, 00:11:16.109 "data_offset": 2048, 00:11:16.109 "data_size": 63488 00:11:16.109 } 00:11:16.109 ] 00:11:16.109 }' 00:11:16.109 19:11:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:16.109 19:11:53 -- common/autotest_common.sh@10 -- # set +x 00:11:16.368 19:11:53 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:11:16.368 19:11:53 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:16.368 19:11:53 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:16.368 [2024-02-14 19:11:53.752795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:16.368 [2024-02-14 19:11:53.752836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:11:16.368 [2024-02-14 19:11:53.752856] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8295c5780 00:11:16.368 [2024-02-14 19:11:53.752862] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.368 [2024-02-14 19:11:53.752935] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.368 [2024-02-14 19:11:53.752942] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:16.368 [2024-02-14 19:11:53.752958] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:16.368 [2024-02-14 19:11:53.752964] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:16.368 pt2 00:11:16.368 19:11:53 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:11:16.368 19:11:53 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:16.368 19:11:53 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:16.627 [2024-02-14 19:11:53.968799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:16.627 [2024-02-14 19:11:53.968833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.627 [2024-02-14 19:11:53.968849] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8295c6b80 00:11:16.627 [2024-02-14 19:11:53.968855] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.627 [2024-02-14 19:11:53.968932] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.627 [2024-02-14 19:11:53.968939] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:16.627 [2024-02-14 19:11:53.968952] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:11:16.627 [2024-02-14 19:11:53.968958] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:16.627 pt3 00:11:16.627 19:11:53 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:11:16.627 19:11:53 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:16.627 19:11:53 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:16.886 [2024-02-14 19:11:54.212803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:16.886 [2024-02-14 19:11:54.212839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.886 [2024-02-14 19:11:54.212854] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8295c6900 00:11:16.886 [2024-02-14 19:11:54.212860] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.886 [2024-02-14 19:11:54.212928] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.886 [2024-02-14 19:11:54.212934] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:16.886 [2024-02-14 19:11:54.212949] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:11:16.886 [2024-02-14 19:11:54.212956] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:16.886 [2024-02-14 19:11:54.212976] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x8295c5c80 
00:11:16.886 [2024-02-14 19:11:54.212979] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:16.886 [2024-02-14 19:11:54.212994] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829628e20 00:11:16.886 [2024-02-14 19:11:54.213029] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x8295c5c80 00:11:16.886 [2024-02-14 19:11:54.213032] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x8295c5c80 00:11:16.886 [2024-02-14 19:11:54.213046] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.886 pt4 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.886 19:11:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.145 19:11:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:17.145 "name": "raid_bdev1", 00:11:17.145 "uuid": "e790f434-cb6c-11ee-af6b-4feeebbbadda", 00:11:17.145 "strip_size_kb": 64, 00:11:17.145 "state": "online", 00:11:17.145 "raid_level": "concat", 00:11:17.145 "superblock": true, 00:11:17.145 "num_base_bdevs": 4, 00:11:17.145 "num_base_bdevs_discovered": 4, 00:11:17.145 "num_base_bdevs_operational": 4, 00:11:17.145 "base_bdevs_list": [ 00:11:17.145 { 00:11:17.145 "name": "pt1", 00:11:17.145 "uuid": "e52f1bfb-6370-f050-b674-c8210aaf0aff", 00:11:17.145 "is_configured": true, 00:11:17.145 "data_offset": 2048, 00:11:17.145 "data_size": 63488 00:11:17.145 }, 00:11:17.145 { 00:11:17.145 "name": "pt2", 00:11:17.145 "uuid": "5ba2fb80-fab9-615c-b5fe-e644a58b1d3d", 00:11:17.145 "is_configured": true, 00:11:17.145 "data_offset": 2048, 00:11:17.145 "data_size": 63488 00:11:17.145 }, 00:11:17.145 { 00:11:17.145 "name": "pt3", 00:11:17.145 "uuid": "22e314b4-f4a4-6652-872e-2798f8349deb", 00:11:17.145 "is_configured": true, 00:11:17.145 "data_offset": 2048, 00:11:17.145 "data_size": 63488 00:11:17.145 }, 00:11:17.145 { 00:11:17.145 "name": "pt4", 00:11:17.145 "uuid": "b203d427-3ee3-c952-b241-207e258fca9a", 00:11:17.145 "is_configured": true, 00:11:17.145 "data_offset": 2048, 00:11:17.145 "data_size": 63488 00:11:17.145 } 00:11:17.145 ] 00:11:17.145 }' 00:11:17.145 19:11:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:17.145 19:11:54 -- common/autotest_common.sh@10 -- # set +x 00:11:17.403 19:11:54 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:11:17.403 19:11:54 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:11:17.662 [2024-02-14 19:11:54.892844] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.662 19:11:54 -- bdev/bdev_raid.sh@430 -- # '[' e790f434-cb6c-11ee-af6b-4feeebbbadda '!=' e790f434-cb6c-11ee-af6b-4feeebbbadda ']' 00:11:17.662 19:11:54 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:11:17.662 19:11:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:17.662 19:11:54 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:17.662 19:11:54 -- bdev/bdev_raid.sh@511 -- # killprocess 53847 00:11:17.662 19:11:54 -- common/autotest_common.sh@924 -- # '[' -z 53847 ']' 00:11:17.662 19:11:54 -- common/autotest_common.sh@928 -- # kill -0 53847 00:11:17.662 19:11:54 -- common/autotest_common.sh@929 -- # uname 00:11:17.662 19:11:54 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:11:17.662 19:11:54 -- common/autotest_common.sh@932 -- # ps -c -o command 53847 00:11:17.662 19:11:54 -- common/autotest_common.sh@932 -- # tail -1 00:11:17.662 19:11:54 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:11:17.662 19:11:54 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:11:17.662 19:11:54 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 53847' 00:11:17.662 killing process with pid 53847 00:11:17.662 19:11:54 -- common/autotest_common.sh@943 -- # kill 53847 00:11:17.662 [2024-02-14 19:11:54.920785] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.662 [2024-02-14 19:11:54.920804] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.662 [2024-02-14 19:11:54.920817] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.662 [2024-02-14 19:11:54.920831] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8295c5c80 name raid_bdev1, state offline 00:11:17.662 19:11:54 -- common/autotest_common.sh@948 -- # wait 53847 00:11:17.662 [2024-02-14 19:11:54.939418] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.662 19:11:55 -- bdev/bdev_raid.sh@513 -- # return 0 00:11:17.662 00:11:17.662 real 0m8.023s 00:11:17.662 user 0m13.905s 00:11:17.662 sys 0m1.351s 00:11:17.662 19:11:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:17.662 19:11:55 -- common/autotest_common.sh@10 -- # set +x 00:11:17.662 ************************************ 00:11:17.662 END TEST raid_superblock_test 00:11:17.662 ************************************ 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:17.921 19:11:55 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:11:17.921 19:11:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:17.921 19:11:55 -- common/autotest_common.sh@10 -- # set +x 00:11:17.921 ************************************ 00:11:17.921 START TEST raid_state_function_test 00:11:17.921 ************************************ 00:11:17.921 19:11:55 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid1 4 false 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@206 -- # (( i = 
1 )) 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@226 -- # raid_pid=54032 00:11:17.921 Process raid pid: 54032 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 54032' 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@228 -- # waitforlisten 54032 /var/tmp/spdk-raid.sock 00:11:17.921 19:11:55 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:17.921 19:11:55 -- common/autotest_common.sh@817 -- # '[' -z 54032 ']' 00:11:17.921 19:11:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:17.921 19:11:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:17.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:17.921 19:11:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:17.921 19:11:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:17.921 19:11:55 -- common/autotest_common.sh@10 -- # set +x 00:11:17.921 [2024-02-14 19:11:55.128562] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:11:17.921 [2024-02-14 19:11:55.128729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:18.182 EAL: TSC is not safe to use in SMP mode 00:11:18.182 EAL: TSC is not invariant 00:11:18.182 [2024-02-14 19:11:55.588170] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.441 [2024-02-14 19:11:55.667908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.442 [2024-02-14 19:11:55.668315] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.442 [2024-02-14 19:11:55.668319] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.009 19:11:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:19.009 19:11:56 -- common/autotest_common.sh@850 -- # return 0 00:11:19.009 19:11:56 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:19.009 [2024-02-14 19:11:56.278646] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:19.009 [2024-02-14 19:11:56.278691] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:19.009 [2024-02-14 19:11:56.278695] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:19.009 [2024-02-14 19:11:56.278703] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:19.009 [2024-02-14 19:11:56.278706] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:19.009 [2024-02-14 19:11:56.278712] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:19.009 [2024-02-14 19:11:56.278715] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:19.009 [2024-02-14 19:11:56.278722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:19.009 19:11:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.009 19:11:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:19.009 19:11:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:19.009 19:11:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:19.009 19:11:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:19.009 19:11:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:19.009 19:11:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:19.009 19:11:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:19.009 19:11:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:19.009 19:11:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:19.009 19:11:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.009 19:11:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.268 19:11:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:19.268 "name": "Existed_Raid", 00:11:19.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.268 "strip_size_kb": 0, 00:11:19.268 "state": "configuring", 00:11:19.268 "raid_level": "raid1", 00:11:19.268 "superblock": false, 00:11:19.268 "num_base_bdevs": 4, 00:11:19.268 "num_base_bdevs_discovered": 0, 
00:11:19.268 "num_base_bdevs_operational": 4, 00:11:19.268 "base_bdevs_list": [ 00:11:19.268 { 00:11:19.268 "name": "BaseBdev1", 00:11:19.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.268 "is_configured": false, 00:11:19.268 "data_offset": 0, 00:11:19.268 "data_size": 0 00:11:19.268 }, 00:11:19.268 { 00:11:19.268 "name": "BaseBdev2", 00:11:19.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.268 "is_configured": false, 00:11:19.268 "data_offset": 0, 00:11:19.268 "data_size": 0 00:11:19.268 }, 00:11:19.268 { 00:11:19.268 "name": "BaseBdev3", 00:11:19.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.268 "is_configured": false, 00:11:19.268 "data_offset": 0, 00:11:19.268 "data_size": 0 00:11:19.268 }, 00:11:19.268 { 00:11:19.268 "name": "BaseBdev4", 00:11:19.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.268 "is_configured": false, 00:11:19.268 "data_offset": 0, 00:11:19.268 "data_size": 0 00:11:19.268 } 00:11:19.268 ] 00:11:19.268 }' 00:11:19.268 19:11:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:19.268 19:11:56 -- common/autotest_common.sh@10 -- # set +x 00:11:19.527 19:11:56 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:19.527 [2024-02-14 19:11:56.914651] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:19.527 [2024-02-14 19:11:56.914669] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aba0500 name Existed_Raid, state configuring 00:11:19.528 19:11:56 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:19.787 [2024-02-14 19:11:57.150657] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:19.787 [2024-02-14 19:11:57.150689] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:19.787 [2024-02-14 19:11:57.150692] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:19.787 [2024-02-14 19:11:57.150714] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:19.787 [2024-02-14 19:11:57.150716] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:19.787 [2024-02-14 19:11:57.150722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:19.787 [2024-02-14 19:11:57.150724] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:19.787 [2024-02-14 19:11:57.150730] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:19.787 19:11:57 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:20.046 [2024-02-14 19:11:57.319480] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:20.046 BaseBdev1 00:11:20.046 19:11:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:20.046 19:11:57 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:11:20.046 19:11:57 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:20.046 19:11:57 -- common/autotest_common.sh@887 -- # local i 00:11:20.046 19:11:57 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:20.046 19:11:57 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:20.046 19:11:57 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:20.305 19:11:57 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:20.564 [ 00:11:20.564 { 00:11:20.564 "name": "BaseBdev1", 00:11:20.564 "aliases": [ 00:11:20.564 "ec1b5c93-cb6c-11ee-af6b-4feeebbbadda" 00:11:20.564 ], 00:11:20.564 "product_name": "Malloc disk", 00:11:20.564 "block_size": 512, 00:11:20.564 "num_blocks": 65536, 00:11:20.564 "uuid": "ec1b5c93-cb6c-11ee-af6b-4feeebbbadda", 00:11:20.564 "assigned_rate_limits": { 00:11:20.564 "rw_ios_per_sec": 0, 00:11:20.564 "rw_mbytes_per_sec": 0, 00:11:20.564 "r_mbytes_per_sec": 0, 00:11:20.564 "w_mbytes_per_sec": 0 00:11:20.564 }, 00:11:20.564 "claimed": true, 00:11:20.564 "claim_type": "exclusive_write", 00:11:20.564 "zoned": false, 00:11:20.564 "supported_io_types": { 00:11:20.564 "read": true, 00:11:20.564 "write": true, 00:11:20.564 "unmap": true, 00:11:20.564 "write_zeroes": true, 00:11:20.564 "flush": true, 00:11:20.564 "reset": true, 00:11:20.564 "compare": false, 00:11:20.564 "compare_and_write": false, 00:11:20.564 "abort": true, 00:11:20.564 "nvme_admin": false, 00:11:20.564 "nvme_io": false 00:11:20.564 }, 00:11:20.564 "memory_domains": [ 00:11:20.564 { 00:11:20.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.564 "dma_device_type": 2 00:11:20.564 } 00:11:20.564 ], 00:11:20.564 "driver_specific": {} 00:11:20.564 } 00:11:20.564 ] 00:11:20.564 19:11:57 -- common/autotest_common.sh@893 -- # return 0 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:20.564 "name": "Existed_Raid", 00:11:20.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.564 "strip_size_kb": 0, 00:11:20.564 "state": "configuring", 00:11:20.564 "raid_level": "raid1", 00:11:20.564 "superblock": false, 00:11:20.564 "num_base_bdevs": 4, 00:11:20.564 "num_base_bdevs_discovered": 1, 00:11:20.564 "num_base_bdevs_operational": 4, 00:11:20.564 "base_bdevs_list": [ 00:11:20.564 { 00:11:20.564 "name": "BaseBdev1", 00:11:20.564 "uuid": "ec1b5c93-cb6c-11ee-af6b-4feeebbbadda", 00:11:20.564 "is_configured": true, 00:11:20.564 "data_offset": 0, 00:11:20.564 "data_size": 65536 00:11:20.564 }, 00:11:20.564 { 00:11:20.564 "name": "BaseBdev2", 00:11:20.564 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:20.564 "is_configured": false, 00:11:20.564 "data_offset": 0, 00:11:20.564 "data_size": 0 00:11:20.564 }, 00:11:20.564 { 00:11:20.564 "name": "BaseBdev3", 00:11:20.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.564 "is_configured": false, 00:11:20.564 "data_offset": 0, 00:11:20.564 "data_size": 0 00:11:20.564 }, 00:11:20.564 { 00:11:20.564 "name": "BaseBdev4", 00:11:20.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.564 "is_configured": false, 00:11:20.564 "data_offset": 0, 00:11:20.564 "data_size": 0 00:11:20.564 } 00:11:20.564 ] 00:11:20.564 }' 00:11:20.564 19:11:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:20.564 19:11:57 -- common/autotest_common.sh@10 -- # set +x 00:11:21.133 19:11:58 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:21.133 [2024-02-14 19:11:58.530689] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.133 [2024-02-14 19:11:58.530709] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aba0500 name Existed_Raid, state configuring 00:11:21.133 19:11:58 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:11:21.133 19:11:58 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:21.393 [2024-02-14 19:11:58.778716] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.393 [2024-02-14 19:11:58.779349] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.393 [2024-02-14 19:11:58.779385] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.393 [2024-02-14 19:11:58.779389] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:21.393 [2024-02-14 19:11:58.779395] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.393 [2024-02-14 19:11:58.779398] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:21.393 [2024-02-14 19:11:58.779404] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.393 19:11:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:11:21.651 19:11:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:21.651 "name": "Existed_Raid", 00:11:21.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.651 "strip_size_kb": 0, 00:11:21.651 "state": "configuring", 00:11:21.651 "raid_level": "raid1", 00:11:21.651 "superblock": false, 00:11:21.651 "num_base_bdevs": 4, 00:11:21.651 "num_base_bdevs_discovered": 1, 00:11:21.651 "num_base_bdevs_operational": 4, 00:11:21.651 "base_bdevs_list": [ 00:11:21.651 { 00:11:21.651 "name": "BaseBdev1", 00:11:21.651 "uuid": "ec1b5c93-cb6c-11ee-af6b-4feeebbbadda", 00:11:21.651 "is_configured": true, 00:11:21.651 "data_offset": 0, 00:11:21.651 "data_size": 65536 00:11:21.651 }, 00:11:21.651 { 00:11:21.651 "name": "BaseBdev2", 00:11:21.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.651 "is_configured": false, 00:11:21.651 "data_offset": 0, 00:11:21.651 "data_size": 0 00:11:21.651 }, 00:11:21.651 { 00:11:21.651 "name": "BaseBdev3", 00:11:21.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.651 "is_configured": false, 00:11:21.651 "data_offset": 0, 00:11:21.651 "data_size": 0 00:11:21.651 }, 00:11:21.651 { 00:11:21.651 "name": "BaseBdev4", 00:11:21.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.651 "is_configured": false, 00:11:21.651 "data_offset": 0, 00:11:21.651 "data_size": 0 00:11:21.651 } 00:11:21.651 ] 00:11:21.651 }' 00:11:21.651 19:11:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:21.651 19:11:59 -- common/autotest_common.sh@10 -- # set +x 00:11:21.910 19:11:59 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:22.170 [2024-02-14 19:11:59.526825] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.170 BaseBdev2 00:11:22.170 19:11:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:22.170 19:11:59 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:11:22.170 19:11:59 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:22.170 19:11:59 -- common/autotest_common.sh@887 -- # local i 00:11:22.170 19:11:59 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:22.170 19:11:59 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:22.170 19:11:59 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:22.429 19:11:59 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:22.688 [ 00:11:22.688 { 00:11:22.688 "name": "BaseBdev2", 00:11:22.688 "aliases": [ 00:11:22.688 "ed6c4938-cb6c-11ee-af6b-4feeebbbadda" 00:11:22.688 ], 00:11:22.688 "product_name": "Malloc disk", 00:11:22.688 "block_size": 512, 00:11:22.688 "num_blocks": 65536, 00:11:22.688 "uuid": "ed6c4938-cb6c-11ee-af6b-4feeebbbadda", 00:11:22.688 "assigned_rate_limits": { 00:11:22.688 "rw_ios_per_sec": 0, 00:11:22.688 "rw_mbytes_per_sec": 0, 00:11:22.688 "r_mbytes_per_sec": 0, 00:11:22.688 "w_mbytes_per_sec": 0 00:11:22.688 }, 00:11:22.688 "claimed": true, 00:11:22.688 "claim_type": "exclusive_write", 00:11:22.688 "zoned": false, 00:11:22.688 "supported_io_types": { 00:11:22.688 "read": true, 00:11:22.688 "write": true, 00:11:22.688 "unmap": true, 00:11:22.688 "write_zeroes": true, 00:11:22.688 "flush": true, 00:11:22.688 "reset": true, 00:11:22.688 "compare": false, 00:11:22.688 
"compare_and_write": false, 00:11:22.688 "abort": true, 00:11:22.688 "nvme_admin": false, 00:11:22.688 "nvme_io": false 00:11:22.688 }, 00:11:22.688 "memory_domains": [ 00:11:22.688 { 00:11:22.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.688 "dma_device_type": 2 00:11:22.688 } 00:11:22.688 ], 00:11:22.688 "driver_specific": {} 00:11:22.688 } 00:11:22.688 ] 00:11:22.688 19:11:59 -- common/autotest_common.sh@893 -- # return 0 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.688 19:11:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.946 19:12:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:22.946 "name": "Existed_Raid", 00:11:22.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.946 "strip_size_kb": 0, 00:11:22.946 "state": "configuring", 00:11:22.946 "raid_level": "raid1", 00:11:22.946 "superblock": false, 00:11:22.946 "num_base_bdevs": 4, 00:11:22.946 "num_base_bdevs_discovered": 2, 00:11:22.946 "num_base_bdevs_operational": 4, 00:11:22.946 "base_bdevs_list": [ 00:11:22.946 { 00:11:22.946 "name": "BaseBdev1", 00:11:22.946 "uuid": "ec1b5c93-cb6c-11ee-af6b-4feeebbbadda", 00:11:22.946 "is_configured": true, 00:11:22.946 "data_offset": 0, 00:11:22.946 "data_size": 65536 00:11:22.946 }, 00:11:22.946 { 00:11:22.946 "name": "BaseBdev2", 00:11:22.946 "uuid": "ed6c4938-cb6c-11ee-af6b-4feeebbbadda", 00:11:22.946 "is_configured": true, 00:11:22.946 "data_offset": 0, 00:11:22.946 "data_size": 65536 00:11:22.946 }, 00:11:22.946 { 00:11:22.946 "name": "BaseBdev3", 00:11:22.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.946 "is_configured": false, 00:11:22.946 "data_offset": 0, 00:11:22.946 "data_size": 0 00:11:22.946 }, 00:11:22.946 { 00:11:22.946 "name": "BaseBdev4", 00:11:22.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.946 "is_configured": false, 00:11:22.946 "data_offset": 0, 00:11:22.946 "data_size": 0 00:11:22.946 } 00:11:22.946 ] 00:11:22.946 }' 00:11:22.946 19:12:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:22.946 19:12:00 -- common/autotest_common.sh@10 -- # set +x 00:11:23.205 19:12:00 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:23.464 [2024-02-14 19:12:00.686855] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.464 BaseBdev3 00:11:23.464 19:12:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 
00:11:23.464 19:12:00 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:11:23.464 19:12:00 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:23.464 19:12:00 -- common/autotest_common.sh@887 -- # local i 00:11:23.464 19:12:00 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:23.464 19:12:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:23.464 19:12:00 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:23.723 19:12:00 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:23.983 [ 00:11:23.983 { 00:11:23.983 "name": "BaseBdev3", 00:11:23.983 "aliases": [ 00:11:23.983 "ee1d4bba-cb6c-11ee-af6b-4feeebbbadda" 00:11:23.983 ], 00:11:23.983 "product_name": "Malloc disk", 00:11:23.983 "block_size": 512, 00:11:23.983 "num_blocks": 65536, 00:11:23.983 "uuid": "ee1d4bba-cb6c-11ee-af6b-4feeebbbadda", 00:11:23.983 "assigned_rate_limits": { 00:11:23.983 "rw_ios_per_sec": 0, 00:11:23.983 "rw_mbytes_per_sec": 0, 00:11:23.983 "r_mbytes_per_sec": 0, 00:11:23.983 "w_mbytes_per_sec": 0 00:11:23.983 }, 00:11:23.983 "claimed": true, 00:11:23.983 "claim_type": "exclusive_write", 00:11:23.983 "zoned": false, 00:11:23.983 "supported_io_types": { 00:11:23.983 "read": true, 00:11:23.983 "write": true, 00:11:23.983 "unmap": true, 00:11:23.983 "write_zeroes": true, 00:11:23.983 "flush": true, 00:11:23.983 "reset": true, 00:11:23.983 "compare": false, 00:11:23.983 "compare_and_write": false, 00:11:23.983 "abort": true, 00:11:23.983 "nvme_admin": false, 00:11:23.983 "nvme_io": false 00:11:23.983 }, 00:11:23.983 "memory_domains": [ 00:11:23.983 { 00:11:23.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.983 "dma_device_type": 2 00:11:23.983 } 00:11:23.983 ], 00:11:23.983 "driver_specific": {} 00:11:23.983 } 00:11:23.983 ] 00:11:23.983 19:12:01 -- common/autotest_common.sh@893 -- # return 0 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:23.983 "name": "Existed_Raid", 00:11:23.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.983 "strip_size_kb": 0, 00:11:23.983 "state": "configuring", 00:11:23.983 "raid_level": "raid1", 00:11:23.983 "superblock": false, 00:11:23.983 
"num_base_bdevs": 4, 00:11:23.983 "num_base_bdevs_discovered": 3, 00:11:23.983 "num_base_bdevs_operational": 4, 00:11:23.983 "base_bdevs_list": [ 00:11:23.983 { 00:11:23.983 "name": "BaseBdev1", 00:11:23.983 "uuid": "ec1b5c93-cb6c-11ee-af6b-4feeebbbadda", 00:11:23.983 "is_configured": true, 00:11:23.983 "data_offset": 0, 00:11:23.983 "data_size": 65536 00:11:23.983 }, 00:11:23.983 { 00:11:23.983 "name": "BaseBdev2", 00:11:23.983 "uuid": "ed6c4938-cb6c-11ee-af6b-4feeebbbadda", 00:11:23.983 "is_configured": true, 00:11:23.983 "data_offset": 0, 00:11:23.983 "data_size": 65536 00:11:23.983 }, 00:11:23.983 { 00:11:23.983 "name": "BaseBdev3", 00:11:23.983 "uuid": "ee1d4bba-cb6c-11ee-af6b-4feeebbbadda", 00:11:23.983 "is_configured": true, 00:11:23.983 "data_offset": 0, 00:11:23.983 "data_size": 65536 00:11:23.983 }, 00:11:23.983 { 00:11:23.983 "name": "BaseBdev4", 00:11:23.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.983 "is_configured": false, 00:11:23.983 "data_offset": 0, 00:11:23.983 "data_size": 0 00:11:23.983 } 00:11:23.983 ] 00:11:23.983 }' 00:11:23.983 19:12:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:23.983 19:12:01 -- common/autotest_common.sh@10 -- # set +x 00:11:24.551 19:12:01 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:11:24.552 [2024-02-14 19:12:01.818897] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.552 [2024-02-14 19:12:01.818918] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82aba0a00 00:11:24.552 [2024-02-14 19:12:01.818921] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:24.552 [2024-02-14 19:12:01.818937] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac03ec0 00:11:24.552 [2024-02-14 19:12:01.819011] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82aba0a00 00:11:24.552 [2024-02-14 19:12:01.819014] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82aba0a00 00:11:24.552 [2024-02-14 19:12:01.819036] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.552 BaseBdev4 00:11:24.552 19:12:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:11:24.552 19:12:01 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:11:24.552 19:12:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:24.552 19:12:01 -- common/autotest_common.sh@887 -- # local i 00:11:24.552 19:12:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:24.552 19:12:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:24.552 19:12:01 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:24.811 19:12:02 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:24.811 [ 00:11:24.811 { 00:11:24.811 "name": "BaseBdev4", 00:11:24.811 "aliases": [ 00:11:24.811 "eeca07dc-cb6c-11ee-af6b-4feeebbbadda" 00:11:24.811 ], 00:11:24.811 "product_name": "Malloc disk", 00:11:24.811 "block_size": 512, 00:11:24.811 "num_blocks": 65536, 00:11:24.811 "uuid": "eeca07dc-cb6c-11ee-af6b-4feeebbbadda", 00:11:24.811 "assigned_rate_limits": { 00:11:24.811 "rw_ios_per_sec": 0, 00:11:24.811 "rw_mbytes_per_sec": 0, 00:11:24.811 "r_mbytes_per_sec": 0, 
00:11:24.811 "w_mbytes_per_sec": 0 00:11:24.811 }, 00:11:24.811 "claimed": true, 00:11:24.811 "claim_type": "exclusive_write", 00:11:24.811 "zoned": false, 00:11:24.811 "supported_io_types": { 00:11:24.811 "read": true, 00:11:24.811 "write": true, 00:11:24.811 "unmap": true, 00:11:24.811 "write_zeroes": true, 00:11:24.811 "flush": true, 00:11:24.811 "reset": true, 00:11:24.811 "compare": false, 00:11:24.811 "compare_and_write": false, 00:11:24.811 "abort": true, 00:11:24.811 "nvme_admin": false, 00:11:24.811 "nvme_io": false 00:11:24.811 }, 00:11:24.811 "memory_domains": [ 00:11:24.811 { 00:11:24.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.811 "dma_device_type": 2 00:11:24.811 } 00:11:24.811 ], 00:11:24.811 "driver_specific": {} 00:11:24.811 } 00:11:24.811 ] 00:11:24.811 19:12:02 -- common/autotest_common.sh@893 -- # return 0 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.811 19:12:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.071 19:12:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:25.071 "name": "Existed_Raid", 00:11:25.071 "uuid": "eeca0bcb-cb6c-11ee-af6b-4feeebbbadda", 00:11:25.071 "strip_size_kb": 0, 00:11:25.071 "state": "online", 00:11:25.071 "raid_level": "raid1", 00:11:25.071 "superblock": false, 00:11:25.071 "num_base_bdevs": 4, 00:11:25.071 "num_base_bdevs_discovered": 4, 00:11:25.071 "num_base_bdevs_operational": 4, 00:11:25.071 "base_bdevs_list": [ 00:11:25.071 { 00:11:25.071 "name": "BaseBdev1", 00:11:25.071 "uuid": "ec1b5c93-cb6c-11ee-af6b-4feeebbbadda", 00:11:25.071 "is_configured": true, 00:11:25.071 "data_offset": 0, 00:11:25.071 "data_size": 65536 00:11:25.071 }, 00:11:25.071 { 00:11:25.071 "name": "BaseBdev2", 00:11:25.071 "uuid": "ed6c4938-cb6c-11ee-af6b-4feeebbbadda", 00:11:25.071 "is_configured": true, 00:11:25.071 "data_offset": 0, 00:11:25.071 "data_size": 65536 00:11:25.071 }, 00:11:25.071 { 00:11:25.071 "name": "BaseBdev3", 00:11:25.071 "uuid": "ee1d4bba-cb6c-11ee-af6b-4feeebbbadda", 00:11:25.071 "is_configured": true, 00:11:25.071 "data_offset": 0, 00:11:25.071 "data_size": 65536 00:11:25.071 }, 00:11:25.071 { 00:11:25.071 "name": "BaseBdev4", 00:11:25.071 "uuid": "eeca07dc-cb6c-11ee-af6b-4feeebbbadda", 00:11:25.071 "is_configured": true, 00:11:25.071 "data_offset": 0, 00:11:25.071 "data_size": 65536 00:11:25.071 } 00:11:25.071 ] 00:11:25.071 }' 00:11:25.071 19:12:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:25.071 19:12:02 -- common/autotest_common.sh@10 -- # set 
+x 00:11:25.330 19:12:02 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:25.589 [2024-02-14 19:12:02.826905] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@196 -- # return 0 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.589 19:12:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.848 19:12:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:25.848 "name": "Existed_Raid", 00:11:25.848 "uuid": "eeca0bcb-cb6c-11ee-af6b-4feeebbbadda", 00:11:25.848 "strip_size_kb": 0, 00:11:25.848 "state": "online", 00:11:25.848 "raid_level": "raid1", 00:11:25.848 "superblock": false, 00:11:25.848 "num_base_bdevs": 4, 00:11:25.848 "num_base_bdevs_discovered": 3, 00:11:25.848 "num_base_bdevs_operational": 3, 00:11:25.848 "base_bdevs_list": [ 00:11:25.848 { 00:11:25.848 "name": null, 00:11:25.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.848 "is_configured": false, 00:11:25.848 "data_offset": 0, 00:11:25.848 "data_size": 65536 00:11:25.848 }, 00:11:25.848 { 00:11:25.848 "name": "BaseBdev2", 00:11:25.848 "uuid": "ed6c4938-cb6c-11ee-af6b-4feeebbbadda", 00:11:25.848 "is_configured": true, 00:11:25.848 "data_offset": 0, 00:11:25.848 "data_size": 65536 00:11:25.848 }, 00:11:25.848 { 00:11:25.848 "name": "BaseBdev3", 00:11:25.848 "uuid": "ee1d4bba-cb6c-11ee-af6b-4feeebbbadda", 00:11:25.848 "is_configured": true, 00:11:25.848 "data_offset": 0, 00:11:25.848 "data_size": 65536 00:11:25.848 }, 00:11:25.848 { 00:11:25.848 "name": "BaseBdev4", 00:11:25.848 "uuid": "eeca07dc-cb6c-11ee-af6b-4feeebbbadda", 00:11:25.848 "is_configured": true, 00:11:25.848 "data_offset": 0, 00:11:25.848 "data_size": 65536 00:11:25.848 } 00:11:25.848 ] 00:11:25.848 }' 00:11:25.848 19:12:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:25.848 19:12:03 -- common/autotest_common.sh@10 -- # set +x 00:11:26.106 19:12:03 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:26.106 19:12:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:26.106 19:12:03 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.106 19:12:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:26.364 
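The checks that follow exercise the degraded path: with the raid1 array online, each base bdev is deleted in turn and the array state is read back. A minimal sketch of that verification, reusing only commands already shown in this run, might look like the following; the expected counters are taken from the JSON dumped just below.

    RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Remove one mirror leg; raid1 has redundancy, so the array is expected to stay "online".
    $RPC bdev_malloc_delete BaseBdev1

    # Re-read the array: state remains "online" and both counters drop to 3, as in the dump below.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") |
        "\(.state) discovered=\(.num_base_bdevs_discovered) operational=\(.num_base_bdevs_operational)"'

Later in this run the remaining base bdevs are deleted the same way, after which the raid bdev transitions to "offline" and is cleaned up before the process is killed.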
19:12:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:26.364 19:12:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.364 19:12:03 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:26.624 [2024-02-14 19:12:03.783550] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.624 19:12:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:26.624 19:12:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:26.624 19:12:03 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.624 19:12:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:26.624 19:12:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:26.624 19:12:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.624 19:12:03 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:26.899 [2024-02-14 19:12:04.148153] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.899 19:12:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:26.899 19:12:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:26.899 19:12:04 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.899 19:12:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:27.157 19:12:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:27.157 19:12:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.157 19:12:04 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:11:27.157 [2024-02-14 19:12:04.528766] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:27.157 [2024-02-14 19:12:04.528780] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.157 [2024-02-14 19:12:04.528789] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:27.157 [2024-02-14 19:12:04.533390] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.157 [2024-02-14 19:12:04.533401] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82aba0a00 name Existed_Raid, state offline 00:11:27.157 19:12:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:27.157 19:12:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:27.157 19:12:04 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.157 19:12:04 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:27.415 19:12:04 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:27.415 19:12:04 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:27.415 19:12:04 -- bdev/bdev_raid.sh@287 -- # killprocess 54032 00:11:27.415 19:12:04 -- common/autotest_common.sh@924 -- # '[' -z 54032 ']' 00:11:27.415 19:12:04 -- common/autotest_common.sh@928 -- # kill -0 54032 00:11:27.415 19:12:04 -- common/autotest_common.sh@929 -- # uname 00:11:27.415 19:12:04 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:11:27.415 19:12:04 -- common/autotest_common.sh@932 -- # ps -c -o command 54032 00:11:27.415 19:12:04 -- 
common/autotest_common.sh@932 -- # tail -1 00:11:27.415 19:12:04 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:11:27.415 killing process with pid 54032 00:11:27.415 19:12:04 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:11:27.415 19:12:04 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 54032' 00:11:27.415 19:12:04 -- common/autotest_common.sh@943 -- # kill 54032 00:11:27.415 [2024-02-14 19:12:04.721260] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.415 [2024-02-14 19:12:04.721292] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:27.415 19:12:04 -- common/autotest_common.sh@948 -- # wait 54032 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:27.673 00:11:27.673 real 0m9.738s 00:11:27.673 user 0m17.133s 00:11:27.673 sys 0m1.606s 00:11:27.673 ************************************ 00:11:27.673 END TEST raid_state_function_test 00:11:27.673 ************************************ 00:11:27.673 19:12:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:27.673 19:12:04 -- common/autotest_common.sh@10 -- # set +x 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:27.673 19:12:04 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:11:27.673 19:12:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:27.673 19:12:04 -- common/autotest_common.sh@10 -- # set +x 00:11:27.673 ************************************ 00:11:27.673 START TEST raid_state_function_test_sb 00:11:27.673 ************************************ 00:11:27.673 19:12:04 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid1 4 true 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:11:27.673 19:12:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@212 -- # 
'[' raid1 '!=' raid1 ']' 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=54302 00:11:27.674 Process raid pid: 54302 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 54302' 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 54302 /var/tmp/spdk-raid.sock 00:11:27.674 19:12:04 -- common/autotest_common.sh@817 -- # '[' -z 54302 ']' 00:11:27.674 19:12:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:27.674 19:12:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:27.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:27.674 19:12:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:27.674 19:12:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:27.674 19:12:04 -- common/autotest_common.sh@10 -- # set +x 00:11:27.674 19:12:04 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:27.674 [2024-02-14 19:12:04.916045] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:11:27.674 [2024-02-14 19:12:04.916370] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:28.608 EAL: TSC is not safe to use in SMP mode 00:11:28.608 EAL: TSC is not invariant 00:11:28.608 [2024-02-14 19:12:05.672034] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.608 [2024-02-14 19:12:05.750670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.608 [2024-02-14 19:12:05.751097] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.608 [2024-02-14 19:12:05.751101] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.608 19:12:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:28.608 19:12:05 -- common/autotest_common.sh@850 -- # return 0 00:11:28.608 19:12:05 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:28.866 [2024-02-14 19:12:06.057367] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.866 [2024-02-14 19:12:06.057410] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.866 [2024-02-14 19:12:06.057414] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.866 [2024-02-14 19:12:06.057427] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.866 [2024-02-14 19:12:06.057430] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:28.866 [2024-02-14 19:12:06.057436] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.866 [2024-02-14 19:12:06.057438] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:28.866 [2024-02-14 19:12:06.057460] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 
doesn't exist now 00:11:28.866 19:12:06 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.866 19:12:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:28.866 19:12:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:28.866 19:12:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:28.866 19:12:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:28.866 19:12:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:28.866 19:12:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:28.866 19:12:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:28.866 19:12:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:28.866 19:12:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:28.866 19:12:06 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.866 19:12:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.124 19:12:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:29.124 "name": "Existed_Raid", 00:11:29.124 "uuid": "f150c7c0-cb6c-11ee-af6b-4feeebbbadda", 00:11:29.124 "strip_size_kb": 0, 00:11:29.124 "state": "configuring", 00:11:29.124 "raid_level": "raid1", 00:11:29.124 "superblock": true, 00:11:29.124 "num_base_bdevs": 4, 00:11:29.124 "num_base_bdevs_discovered": 0, 00:11:29.124 "num_base_bdevs_operational": 4, 00:11:29.124 "base_bdevs_list": [ 00:11:29.124 { 00:11:29.124 "name": "BaseBdev1", 00:11:29.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.124 "is_configured": false, 00:11:29.124 "data_offset": 0, 00:11:29.124 "data_size": 0 00:11:29.124 }, 00:11:29.124 { 00:11:29.124 "name": "BaseBdev2", 00:11:29.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.124 "is_configured": false, 00:11:29.124 "data_offset": 0, 00:11:29.124 "data_size": 0 00:11:29.124 }, 00:11:29.124 { 00:11:29.124 "name": "BaseBdev3", 00:11:29.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.124 "is_configured": false, 00:11:29.124 "data_offset": 0, 00:11:29.124 "data_size": 0 00:11:29.124 }, 00:11:29.124 { 00:11:29.124 "name": "BaseBdev4", 00:11:29.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.124 "is_configured": false, 00:11:29.124 "data_offset": 0, 00:11:29.124 "data_size": 0 00:11:29.124 } 00:11:29.124 ] 00:11:29.124 }' 00:11:29.124 19:12:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:29.124 19:12:06 -- common/autotest_common.sh@10 -- # set +x 00:11:29.124 19:12:06 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:29.382 [2024-02-14 19:12:06.693357] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.382 [2024-02-14 19:12:06.693374] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b6f9500 name Existed_Raid, state configuring 00:11:29.382 19:12:06 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:29.640 [2024-02-14 19:12:06.913372] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:29.640 [2024-02-14 19:12:06.913408] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:29.640 [2024-02-14 19:12:06.913411] 
bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:29.640 [2024-02-14 19:12:06.913417] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:29.640 [2024-02-14 19:12:06.913425] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:29.640 [2024-02-14 19:12:06.913431] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:29.640 [2024-02-14 19:12:06.913450] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:29.640 [2024-02-14 19:12:06.913455] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:29.640 19:12:06 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:29.899 [2024-02-14 19:12:07.134202] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.899 BaseBdev1 00:11:29.899 19:12:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:29.899 19:12:07 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:11:29.899 19:12:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:29.899 19:12:07 -- common/autotest_common.sh@887 -- # local i 00:11:29.899 19:12:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:29.899 19:12:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:29.899 19:12:07 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:30.159 19:12:07 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:30.159 [ 00:11:30.159 { 00:11:30.159 "name": "BaseBdev1", 00:11:30.159 "aliases": [ 00:11:30.159 "f1f4f756-cb6c-11ee-af6b-4feeebbbadda" 00:11:30.159 ], 00:11:30.159 "product_name": "Malloc disk", 00:11:30.159 "block_size": 512, 00:11:30.159 "num_blocks": 65536, 00:11:30.159 "uuid": "f1f4f756-cb6c-11ee-af6b-4feeebbbadda", 00:11:30.159 "assigned_rate_limits": { 00:11:30.159 "rw_ios_per_sec": 0, 00:11:30.159 "rw_mbytes_per_sec": 0, 00:11:30.159 "r_mbytes_per_sec": 0, 00:11:30.159 "w_mbytes_per_sec": 0 00:11:30.159 }, 00:11:30.159 "claimed": true, 00:11:30.159 "claim_type": "exclusive_write", 00:11:30.159 "zoned": false, 00:11:30.159 "supported_io_types": { 00:11:30.159 "read": true, 00:11:30.159 "write": true, 00:11:30.159 "unmap": true, 00:11:30.159 "write_zeroes": true, 00:11:30.159 "flush": true, 00:11:30.159 "reset": true, 00:11:30.159 "compare": false, 00:11:30.159 "compare_and_write": false, 00:11:30.159 "abort": true, 00:11:30.159 "nvme_admin": false, 00:11:30.159 "nvme_io": false 00:11:30.159 }, 00:11:30.159 "memory_domains": [ 00:11:30.159 { 00:11:30.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.159 "dma_device_type": 2 00:11:30.159 } 00:11:30.159 ], 00:11:30.159 "driver_specific": {} 00:11:30.159 } 00:11:30.159 ] 00:11:30.159 19:12:07 -- common/autotest_common.sh@893 -- # return 0 00:11:30.159 19:12:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:30.159 19:12:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:30.159 19:12:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:30.159 19:12:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:30.159 19:12:07 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:30.159 19:12:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:30.159 19:12:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:30.159 19:12:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:30.159 19:12:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:30.159 19:12:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:30.159 19:12:07 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.159 19:12:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.418 19:12:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:30.418 "name": "Existed_Raid", 00:11:30.418 "uuid": "f1d3657e-cb6c-11ee-af6b-4feeebbbadda", 00:11:30.418 "strip_size_kb": 0, 00:11:30.418 "state": "configuring", 00:11:30.418 "raid_level": "raid1", 00:11:30.418 "superblock": true, 00:11:30.418 "num_base_bdevs": 4, 00:11:30.418 "num_base_bdevs_discovered": 1, 00:11:30.418 "num_base_bdevs_operational": 4, 00:11:30.418 "base_bdevs_list": [ 00:11:30.418 { 00:11:30.418 "name": "BaseBdev1", 00:11:30.418 "uuid": "f1f4f756-cb6c-11ee-af6b-4feeebbbadda", 00:11:30.418 "is_configured": true, 00:11:30.418 "data_offset": 2048, 00:11:30.418 "data_size": 63488 00:11:30.418 }, 00:11:30.418 { 00:11:30.418 "name": "BaseBdev2", 00:11:30.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.418 "is_configured": false, 00:11:30.418 "data_offset": 0, 00:11:30.418 "data_size": 0 00:11:30.418 }, 00:11:30.418 { 00:11:30.418 "name": "BaseBdev3", 00:11:30.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.418 "is_configured": false, 00:11:30.418 "data_offset": 0, 00:11:30.418 "data_size": 0 00:11:30.418 }, 00:11:30.418 { 00:11:30.418 "name": "BaseBdev4", 00:11:30.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.418 "is_configured": false, 00:11:30.418 "data_offset": 0, 00:11:30.418 "data_size": 0 00:11:30.418 } 00:11:30.418 ] 00:11:30.418 }' 00:11:30.418 19:12:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:30.418 19:12:07 -- common/autotest_common.sh@10 -- # set +x 00:11:30.676 19:12:07 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:30.676 [2024-02-14 19:12:08.029384] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.676 [2024-02-14 19:12:08.029406] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b6f9500 name Existed_Raid, state configuring 00:11:30.676 19:12:08 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:11:30.676 19:12:08 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:30.935 19:12:08 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:31.193 BaseBdev1 00:11:31.193 19:12:08 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:11:31.193 19:12:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:11:31.193 19:12:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:31.193 19:12:08 -- common/autotest_common.sh@887 -- # local i 00:11:31.193 19:12:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:31.193 19:12:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:31.193 19:12:08 -- 
common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:31.193 19:12:08 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:31.452 [ 00:11:31.452 { 00:11:31.452 "name": "BaseBdev1", 00:11:31.452 "aliases": [ 00:11:31.452 "f2b06e48-cb6c-11ee-af6b-4feeebbbadda" 00:11:31.452 ], 00:11:31.452 "product_name": "Malloc disk", 00:11:31.452 "block_size": 512, 00:11:31.452 "num_blocks": 65536, 00:11:31.452 "uuid": "f2b06e48-cb6c-11ee-af6b-4feeebbbadda", 00:11:31.452 "assigned_rate_limits": { 00:11:31.452 "rw_ios_per_sec": 0, 00:11:31.452 "rw_mbytes_per_sec": 0, 00:11:31.452 "r_mbytes_per_sec": 0, 00:11:31.452 "w_mbytes_per_sec": 0 00:11:31.452 }, 00:11:31.452 "claimed": false, 00:11:31.452 "zoned": false, 00:11:31.452 "supported_io_types": { 00:11:31.452 "read": true, 00:11:31.452 "write": true, 00:11:31.452 "unmap": true, 00:11:31.452 "write_zeroes": true, 00:11:31.452 "flush": true, 00:11:31.452 "reset": true, 00:11:31.452 "compare": false, 00:11:31.452 "compare_and_write": false, 00:11:31.452 "abort": true, 00:11:31.452 "nvme_admin": false, 00:11:31.452 "nvme_io": false 00:11:31.452 }, 00:11:31.452 "memory_domains": [ 00:11:31.452 { 00:11:31.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.452 "dma_device_type": 2 00:11:31.452 } 00:11:31.452 ], 00:11:31.452 "driver_specific": {} 00:11:31.452 } 00:11:31.452 ] 00:11:31.711 19:12:08 -- common/autotest_common.sh@893 -- # return 0 00:11:31.711 19:12:08 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:31.970 [2024-02-14 19:12:09.133965] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.970 [2024-02-14 19:12:09.134358] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.970 [2024-02-14 19:12:09.134387] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.970 [2024-02-14 19:12:09.134391] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.970 [2024-02-14 19:12:09.134397] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.970 [2024-02-14 19:12:09.134400] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:31.970 [2024-02-14 19:12:09.134422] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:31.970 19:12:09 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:31.970 "name": "Existed_Raid", 00:11:31.970 "uuid": "f3263b54-cb6c-11ee-af6b-4feeebbbadda", 00:11:31.970 "strip_size_kb": 0, 00:11:31.970 "state": "configuring", 00:11:31.970 "raid_level": "raid1", 00:11:31.970 "superblock": true, 00:11:31.970 "num_base_bdevs": 4, 00:11:31.970 "num_base_bdevs_discovered": 1, 00:11:31.970 "num_base_bdevs_operational": 4, 00:11:31.970 "base_bdevs_list": [ 00:11:31.970 { 00:11:31.970 "name": "BaseBdev1", 00:11:31.970 "uuid": "f2b06e48-cb6c-11ee-af6b-4feeebbbadda", 00:11:31.970 "is_configured": true, 00:11:31.970 "data_offset": 2048, 00:11:31.970 "data_size": 63488 00:11:31.970 }, 00:11:31.970 { 00:11:31.970 "name": "BaseBdev2", 00:11:31.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.970 "is_configured": false, 00:11:31.970 "data_offset": 0, 00:11:31.970 "data_size": 0 00:11:31.970 }, 00:11:31.970 { 00:11:31.970 "name": "BaseBdev3", 00:11:31.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.970 "is_configured": false, 00:11:31.970 "data_offset": 0, 00:11:31.970 "data_size": 0 00:11:31.970 }, 00:11:31.970 { 00:11:31.970 "name": "BaseBdev4", 00:11:31.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.970 "is_configured": false, 00:11:31.970 "data_offset": 0, 00:11:31.970 "data_size": 0 00:11:31.970 } 00:11:31.970 ] 00:11:31.970 }' 00:11:31.970 19:12:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:31.970 19:12:09 -- common/autotest_common.sh@10 -- # set +x 00:11:32.253 19:12:09 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:32.513 [2024-02-14 19:12:09.846065] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.513 BaseBdev2 00:11:32.513 19:12:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:32.513 19:12:09 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:11:32.513 19:12:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:32.513 19:12:09 -- common/autotest_common.sh@887 -- # local i 00:11:32.513 19:12:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:32.513 19:12:09 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:32.513 19:12:09 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:32.771 19:12:10 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:33.029 [ 00:11:33.029 { 00:11:33.029 "name": "BaseBdev2", 00:11:33.029 "aliases": [ 00:11:33.029 "f392e062-cb6c-11ee-af6b-4feeebbbadda" 00:11:33.029 ], 00:11:33.029 "product_name": "Malloc disk", 00:11:33.029 "block_size": 512, 00:11:33.029 "num_blocks": 65536, 00:11:33.029 "uuid": "f392e062-cb6c-11ee-af6b-4feeebbbadda", 00:11:33.029 "assigned_rate_limits": { 00:11:33.029 "rw_ios_per_sec": 0, 00:11:33.029 "rw_mbytes_per_sec": 0, 00:11:33.029 "r_mbytes_per_sec": 0, 00:11:33.029 "w_mbytes_per_sec": 0 00:11:33.029 }, 00:11:33.029 "claimed": true, 
00:11:33.029 "claim_type": "exclusive_write", 00:11:33.029 "zoned": false, 00:11:33.029 "supported_io_types": { 00:11:33.029 "read": true, 00:11:33.029 "write": true, 00:11:33.029 "unmap": true, 00:11:33.029 "write_zeroes": true, 00:11:33.029 "flush": true, 00:11:33.029 "reset": true, 00:11:33.029 "compare": false, 00:11:33.029 "compare_and_write": false, 00:11:33.029 "abort": true, 00:11:33.029 "nvme_admin": false, 00:11:33.029 "nvme_io": false 00:11:33.029 }, 00:11:33.029 "memory_domains": [ 00:11:33.029 { 00:11:33.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.029 "dma_device_type": 2 00:11:33.029 } 00:11:33.029 ], 00:11:33.029 "driver_specific": {} 00:11:33.029 } 00:11:33.029 ] 00:11:33.029 19:12:10 -- common/autotest_common.sh@893 -- # return 0 00:11:33.029 19:12:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:33.029 19:12:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:33.029 19:12:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.030 19:12:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:33.030 19:12:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:33.030 19:12:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:33.030 19:12:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:33.030 19:12:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:33.030 19:12:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:33.030 19:12:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:33.030 19:12:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:33.030 19:12:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:33.030 19:12:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.030 19:12:10 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.288 19:12:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:33.288 "name": "Existed_Raid", 00:11:33.288 "uuid": "f3263b54-cb6c-11ee-af6b-4feeebbbadda", 00:11:33.288 "strip_size_kb": 0, 00:11:33.288 "state": "configuring", 00:11:33.288 "raid_level": "raid1", 00:11:33.288 "superblock": true, 00:11:33.288 "num_base_bdevs": 4, 00:11:33.288 "num_base_bdevs_discovered": 2, 00:11:33.288 "num_base_bdevs_operational": 4, 00:11:33.288 "base_bdevs_list": [ 00:11:33.289 { 00:11:33.289 "name": "BaseBdev1", 00:11:33.289 "uuid": "f2b06e48-cb6c-11ee-af6b-4feeebbbadda", 00:11:33.289 "is_configured": true, 00:11:33.289 "data_offset": 2048, 00:11:33.289 "data_size": 63488 00:11:33.289 }, 00:11:33.289 { 00:11:33.289 "name": "BaseBdev2", 00:11:33.289 "uuid": "f392e062-cb6c-11ee-af6b-4feeebbbadda", 00:11:33.289 "is_configured": true, 00:11:33.289 "data_offset": 2048, 00:11:33.289 "data_size": 63488 00:11:33.289 }, 00:11:33.289 { 00:11:33.289 "name": "BaseBdev3", 00:11:33.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.289 "is_configured": false, 00:11:33.289 "data_offset": 0, 00:11:33.289 "data_size": 0 00:11:33.289 }, 00:11:33.289 { 00:11:33.289 "name": "BaseBdev4", 00:11:33.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.289 "is_configured": false, 00:11:33.289 "data_offset": 0, 00:11:33.289 "data_size": 0 00:11:33.289 } 00:11:33.289 ] 00:11:33.289 }' 00:11:33.289 19:12:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:33.289 19:12:10 -- common/autotest_common.sh@10 -- # set +x 00:11:33.547 19:12:10 -- bdev/bdev_raid.sh@256 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:33.547 [2024-02-14 19:12:10.946061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.547 BaseBdev3 00:11:33.547 19:12:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:11:33.547 19:12:10 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:11:33.547 19:12:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:33.547 19:12:10 -- common/autotest_common.sh@887 -- # local i 00:11:33.547 19:12:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:33.547 19:12:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:33.547 19:12:10 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:33.805 19:12:11 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:34.063 [ 00:11:34.063 { 00:11:34.063 "name": "BaseBdev3", 00:11:34.063 "aliases": [ 00:11:34.063 "f43aba3f-cb6c-11ee-af6b-4feeebbbadda" 00:11:34.063 ], 00:11:34.063 "product_name": "Malloc disk", 00:11:34.063 "block_size": 512, 00:11:34.063 "num_blocks": 65536, 00:11:34.063 "uuid": "f43aba3f-cb6c-11ee-af6b-4feeebbbadda", 00:11:34.063 "assigned_rate_limits": { 00:11:34.063 "rw_ios_per_sec": 0, 00:11:34.063 "rw_mbytes_per_sec": 0, 00:11:34.064 "r_mbytes_per_sec": 0, 00:11:34.064 "w_mbytes_per_sec": 0 00:11:34.064 }, 00:11:34.064 "claimed": true, 00:11:34.064 "claim_type": "exclusive_write", 00:11:34.064 "zoned": false, 00:11:34.064 "supported_io_types": { 00:11:34.064 "read": true, 00:11:34.064 "write": true, 00:11:34.064 "unmap": true, 00:11:34.064 "write_zeroes": true, 00:11:34.064 "flush": true, 00:11:34.064 "reset": true, 00:11:34.064 "compare": false, 00:11:34.064 "compare_and_write": false, 00:11:34.064 "abort": true, 00:11:34.064 "nvme_admin": false, 00:11:34.064 "nvme_io": false 00:11:34.064 }, 00:11:34.064 "memory_domains": [ 00:11:34.064 { 00:11:34.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.064 "dma_device_type": 2 00:11:34.064 } 00:11:34.064 ], 00:11:34.064 "driver_specific": {} 00:11:34.064 } 00:11:34.064 ] 00:11:34.064 19:12:11 -- common/autotest_common.sh@893 -- # return 0 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.064 19:12:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:34.323 19:12:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:34.323 "name": "Existed_Raid", 00:11:34.323 "uuid": "f3263b54-cb6c-11ee-af6b-4feeebbbadda", 00:11:34.323 "strip_size_kb": 0, 00:11:34.323 "state": "configuring", 00:11:34.323 "raid_level": "raid1", 00:11:34.323 "superblock": true, 00:11:34.323 "num_base_bdevs": 4, 00:11:34.323 "num_base_bdevs_discovered": 3, 00:11:34.323 "num_base_bdevs_operational": 4, 00:11:34.323 "base_bdevs_list": [ 00:11:34.323 { 00:11:34.323 "name": "BaseBdev1", 00:11:34.323 "uuid": "f2b06e48-cb6c-11ee-af6b-4feeebbbadda", 00:11:34.323 "is_configured": true, 00:11:34.323 "data_offset": 2048, 00:11:34.323 "data_size": 63488 00:11:34.323 }, 00:11:34.323 { 00:11:34.323 "name": "BaseBdev2", 00:11:34.323 "uuid": "f392e062-cb6c-11ee-af6b-4feeebbbadda", 00:11:34.323 "is_configured": true, 00:11:34.323 "data_offset": 2048, 00:11:34.323 "data_size": 63488 00:11:34.323 }, 00:11:34.323 { 00:11:34.323 "name": "BaseBdev3", 00:11:34.323 "uuid": "f43aba3f-cb6c-11ee-af6b-4feeebbbadda", 00:11:34.323 "is_configured": true, 00:11:34.323 "data_offset": 2048, 00:11:34.323 "data_size": 63488 00:11:34.323 }, 00:11:34.323 { 00:11:34.323 "name": "BaseBdev4", 00:11:34.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.323 "is_configured": false, 00:11:34.323 "data_offset": 0, 00:11:34.323 "data_size": 0 00:11:34.323 } 00:11:34.323 ] 00:11:34.323 }' 00:11:34.323 19:12:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:34.323 19:12:11 -- common/autotest_common.sh@10 -- # set +x 00:11:34.583 19:12:11 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:11:34.583 [2024-02-14 19:12:11.978085] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:34.583 [2024-02-14 19:12:11.978152] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b6f9a00 00:11:34.583 [2024-02-14 19:12:11.978156] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:34.583 [2024-02-14 19:12:11.978171] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b75cec0 00:11:34.583 [2024-02-14 19:12:11.978202] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b6f9a00 00:11:34.583 [2024-02-14 19:12:11.978205] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b6f9a00 00:11:34.583 [2024-02-14 19:12:11.978219] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.583 BaseBdev4 00:11:34.583 19:12:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:11:34.583 19:12:11 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:11:34.583 19:12:11 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:34.583 19:12:11 -- common/autotest_common.sh@887 -- # local i 00:11:34.583 19:12:11 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:34.583 19:12:11 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:34.583 19:12:11 -- common/autotest_common.sh@890 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:34.842 19:12:12 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:35.100 [ 00:11:35.101 { 00:11:35.101 "name": "BaseBdev4", 00:11:35.101 "aliases": [ 00:11:35.101 
"f4d8337f-cb6c-11ee-af6b-4feeebbbadda" 00:11:35.101 ], 00:11:35.101 "product_name": "Malloc disk", 00:11:35.101 "block_size": 512, 00:11:35.101 "num_blocks": 65536, 00:11:35.101 "uuid": "f4d8337f-cb6c-11ee-af6b-4feeebbbadda", 00:11:35.101 "assigned_rate_limits": { 00:11:35.101 "rw_ios_per_sec": 0, 00:11:35.101 "rw_mbytes_per_sec": 0, 00:11:35.101 "r_mbytes_per_sec": 0, 00:11:35.101 "w_mbytes_per_sec": 0 00:11:35.101 }, 00:11:35.101 "claimed": true, 00:11:35.101 "claim_type": "exclusive_write", 00:11:35.101 "zoned": false, 00:11:35.101 "supported_io_types": { 00:11:35.101 "read": true, 00:11:35.101 "write": true, 00:11:35.101 "unmap": true, 00:11:35.101 "write_zeroes": true, 00:11:35.101 "flush": true, 00:11:35.101 "reset": true, 00:11:35.101 "compare": false, 00:11:35.101 "compare_and_write": false, 00:11:35.101 "abort": true, 00:11:35.101 "nvme_admin": false, 00:11:35.101 "nvme_io": false 00:11:35.101 }, 00:11:35.101 "memory_domains": [ 00:11:35.101 { 00:11:35.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.101 "dma_device_type": 2 00:11:35.101 } 00:11:35.101 ], 00:11:35.101 "driver_specific": {} 00:11:35.101 } 00:11:35.101 ] 00:11:35.101 19:12:12 -- common/autotest_common.sh@893 -- # return 0 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.101 19:12:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.360 19:12:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:35.360 "name": "Existed_Raid", 00:11:35.360 "uuid": "f3263b54-cb6c-11ee-af6b-4feeebbbadda", 00:11:35.360 "strip_size_kb": 0, 00:11:35.360 "state": "online", 00:11:35.360 "raid_level": "raid1", 00:11:35.360 "superblock": true, 00:11:35.360 "num_base_bdevs": 4, 00:11:35.360 "num_base_bdevs_discovered": 4, 00:11:35.360 "num_base_bdevs_operational": 4, 00:11:35.360 "base_bdevs_list": [ 00:11:35.360 { 00:11:35.360 "name": "BaseBdev1", 00:11:35.360 "uuid": "f2b06e48-cb6c-11ee-af6b-4feeebbbadda", 00:11:35.360 "is_configured": true, 00:11:35.360 "data_offset": 2048, 00:11:35.360 "data_size": 63488 00:11:35.360 }, 00:11:35.360 { 00:11:35.360 "name": "BaseBdev2", 00:11:35.360 "uuid": "f392e062-cb6c-11ee-af6b-4feeebbbadda", 00:11:35.360 "is_configured": true, 00:11:35.360 "data_offset": 2048, 00:11:35.360 "data_size": 63488 00:11:35.360 }, 00:11:35.360 { 00:11:35.360 "name": "BaseBdev3", 00:11:35.360 "uuid": "f43aba3f-cb6c-11ee-af6b-4feeebbbadda", 00:11:35.360 "is_configured": true, 00:11:35.360 "data_offset": 2048, 00:11:35.360 "data_size": 63488 00:11:35.360 }, 
00:11:35.360 { 00:11:35.360 "name": "BaseBdev4", 00:11:35.360 "uuid": "f4d8337f-cb6c-11ee-af6b-4feeebbbadda", 00:11:35.360 "is_configured": true, 00:11:35.360 "data_offset": 2048, 00:11:35.360 "data_size": 63488 00:11:35.360 } 00:11:35.360 ] 00:11:35.360 }' 00:11:35.360 19:12:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:35.360 19:12:12 -- common/autotest_common.sh@10 -- # set +x 00:11:35.619 19:12:12 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:35.619 [2024-02-14 19:12:13.026066] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@196 -- # return 0 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:35.879 "name": "Existed_Raid", 00:11:35.879 "uuid": "f3263b54-cb6c-11ee-af6b-4feeebbbadda", 00:11:35.879 "strip_size_kb": 0, 00:11:35.879 "state": "online", 00:11:35.879 "raid_level": "raid1", 00:11:35.879 "superblock": true, 00:11:35.879 "num_base_bdevs": 4, 00:11:35.879 "num_base_bdevs_discovered": 3, 00:11:35.879 "num_base_bdevs_operational": 3, 00:11:35.879 "base_bdevs_list": [ 00:11:35.879 { 00:11:35.879 "name": null, 00:11:35.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.879 "is_configured": false, 00:11:35.879 "data_offset": 2048, 00:11:35.879 "data_size": 63488 00:11:35.879 }, 00:11:35.879 { 00:11:35.879 "name": "BaseBdev2", 00:11:35.879 "uuid": "f392e062-cb6c-11ee-af6b-4feeebbbadda", 00:11:35.879 "is_configured": true, 00:11:35.879 "data_offset": 2048, 00:11:35.879 "data_size": 63488 00:11:35.879 }, 00:11:35.879 { 00:11:35.879 "name": "BaseBdev3", 00:11:35.879 "uuid": "f43aba3f-cb6c-11ee-af6b-4feeebbbadda", 00:11:35.879 "is_configured": true, 00:11:35.879 "data_offset": 2048, 00:11:35.879 "data_size": 63488 00:11:35.879 }, 00:11:35.879 { 00:11:35.879 "name": "BaseBdev4", 00:11:35.879 "uuid": "f4d8337f-cb6c-11ee-af6b-4feeebbbadda", 00:11:35.879 "is_configured": true, 00:11:35.879 "data_offset": 2048, 00:11:35.879 "data_size": 63488 00:11:35.879 } 00:11:35.879 ] 00:11:35.879 }' 00:11:35.879 19:12:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:35.879 19:12:13 -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.138 19:12:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:36.138 19:12:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:36.138 19:12:13 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.138 19:12:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:36.397 19:12:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:36.397 19:12:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.397 19:12:13 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:36.657 [2024-02-14 19:12:13.970697] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.657 19:12:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:36.657 19:12:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:36.657 19:12:13 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.657 19:12:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:36.916 19:12:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:36.916 19:12:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.916 19:12:14 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:36.916 [2024-02-14 19:12:14.311293] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:36.916 19:12:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:36.916 19:12:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:36.916 19:12:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:36.916 19:12:14 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.175 19:12:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:37.175 19:12:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.175 19:12:14 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:11:37.434 [2024-02-14 19:12:14.727930] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:37.434 [2024-02-14 19:12:14.727946] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.434 [2024-02-14 19:12:14.727954] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.434 [2024-02-14 19:12:14.732549] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.435 [2024-02-14 19:12:14.732561] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b6f9a00 name Existed_Raid, state offline 00:11:37.435 19:12:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:37.435 19:12:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:37.435 19:12:14 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.435 19:12:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.694 19:12:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:37.694 19:12:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:37.694 19:12:14 -- bdev/bdev_raid.sh@287 -- # killprocess 54302 
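For reference, every verify_raid_bdev_state check in the run above reduces to the same steps: fetch all raid bdevs over the test RPC socket, select the array by name with jq, and compare the reported state and num_base_bdevs_discovered against the expected values. A minimal sketch of that check, assuming the same rpc.py path and socket as this run; the helper name check_raid_state is illustrative only and not part of the suite.

    # --- illustrative sketch, not part of the captured output ----------------
    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Fetch the named raid bdev and compare its reported state and the number
    # of discovered base bdevs against the expected values.
    check_raid_state() {
        local name=$1 expected_state=$2 expected_discovered=$3
        local info

        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$name\")")

        [[ $(jq -r '.state' <<<"$info") == "$expected_state" ]] &&
        [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq "$expected_discovered" ]]
    }

    # Example: the array should stay online with 3 of 4 base bdevs present.
    check_raid_state Existed_Raid online 3
    # --------------------------------------------------------------------------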
00:11:37.694 19:12:14 -- common/autotest_common.sh@924 -- # '[' -z 54302 ']' 00:11:37.694 19:12:14 -- common/autotest_common.sh@928 -- # kill -0 54302 00:11:37.694 19:12:14 -- common/autotest_common.sh@929 -- # uname 00:11:37.694 19:12:14 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:11:37.694 19:12:14 -- common/autotest_common.sh@932 -- # ps -c -o command 54302 00:11:37.694 19:12:14 -- common/autotest_common.sh@932 -- # tail -1 00:11:37.694 19:12:14 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:11:37.694 19:12:14 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:11:37.694 killing process with pid 54302 00:11:37.694 19:12:14 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 54302' 00:11:37.694 19:12:14 -- common/autotest_common.sh@943 -- # kill 54302 00:11:37.694 [2024-02-14 19:12:14.928142] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.694 [2024-02-14 19:12:14.928174] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:37.694 19:12:14 -- common/autotest_common.sh@948 -- # wait 54302 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:37.694 00:11:37.694 real 0m10.167s 00:11:37.694 user 0m17.548s 00:11:37.694 sys 0m2.054s 00:11:37.694 ************************************ 00:11:37.694 END TEST raid_state_function_test_sb 00:11:37.694 ************************************ 00:11:37.694 19:12:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:37.694 19:12:15 -- common/autotest_common.sh@10 -- # set +x 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:37.694 19:12:15 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:11:37.694 19:12:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:37.694 19:12:15 -- common/autotest_common.sh@10 -- # set +x 00:11:37.694 ************************************ 00:11:37.694 START TEST raid_superblock_test 00:11:37.694 ************************************ 00:11:37.694 19:12:15 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid1 4 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:11:37.694 19:12:15 -- bdev/bdev_raid.sh@357 -- # raid_pid=54575 00:11:37.953 19:12:15 -- bdev/bdev_raid.sh@358 -- # waitforlisten 54575 /var/tmp/spdk-raid.sock 00:11:37.953 19:12:15 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 
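For orientation, the raid_superblock_test launched here drives the freshly started bdev_svc entirely through rpc.py on /var/tmp/spdk-raid.sock: four malloc bdevs are wrapped in passthru bdevs and assembled into a raid1 array with an on-disk superblock. A hedged sketch of that setup flow, using only RPC calls that appear later in this log; names, sizes and UUIDs mirror the run, while the loop is an editorial condensation (error handling and the waitforbdev polling are omitted).

    # --- illustrative sketch, not part of the captured output ----------------
    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    for i in 1 2 3 4; do
        # 32 MB backing device with 512-byte blocks.
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        # Passthru layer so base bdevs can later be removed and re-added by name.
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    # Assemble a raid1 array with an on-disk superblock (-s) and read it back.
    "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    "$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid'
    # --------------------------------------------------------------------------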
00:11:37.953 19:12:15 -- common/autotest_common.sh@817 -- # '[' -z 54575 ']' 00:11:37.953 19:12:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:37.953 19:12:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:37.953 19:12:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:37.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:37.953 19:12:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:37.953 19:12:15 -- common/autotest_common.sh@10 -- # set +x 00:11:37.953 [2024-02-14 19:12:15.119369] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:11:37.953 [2024-02-14 19:12:15.119581] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:38.213 EAL: TSC is not safe to use in SMP mode 00:11:38.213 EAL: TSC is not invariant 00:11:38.213 [2024-02-14 19:12:15.536279] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.213 [2024-02-14 19:12:15.611329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.213 [2024-02-14 19:12:15.611757] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.213 [2024-02-14 19:12:15.611761] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.781 19:12:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:38.781 19:12:16 -- common/autotest_common.sh@850 -- # return 0 00:11:38.781 19:12:16 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:11:38.782 19:12:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:38.782 19:12:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:11:38.782 19:12:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:11:38.782 19:12:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:38.782 19:12:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:38.782 19:12:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:38.782 19:12:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:38.782 19:12:16 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:39.041 malloc1 00:11:39.041 19:12:16 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:39.041 [2024-02-14 19:12:16.422078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:39.041 [2024-02-14 19:12:16.422122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.041 [2024-02-14 19:12:16.422645] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b134780 00:11:39.041 [2024-02-14 19:12:16.422663] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.041 [2024-02-14 19:12:16.423293] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.041 [2024-02-14 19:12:16.423314] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:39.041 pt1 00:11:39.041 19:12:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:39.041 19:12:16 -- bdev/bdev_raid.sh@361 -- # (( i <= 
num_base_bdevs )) 00:11:39.041 19:12:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:11:39.041 19:12:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:11:39.041 19:12:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:39.041 19:12:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.041 19:12:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.041 19:12:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.041 19:12:16 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:39.300 malloc2 00:11:39.300 19:12:16 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:39.559 [2024-02-14 19:12:16.822084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:39.559 [2024-02-14 19:12:16.822125] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.559 [2024-02-14 19:12:16.822164] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b134c80 00:11:39.559 [2024-02-14 19:12:16.822171] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.559 [2024-02-14 19:12:16.822579] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.559 [2024-02-14 19:12:16.822599] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:39.559 pt2 00:11:39.559 19:12:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:39.559 19:12:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:39.559 19:12:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:11:39.559 19:12:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:11:39.559 19:12:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:39.559 19:12:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.559 19:12:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.559 19:12:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.559 19:12:16 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:39.818 malloc3 00:11:39.818 19:12:17 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:39.818 [2024-02-14 19:12:17.218100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:39.818 [2024-02-14 19:12:17.218158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.818 [2024-02-14 19:12:17.218178] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b135180 00:11:39.818 [2024-02-14 19:12:17.218184] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.818 [2024-02-14 19:12:17.218590] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.818 [2024-02-14 19:12:17.218610] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:39.818 pt3 00:11:39.818 19:12:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:39.818 19:12:17 -- bdev/bdev_raid.sh@361 -- # (( i <= 
num_base_bdevs )) 00:11:39.818 19:12:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:11:39.818 19:12:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:11:39.818 19:12:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:39.818 19:12:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.818 19:12:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.818 19:12:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.819 19:12:17 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:11:40.077 malloc4 00:11:40.077 19:12:17 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:40.337 [2024-02-14 19:12:17.606111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:40.337 [2024-02-14 19:12:17.606172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.337 [2024-02-14 19:12:17.606194] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b135680 00:11:40.337 [2024-02-14 19:12:17.606200] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.337 [2024-02-14 19:12:17.606607] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.337 [2024-02-14 19:12:17.606628] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:40.337 pt4 00:11:40.337 19:12:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:40.337 19:12:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:40.337 19:12:17 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:11:40.597 [2024-02-14 19:12:17.778114] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:40.597 [2024-02-14 19:12:17.778479] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:40.597 [2024-02-14 19:12:17.778489] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:40.597 [2024-02-14 19:12:17.778497] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:40.597 [2024-02-14 19:12:17.778540] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b135900 00:11:40.597 [2024-02-14 19:12:17.778544] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:40.597 [2024-02-14 19:12:17.778585] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b197e20 00:11:40.597 [2024-02-14 19:12:17.778633] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b135900 00:11:40.597 [2024-02-14 19:12:17.778636] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b135900 00:11:40.597 [2024-02-14 19:12:17.778652] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.597 19:12:17 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:40.597 19:12:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:40.597 19:12:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:40.597 19:12:17 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:11:40.597 19:12:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:40.597 19:12:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:40.597 19:12:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:40.597 19:12:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:40.597 19:12:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:40.597 19:12:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:40.597 19:12:17 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.597 19:12:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.855 19:12:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:40.855 "name": "raid_bdev1", 00:11:40.855 "uuid": "f84d3979-cb6c-11ee-af6b-4feeebbbadda", 00:11:40.855 "strip_size_kb": 0, 00:11:40.855 "state": "online", 00:11:40.855 "raid_level": "raid1", 00:11:40.855 "superblock": true, 00:11:40.855 "num_base_bdevs": 4, 00:11:40.855 "num_base_bdevs_discovered": 4, 00:11:40.855 "num_base_bdevs_operational": 4, 00:11:40.855 "base_bdevs_list": [ 00:11:40.855 { 00:11:40.855 "name": "pt1", 00:11:40.855 "uuid": "338411b5-db4f-2f5b-8fff-c2de80e811e4", 00:11:40.855 "is_configured": true, 00:11:40.855 "data_offset": 2048, 00:11:40.855 "data_size": 63488 00:11:40.855 }, 00:11:40.855 { 00:11:40.855 "name": "pt2", 00:11:40.855 "uuid": "ccf613ed-b950-df56-926a-eadc3658063e", 00:11:40.855 "is_configured": true, 00:11:40.855 "data_offset": 2048, 00:11:40.855 "data_size": 63488 00:11:40.855 }, 00:11:40.855 { 00:11:40.855 "name": "pt3", 00:11:40.855 "uuid": "52af6f6b-ae0b-2a52-adcd-56eb229c74be", 00:11:40.855 "is_configured": true, 00:11:40.855 "data_offset": 2048, 00:11:40.855 "data_size": 63488 00:11:40.855 }, 00:11:40.855 { 00:11:40.855 "name": "pt4", 00:11:40.855 "uuid": "5e2f2be0-4e3c-3c51-9f88-c6c3560b23e7", 00:11:40.855 "is_configured": true, 00:11:40.855 "data_offset": 2048, 00:11:40.855 "data_size": 63488 00:11:40.855 } 00:11:40.855 ] 00:11:40.855 }' 00:11:40.855 19:12:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:40.855 19:12:18 -- common/autotest_common.sh@10 -- # set +x 00:11:41.114 19:12:18 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:41.114 19:12:18 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:11:41.114 [2024-02-14 19:12:18.466143] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.114 19:12:18 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f84d3979-cb6c-11ee-af6b-4feeebbbadda 00:11:41.114 19:12:18 -- bdev/bdev_raid.sh@380 -- # '[' -z f84d3979-cb6c-11ee-af6b-4feeebbbadda ']' 00:11:41.114 19:12:18 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:41.373 [2024-02-14 19:12:18.626114] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:41.373 [2024-02-14 19:12:18.626126] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.373 [2024-02-14 19:12:18.626137] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.373 [2024-02-14 19:12:18.626150] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.373 [2024-02-14 19:12:18.626153] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x82b135900 name raid_bdev1, state offline 00:11:41.373 19:12:18 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.373 19:12:18 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:11:41.632 19:12:18 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:11:41.632 19:12:18 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:11:41.632 19:12:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:41.632 19:12:18 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:41.904 19:12:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:41.904 19:12:19 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:41.904 19:12:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:41.904 19:12:19 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:42.174 19:12:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:42.174 19:12:19 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:11:42.433 19:12:19 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:42.433 19:12:19 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:42.691 19:12:19 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:11:42.691 19:12:19 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:11:42.691 19:12:19 -- common/autotest_common.sh@638 -- # local es=0 00:11:42.691 19:12:19 -- common/autotest_common.sh@640 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:11:42.691 19:12:19 -- common/autotest_common.sh@626 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.692 19:12:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:42.692 19:12:19 -- common/autotest_common.sh@630 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.692 19:12:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:42.692 19:12:19 -- common/autotest_common.sh@632 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.692 19:12:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:42.692 19:12:19 -- common/autotest_common.sh@632 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.692 19:12:19 -- common/autotest_common.sh@632 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:42.692 19:12:19 -- common/autotest_common.sh@641 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:11:42.951 [2024-02-14 19:12:20.134159] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:42.951 [2024-02-14 19:12:20.134608] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:42.951 [2024-02-14 19:12:20.134618] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:42.951 [2024-02-14 19:12:20.134624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:42.951 [2024-02-14 19:12:20.134649] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:11:42.951 [2024-02-14 19:12:20.134681] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:11:42.951 [2024-02-14 19:12:20.134690] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:11:42.951 [2024-02-14 19:12:20.134697] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:11:42.951 [2024-02-14 19:12:20.134704] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.951 [2024-02-14 19:12:20.134707] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b135680 name raid_bdev1, state configuring 00:11:42.951 request: 00:11:42.951 { 00:11:42.951 "name": "raid_bdev1", 00:11:42.951 "raid_level": "raid1", 00:11:42.951 "base_bdevs": [ 00:11:42.951 "malloc1", 00:11:42.951 "malloc2", 00:11:42.951 "malloc3", 00:11:42.951 "malloc4" 00:11:42.951 ], 00:11:42.951 "superblock": false, 00:11:42.951 "method": "bdev_raid_create", 00:11:42.951 "req_id": 1 00:11:42.951 } 00:11:42.951 Got JSON-RPC error response 00:11:42.951 response: 00:11:42.951 { 00:11:42.951 "code": -17, 00:11:42.951 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:42.951 } 00:11:42.951 19:12:20 -- common/autotest_common.sh@641 -- # es=1 00:11:42.951 19:12:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:42.951 19:12:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:42.951 19:12:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:42.951 19:12:20 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:11:42.951 19:12:20 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:43.211 [2024-02-14 19:12:20.586168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:43.211 [2024-02-14 19:12:20.586207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.211 [2024-02-14 19:12:20.586246] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b135180 00:11:43.211 [2024-02-14 19:12:20.586252] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.211 [2024-02-14 19:12:20.586691] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.211 [2024-02-14 19:12:20.586712] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:43.211 [2024-02-14 19:12:20.586728] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:11:43.211 [2024-02-14 19:12:20.586737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:43.211 pt1 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@412 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.211 19:12:20 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.470 19:12:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:43.470 "name": "raid_bdev1", 00:11:43.470 "uuid": "f84d3979-cb6c-11ee-af6b-4feeebbbadda", 00:11:43.470 "strip_size_kb": 0, 00:11:43.470 "state": "configuring", 00:11:43.470 "raid_level": "raid1", 00:11:43.470 "superblock": true, 00:11:43.470 "num_base_bdevs": 4, 00:11:43.470 "num_base_bdevs_discovered": 1, 00:11:43.470 "num_base_bdevs_operational": 4, 00:11:43.470 "base_bdevs_list": [ 00:11:43.470 { 00:11:43.470 "name": "pt1", 00:11:43.470 "uuid": "338411b5-db4f-2f5b-8fff-c2de80e811e4", 00:11:43.470 "is_configured": true, 00:11:43.470 "data_offset": 2048, 00:11:43.470 "data_size": 63488 00:11:43.470 }, 00:11:43.470 { 00:11:43.470 "name": null, 00:11:43.470 "uuid": "ccf613ed-b950-df56-926a-eadc3658063e", 00:11:43.470 "is_configured": false, 00:11:43.470 "data_offset": 2048, 00:11:43.470 "data_size": 63488 00:11:43.470 }, 00:11:43.470 { 00:11:43.470 "name": null, 00:11:43.470 "uuid": "52af6f6b-ae0b-2a52-adcd-56eb229c74be", 00:11:43.470 "is_configured": false, 00:11:43.470 "data_offset": 2048, 00:11:43.470 "data_size": 63488 00:11:43.470 }, 00:11:43.470 { 00:11:43.470 "name": null, 00:11:43.470 "uuid": "5e2f2be0-4e3c-3c51-9f88-c6c3560b23e7", 00:11:43.470 "is_configured": false, 00:11:43.470 "data_offset": 2048, 00:11:43.470 "data_size": 63488 00:11:43.470 } 00:11:43.470 ] 00:11:43.470 }' 00:11:43.470 19:12:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:43.470 19:12:20 -- common/autotest_common.sh@10 -- # set +x 00:11:43.729 19:12:21 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:11:43.729 19:12:21 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:43.989 [2024-02-14 19:12:21.290183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:43.989 [2024-02-14 19:12:21.290225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.989 [2024-02-14 19:12:21.290247] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b134780 00:11:43.989 [2024-02-14 19:12:21.290254] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.989 [2024-02-14 19:12:21.290326] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.989 [2024-02-14 19:12:21.290333] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:43.989 [2024-02-14 19:12:21.290347] 
bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:43.989 [2024-02-14 19:12:21.290353] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:43.989 pt2 00:11:43.989 19:12:21 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:44.248 [2024-02-14 19:12:21.530183] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:44.248 19:12:21 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:44.248 19:12:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:44.248 19:12:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:44.248 19:12:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:44.248 19:12:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:44.248 19:12:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:44.248 19:12:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:44.248 19:12:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:44.248 19:12:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:44.248 19:12:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:44.248 19:12:21 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.248 19:12:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.507 19:12:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:44.507 "name": "raid_bdev1", 00:11:44.507 "uuid": "f84d3979-cb6c-11ee-af6b-4feeebbbadda", 00:11:44.507 "strip_size_kb": 0, 00:11:44.507 "state": "configuring", 00:11:44.507 "raid_level": "raid1", 00:11:44.507 "superblock": true, 00:11:44.507 "num_base_bdevs": 4, 00:11:44.507 "num_base_bdevs_discovered": 1, 00:11:44.507 "num_base_bdevs_operational": 4, 00:11:44.507 "base_bdevs_list": [ 00:11:44.507 { 00:11:44.507 "name": "pt1", 00:11:44.507 "uuid": "338411b5-db4f-2f5b-8fff-c2de80e811e4", 00:11:44.507 "is_configured": true, 00:11:44.507 "data_offset": 2048, 00:11:44.507 "data_size": 63488 00:11:44.507 }, 00:11:44.507 { 00:11:44.507 "name": null, 00:11:44.507 "uuid": "ccf613ed-b950-df56-926a-eadc3658063e", 00:11:44.507 "is_configured": false, 00:11:44.507 "data_offset": 2048, 00:11:44.507 "data_size": 63488 00:11:44.507 }, 00:11:44.507 { 00:11:44.507 "name": null, 00:11:44.507 "uuid": "52af6f6b-ae0b-2a52-adcd-56eb229c74be", 00:11:44.507 "is_configured": false, 00:11:44.507 "data_offset": 2048, 00:11:44.507 "data_size": 63488 00:11:44.507 }, 00:11:44.507 { 00:11:44.507 "name": null, 00:11:44.507 "uuid": "5e2f2be0-4e3c-3c51-9f88-c6c3560b23e7", 00:11:44.507 "is_configured": false, 00:11:44.507 "data_offset": 2048, 00:11:44.507 "data_size": 63488 00:11:44.507 } 00:11:44.507 ] 00:11:44.507 }' 00:11:44.507 19:12:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:44.507 19:12:21 -- common/autotest_common.sh@10 -- # set +x 00:11:44.766 19:12:21 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:11:44.766 19:12:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:44.766 19:12:21 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:44.766 [2024-02-14 19:12:22.150197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:44.766 [2024-02-14 19:12:22.150231] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.766 [2024-02-14 19:12:22.150248] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b134780 00:11:44.766 [2024-02-14 19:12:22.150255] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.766 [2024-02-14 19:12:22.150318] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.766 [2024-02-14 19:12:22.150325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:44.766 [2024-02-14 19:12:22.150339] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:44.766 [2024-02-14 19:12:22.150344] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:44.766 pt2 00:11:44.766 19:12:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:11:44.766 19:12:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:44.766 19:12:22 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:45.026 [2024-02-14 19:12:22.310199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:45.026 [2024-02-14 19:12:22.310226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.026 [2024-02-14 19:12:22.310255] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b135b80 00:11:45.026 [2024-02-14 19:12:22.310261] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.026 [2024-02-14 19:12:22.310308] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.026 [2024-02-14 19:12:22.310315] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:45.026 [2024-02-14 19:12:22.310326] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:11:45.026 [2024-02-14 19:12:22.310331] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:45.026 pt3 00:11:45.026 19:12:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:11:45.026 19:12:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:45.026 19:12:22 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:45.286 [2024-02-14 19:12:22.470202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:45.286 [2024-02-14 19:12:22.470228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.286 [2024-02-14 19:12:22.470239] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b135900 00:11:45.286 [2024-02-14 19:12:22.470245] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.286 [2024-02-14 19:12:22.470307] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.286 [2024-02-14 19:12:22.470314] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:45.286 [2024-02-14 19:12:22.470327] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:11:45.286 [2024-02-14 19:12:22.470332] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:45.286 [2024-02-14 19:12:22.470349] 
bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b134c80 00:11:45.286 [2024-02-14 19:12:22.470353] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.286 [2024-02-14 19:12:22.470367] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b197e20 00:11:45.286 [2024-02-14 19:12:22.470399] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b134c80 00:11:45.286 [2024-02-14 19:12:22.470402] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b134c80 00:11:45.286 [2024-02-14 19:12:22.470417] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.286 pt4 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.286 19:12:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.545 19:12:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:45.545 "name": "raid_bdev1", 00:11:45.545 "uuid": "f84d3979-cb6c-11ee-af6b-4feeebbbadda", 00:11:45.545 "strip_size_kb": 0, 00:11:45.545 "state": "online", 00:11:45.545 "raid_level": "raid1", 00:11:45.545 "superblock": true, 00:11:45.545 "num_base_bdevs": 4, 00:11:45.545 "num_base_bdevs_discovered": 4, 00:11:45.545 "num_base_bdevs_operational": 4, 00:11:45.545 "base_bdevs_list": [ 00:11:45.545 { 00:11:45.545 "name": "pt1", 00:11:45.545 "uuid": "338411b5-db4f-2f5b-8fff-c2de80e811e4", 00:11:45.545 "is_configured": true, 00:11:45.545 "data_offset": 2048, 00:11:45.545 "data_size": 63488 00:11:45.545 }, 00:11:45.545 { 00:11:45.545 "name": "pt2", 00:11:45.545 "uuid": "ccf613ed-b950-df56-926a-eadc3658063e", 00:11:45.545 "is_configured": true, 00:11:45.545 "data_offset": 2048, 00:11:45.545 "data_size": 63488 00:11:45.545 }, 00:11:45.545 { 00:11:45.545 "name": "pt3", 00:11:45.545 "uuid": "52af6f6b-ae0b-2a52-adcd-56eb229c74be", 00:11:45.545 "is_configured": true, 00:11:45.545 "data_offset": 2048, 00:11:45.545 "data_size": 63488 00:11:45.545 }, 00:11:45.545 { 00:11:45.545 "name": "pt4", 00:11:45.545 "uuid": "5e2f2be0-4e3c-3c51-9f88-c6c3560b23e7", 00:11:45.545 "is_configured": true, 00:11:45.545 "data_offset": 2048, 00:11:45.545 "data_size": 63488 00:11:45.545 } 00:11:45.545 ] 00:11:45.545 }' 00:11:45.545 19:12:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:45.545 19:12:22 -- common/autotest_common.sh@10 -- # set +x 00:11:45.803 19:12:22 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:11:45.803 19:12:22 -- bdev/bdev_raid.sh@430 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:45.803 [2024-02-14 19:12:23.202245] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.803 19:12:23 -- bdev/bdev_raid.sh@430 -- # '[' f84d3979-cb6c-11ee-af6b-4feeebbbadda '!=' f84d3979-cb6c-11ee-af6b-4feeebbbadda ']' 00:11:45.803 19:12:23 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:11:45.803 19:12:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:45.803 19:12:23 -- bdev/bdev_raid.sh@196 -- # return 0 00:11:45.803 19:12:23 -- bdev/bdev_raid.sh@436 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:46.068 [2024-02-14 19:12:23.446231] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:46.068 19:12:23 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:46.068 19:12:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:46.068 19:12:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:46.068 19:12:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:46.068 19:12:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:46.068 19:12:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:46.068 19:12:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:46.068 19:12:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:46.068 19:12:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:46.068 19:12:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:46.068 19:12:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.068 19:12:23 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.329 19:12:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:46.329 "name": "raid_bdev1", 00:11:46.329 "uuid": "f84d3979-cb6c-11ee-af6b-4feeebbbadda", 00:11:46.329 "strip_size_kb": 0, 00:11:46.329 "state": "online", 00:11:46.329 "raid_level": "raid1", 00:11:46.329 "superblock": true, 00:11:46.329 "num_base_bdevs": 4, 00:11:46.329 "num_base_bdevs_discovered": 3, 00:11:46.329 "num_base_bdevs_operational": 3, 00:11:46.329 "base_bdevs_list": [ 00:11:46.329 { 00:11:46.329 "name": null, 00:11:46.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.329 "is_configured": false, 00:11:46.329 "data_offset": 2048, 00:11:46.329 "data_size": 63488 00:11:46.329 }, 00:11:46.329 { 00:11:46.329 "name": "pt2", 00:11:46.329 "uuid": "ccf613ed-b950-df56-926a-eadc3658063e", 00:11:46.329 "is_configured": true, 00:11:46.329 "data_offset": 2048, 00:11:46.329 "data_size": 63488 00:11:46.329 }, 00:11:46.329 { 00:11:46.329 "name": "pt3", 00:11:46.329 "uuid": "52af6f6b-ae0b-2a52-adcd-56eb229c74be", 00:11:46.329 "is_configured": true, 00:11:46.329 "data_offset": 2048, 00:11:46.329 "data_size": 63488 00:11:46.329 }, 00:11:46.329 { 00:11:46.329 "name": "pt4", 00:11:46.329 "uuid": "5e2f2be0-4e3c-3c51-9f88-c6c3560b23e7", 00:11:46.329 "is_configured": true, 00:11:46.329 "data_offset": 2048, 00:11:46.329 "data_size": 63488 00:11:46.329 } 00:11:46.329 ] 00:11:46.329 }' 00:11:46.329 19:12:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:46.329 19:12:23 -- common/autotest_common.sh@10 -- # set +x 00:11:46.587 19:12:23 -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:46.846 [2024-02-14 
19:12:24.070240] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.846 [2024-02-14 19:12:24.070257] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.846 [2024-02-14 19:12:24.070269] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.846 [2024-02-14 19:12:24.070282] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.846 [2024-02-14 19:12:24.070286] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b134c80 name raid_bdev1, state offline 00:11:46.846 19:12:24 -- bdev/bdev_raid.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.846 19:12:24 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:11:47.106 19:12:24 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:11:47.106 19:12:24 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:11:47.106 19:12:24 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:11:47.106 19:12:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:11:47.106 19:12:24 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:47.365 19:12:24 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:11:47.365 19:12:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:11:47.365 19:12:24 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:47.623 19:12:24 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:11:47.623 19:12:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:11:47.623 19:12:24 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:11:47.623 19:12:25 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:11:47.623 19:12:25 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:11:47.623 19:12:25 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:11:47.623 19:12:25 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:11:47.623 19:12:25 -- bdev/bdev_raid.sh@455 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.882 [2024-02-14 19:12:25.158261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.882 [2024-02-14 19:12:25.158313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.882 [2024-02-14 19:12:25.158335] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b135900 00:11:47.882 [2024-02-14 19:12:25.158341] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.882 [2024-02-14 19:12:25.158836] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.882 [2024-02-14 19:12:25.158862] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.882 [2024-02-14 19:12:25.158879] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:47.882 [2024-02-14 19:12:25.158887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.882 pt2 00:11:47.882 19:12:25 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:47.882 19:12:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:47.882 
19:12:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:47.882 19:12:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:47.882 19:12:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:47.882 19:12:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:47.882 19:12:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:47.882 19:12:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:47.882 19:12:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:47.882 19:12:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:47.882 19:12:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.882 19:12:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.141 19:12:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:48.141 "name": "raid_bdev1", 00:11:48.141 "uuid": "f84d3979-cb6c-11ee-af6b-4feeebbbadda", 00:11:48.141 "strip_size_kb": 0, 00:11:48.141 "state": "configuring", 00:11:48.141 "raid_level": "raid1", 00:11:48.141 "superblock": true, 00:11:48.141 "num_base_bdevs": 4, 00:11:48.141 "num_base_bdevs_discovered": 1, 00:11:48.141 "num_base_bdevs_operational": 3, 00:11:48.141 "base_bdevs_list": [ 00:11:48.141 { 00:11:48.141 "name": null, 00:11:48.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.141 "is_configured": false, 00:11:48.141 "data_offset": 2048, 00:11:48.141 "data_size": 63488 00:11:48.141 }, 00:11:48.141 { 00:11:48.141 "name": "pt2", 00:11:48.141 "uuid": "ccf613ed-b950-df56-926a-eadc3658063e", 00:11:48.141 "is_configured": true, 00:11:48.141 "data_offset": 2048, 00:11:48.141 "data_size": 63488 00:11:48.141 }, 00:11:48.141 { 00:11:48.141 "name": null, 00:11:48.141 "uuid": "52af6f6b-ae0b-2a52-adcd-56eb229c74be", 00:11:48.141 "is_configured": false, 00:11:48.141 "data_offset": 2048, 00:11:48.141 "data_size": 63488 00:11:48.141 }, 00:11:48.141 { 00:11:48.141 "name": null, 00:11:48.141 "uuid": "5e2f2be0-4e3c-3c51-9f88-c6c3560b23e7", 00:11:48.142 "is_configured": false, 00:11:48.142 "data_offset": 2048, 00:11:48.142 "data_size": 63488 00:11:48.142 } 00:11:48.142 ] 00:11:48.142 }' 00:11:48.142 19:12:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:48.142 19:12:25 -- common/autotest_common.sh@10 -- # set +x 00:11:48.400 19:12:25 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:11:48.400 19:12:25 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:11:48.400 19:12:25 -- bdev/bdev_raid.sh@455 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:48.659 [2024-02-14 19:12:25.886282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:48.659 [2024-02-14 19:12:25.886319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.659 [2024-02-14 19:12:25.886358] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b135680 00:11:48.659 [2024-02-14 19:12:25.886365] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.659 [2024-02-14 19:12:25.886433] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.659 [2024-02-14 19:12:25.886440] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:48.659 [2024-02-14 19:12:25.886454] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:11:48.659 [2024-02-14 19:12:25.886460] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:48.659 pt3 00:11:48.659 19:12:25 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:48.659 19:12:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:48.659 19:12:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:48.659 19:12:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:48.659 19:12:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:48.659 19:12:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:48.659 19:12:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:48.659 19:12:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:48.659 19:12:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:48.659 19:12:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:48.659 19:12:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.659 19:12:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.917 19:12:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:48.917 "name": "raid_bdev1", 00:11:48.917 "uuid": "f84d3979-cb6c-11ee-af6b-4feeebbbadda", 00:11:48.917 "strip_size_kb": 0, 00:11:48.917 "state": "configuring", 00:11:48.917 "raid_level": "raid1", 00:11:48.917 "superblock": true, 00:11:48.917 "num_base_bdevs": 4, 00:11:48.917 "num_base_bdevs_discovered": 2, 00:11:48.917 "num_base_bdevs_operational": 3, 00:11:48.917 "base_bdevs_list": [ 00:11:48.917 { 00:11:48.917 "name": null, 00:11:48.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.917 "is_configured": false, 00:11:48.917 "data_offset": 2048, 00:11:48.917 "data_size": 63488 00:11:48.917 }, 00:11:48.917 { 00:11:48.917 "name": "pt2", 00:11:48.917 "uuid": "ccf613ed-b950-df56-926a-eadc3658063e", 00:11:48.917 "is_configured": true, 00:11:48.917 "data_offset": 2048, 00:11:48.917 "data_size": 63488 00:11:48.917 }, 00:11:48.917 { 00:11:48.917 "name": "pt3", 00:11:48.917 "uuid": "52af6f6b-ae0b-2a52-adcd-56eb229c74be", 00:11:48.917 "is_configured": true, 00:11:48.917 "data_offset": 2048, 00:11:48.917 "data_size": 63488 00:11:48.917 }, 00:11:48.917 { 00:11:48.917 "name": null, 00:11:48.917 "uuid": "5e2f2be0-4e3c-3c51-9f88-c6c3560b23e7", 00:11:48.917 "is_configured": false, 00:11:48.917 "data_offset": 2048, 00:11:48.917 "data_size": 63488 00:11:48.917 } 00:11:48.917 ] 00:11:48.917 }' 00:11:48.917 19:12:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:48.917 19:12:26 -- common/autotest_common.sh@10 -- # set +x 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@462 -- # i=3 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@463 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:49.175 [2024-02-14 19:12:26.570293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:49.175 [2024-02-14 19:12:26.570327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.175 [2024-02-14 19:12:26.570360] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b134c80 00:11:49.175 [2024-02-14 19:12:26.570366] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.175 [2024-02-14 19:12:26.570427] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.175 [2024-02-14 19:12:26.570434] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:49.175 [2024-02-14 19:12:26.570447] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:11:49.175 [2024-02-14 19:12:26.570453] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:49.175 [2024-02-14 19:12:26.570472] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b134780 00:11:49.175 [2024-02-14 19:12:26.570475] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:49.175 [2024-02-14 19:12:26.570489] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b197e20 00:11:49.175 [2024-02-14 19:12:26.570517] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b134780 00:11:49.175 [2024-02-14 19:12:26.570520] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b134780 00:11:49.175 [2024-02-14 19:12:26.570535] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.175 pt4 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:49.175 19:12:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:49.176 19:12:26 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.176 19:12:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.433 19:12:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:49.434 "name": "raid_bdev1", 00:11:49.434 "uuid": "f84d3979-cb6c-11ee-af6b-4feeebbbadda", 00:11:49.434 "strip_size_kb": 0, 00:11:49.434 "state": "online", 00:11:49.434 "raid_level": "raid1", 00:11:49.434 "superblock": true, 00:11:49.434 "num_base_bdevs": 4, 00:11:49.434 "num_base_bdevs_discovered": 3, 00:11:49.434 "num_base_bdevs_operational": 3, 00:11:49.434 "base_bdevs_list": [ 00:11:49.434 { 00:11:49.434 "name": null, 00:11:49.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.434 "is_configured": false, 00:11:49.434 "data_offset": 2048, 00:11:49.434 "data_size": 63488 00:11:49.434 }, 00:11:49.434 { 00:11:49.434 "name": "pt2", 00:11:49.434 "uuid": "ccf613ed-b950-df56-926a-eadc3658063e", 00:11:49.434 "is_configured": true, 00:11:49.434 "data_offset": 2048, 00:11:49.434 "data_size": 63488 00:11:49.434 }, 00:11:49.434 { 00:11:49.434 "name": "pt3", 00:11:49.434 "uuid": "52af6f6b-ae0b-2a52-adcd-56eb229c74be", 00:11:49.434 "is_configured": true, 00:11:49.434 "data_offset": 2048, 00:11:49.434 "data_size": 63488 00:11:49.434 }, 00:11:49.434 { 00:11:49.434 "name": "pt4", 00:11:49.434 "uuid": 
"5e2f2be0-4e3c-3c51-9f88-c6c3560b23e7", 00:11:49.434 "is_configured": true, 00:11:49.434 "data_offset": 2048, 00:11:49.434 "data_size": 63488 00:11:49.434 } 00:11:49.434 ] 00:11:49.434 }' 00:11:49.434 19:12:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:49.434 19:12:26 -- common/autotest_common.sh@10 -- # set +x 00:11:49.692 19:12:26 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:11:49.692 19:12:26 -- bdev/bdev_raid.sh@470 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:49.950 [2024-02-14 19:12:27.130302] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:49.950 [2024-02-14 19:12:27.130317] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.950 [2024-02-14 19:12:27.130333] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.950 [2024-02-14 19:12:27.130360] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.950 [2024-02-14 19:12:27.130364] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b134780 name raid_bdev1, state offline 00:11:49.950 19:12:27 -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.950 19:12:27 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:11:50.209 19:12:27 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:11:50.209 19:12:27 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:11:50.209 19:12:27 -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:50.209 [2024-02-14 19:12:27.594317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:50.209 [2024-02-14 19:12:27.594359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.209 [2024-02-14 19:12:27.594395] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b135b80 00:11:50.209 [2024-02-14 19:12:27.594402] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.209 [2024-02-14 19:12:27.594858] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.209 [2024-02-14 19:12:27.594880] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:50.209 [2024-02-14 19:12:27.594896] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:11:50.209 [2024-02-14 19:12:27.594904] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:50.209 pt1 00:11:50.209 19:12:27 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:50.209 19:12:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:50.209 19:12:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:50.210 19:12:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:50.210 19:12:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:50.210 19:12:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:50.210 19:12:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:50.210 19:12:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:50.210 19:12:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:50.210 19:12:27 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:11:50.210 19:12:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.210 19:12:27 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.468 19:12:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:50.468 "name": "raid_bdev1", 00:11:50.468 "uuid": "f84d3979-cb6c-11ee-af6b-4feeebbbadda", 00:11:50.468 "strip_size_kb": 0, 00:11:50.468 "state": "configuring", 00:11:50.468 "raid_level": "raid1", 00:11:50.468 "superblock": true, 00:11:50.468 "num_base_bdevs": 4, 00:11:50.468 "num_base_bdevs_discovered": 1, 00:11:50.468 "num_base_bdevs_operational": 4, 00:11:50.468 "base_bdevs_list": [ 00:11:50.468 { 00:11:50.468 "name": "pt1", 00:11:50.468 "uuid": "338411b5-db4f-2f5b-8fff-c2de80e811e4", 00:11:50.468 "is_configured": true, 00:11:50.468 "data_offset": 2048, 00:11:50.468 "data_size": 63488 00:11:50.468 }, 00:11:50.468 { 00:11:50.468 "name": null, 00:11:50.468 "uuid": "ccf613ed-b950-df56-926a-eadc3658063e", 00:11:50.468 "is_configured": false, 00:11:50.468 "data_offset": 2048, 00:11:50.468 "data_size": 63488 00:11:50.468 }, 00:11:50.468 { 00:11:50.468 "name": null, 00:11:50.468 "uuid": "52af6f6b-ae0b-2a52-adcd-56eb229c74be", 00:11:50.468 "is_configured": false, 00:11:50.468 "data_offset": 2048, 00:11:50.468 "data_size": 63488 00:11:50.469 }, 00:11:50.469 { 00:11:50.469 "name": null, 00:11:50.469 "uuid": "5e2f2be0-4e3c-3c51-9f88-c6c3560b23e7", 00:11:50.469 "is_configured": false, 00:11:50.469 "data_offset": 2048, 00:11:50.469 "data_size": 63488 00:11:50.469 } 00:11:50.469 ] 00:11:50.469 }' 00:11:50.469 19:12:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:50.469 19:12:27 -- common/autotest_common.sh@10 -- # set +x 00:11:50.727 19:12:28 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:11:50.727 19:12:28 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:11:50.727 19:12:28 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:50.986 19:12:28 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:11:50.986 19:12:28 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:11:50.986 19:12:28 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:51.244 19:12:28 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:11:51.244 19:12:28 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:11:51.244 19:12:28 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:11:51.244 19:12:28 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:11:51.244 19:12:28 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:11:51.244 19:12:28 -- bdev/bdev_raid.sh@489 -- # i=3 00:11:51.244 19:12:28 -- bdev/bdev_raid.sh@490 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:51.503 [2024-02-14 19:12:28.854341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:51.503 [2024-02-14 19:12:28.854379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.503 [2024-02-14 19:12:28.854417] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b134c80 00:11:51.503 [2024-02-14 19:12:28.854424] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.503 [2024-02-14 
19:12:28.854497] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.503 [2024-02-14 19:12:28.854505] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:51.503 [2024-02-14 19:12:28.854519] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:11:51.503 [2024-02-14 19:12:28.854523] bdev_raid.c:3239:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:51.503 [2024-02-14 19:12:28.854526] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:51.503 [2024-02-14 19:12:28.854530] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b135180 name raid_bdev1, state configuring 00:11:51.503 [2024-02-14 19:12:28.854540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:51.503 pt4 00:11:51.503 19:12:28 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:51.503 19:12:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:51.503 19:12:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:51.503 19:12:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:51.503 19:12:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:51.503 19:12:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:51.503 19:12:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:51.503 19:12:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:51.503 19:12:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:51.503 19:12:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:51.503 19:12:28 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.503 19:12:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.763 19:12:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:51.763 "name": "raid_bdev1", 00:11:51.763 "uuid": "f84d3979-cb6c-11ee-af6b-4feeebbbadda", 00:11:51.763 "strip_size_kb": 0, 00:11:51.763 "state": "configuring", 00:11:51.763 "raid_level": "raid1", 00:11:51.763 "superblock": true, 00:11:51.763 "num_base_bdevs": 4, 00:11:51.763 "num_base_bdevs_discovered": 1, 00:11:51.763 "num_base_bdevs_operational": 3, 00:11:51.763 "base_bdevs_list": [ 00:11:51.763 { 00:11:51.763 "name": null, 00:11:51.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.763 "is_configured": false, 00:11:51.763 "data_offset": 2048, 00:11:51.763 "data_size": 63488 00:11:51.763 }, 00:11:51.763 { 00:11:51.763 "name": null, 00:11:51.763 "uuid": "ccf613ed-b950-df56-926a-eadc3658063e", 00:11:51.763 "is_configured": false, 00:11:51.763 "data_offset": 2048, 00:11:51.763 "data_size": 63488 00:11:51.763 }, 00:11:51.763 { 00:11:51.763 "name": null, 00:11:51.763 "uuid": "52af6f6b-ae0b-2a52-adcd-56eb229c74be", 00:11:51.763 "is_configured": false, 00:11:51.763 "data_offset": 2048, 00:11:51.763 "data_size": 63488 00:11:51.763 }, 00:11:51.763 { 00:11:51.763 "name": "pt4", 00:11:51.763 "uuid": "5e2f2be0-4e3c-3c51-9f88-c6c3560b23e7", 00:11:51.763 "is_configured": true, 00:11:51.763 "data_offset": 2048, 00:11:51.763 "data_size": 63488 00:11:51.763 } 00:11:51.763 ] 00:11:51.763 }' 00:11:51.763 19:12:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:51.763 19:12:29 -- common/autotest_common.sh@10 -- # set +x 00:11:52.022 19:12:29 -- bdev/bdev_raid.sh@497 -- # (( i 
= 1 )) 00:11:52.022 19:12:29 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:11:52.022 19:12:29 -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:52.293 [2024-02-14 19:12:29.530359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:52.293 [2024-02-14 19:12:29.530398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.293 [2024-02-14 19:12:29.530416] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b135680 00:11:52.293 [2024-02-14 19:12:29.530438] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.293 [2024-02-14 19:12:29.530499] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.293 [2024-02-14 19:12:29.530506] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:52.293 [2024-02-14 19:12:29.530519] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:52.293 [2024-02-14 19:12:29.530524] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:52.293 pt2 00:11:52.293 19:12:29 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:11:52.293 19:12:29 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:11:52.293 19:12:29 -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:52.561 [2024-02-14 19:12:29.754362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:52.561 [2024-02-14 19:12:29.754393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.561 [2024-02-14 19:12:29.754424] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b135900 00:11:52.561 [2024-02-14 19:12:29.754430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.561 [2024-02-14 19:12:29.754483] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.561 [2024-02-14 19:12:29.754490] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:52.561 [2024-02-14 19:12:29.754503] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:11:52.561 [2024-02-14 19:12:29.754508] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:52.561 [2024-02-14 19:12:29.754525] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b135180 00:11:52.561 [2024-02-14 19:12:29.754529] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:52.561 [2024-02-14 19:12:29.754542] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b197e20 00:11:52.561 [2024-02-14 19:12:29.754570] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b135180 00:11:52.561 [2024-02-14 19:12:29.754572] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b135180 00:11:52.561 [2024-02-14 19:12:29.754586] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.561 pt3 00:11:52.561 19:12:29 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:11:52.561 19:12:29 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:11:52.561 19:12:29 -- 
bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:52.561 19:12:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:52.561 19:12:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:52.561 19:12:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:52.561 19:12:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:52.561 19:12:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:52.561 19:12:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:52.561 19:12:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:52.561 19:12:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:52.561 19:12:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:52.561 19:12:29 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.561 19:12:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.819 19:12:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:52.819 "name": "raid_bdev1", 00:11:52.819 "uuid": "f84d3979-cb6c-11ee-af6b-4feeebbbadda", 00:11:52.819 "strip_size_kb": 0, 00:11:52.819 "state": "online", 00:11:52.819 "raid_level": "raid1", 00:11:52.819 "superblock": true, 00:11:52.819 "num_base_bdevs": 4, 00:11:52.819 "num_base_bdevs_discovered": 3, 00:11:52.819 "num_base_bdevs_operational": 3, 00:11:52.819 "base_bdevs_list": [ 00:11:52.819 { 00:11:52.819 "name": null, 00:11:52.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.819 "is_configured": false, 00:11:52.819 "data_offset": 2048, 00:11:52.819 "data_size": 63488 00:11:52.819 }, 00:11:52.819 { 00:11:52.819 "name": "pt2", 00:11:52.819 "uuid": "ccf613ed-b950-df56-926a-eadc3658063e", 00:11:52.819 "is_configured": true, 00:11:52.819 "data_offset": 2048, 00:11:52.819 "data_size": 63488 00:11:52.819 }, 00:11:52.819 { 00:11:52.819 "name": "pt3", 00:11:52.819 "uuid": "52af6f6b-ae0b-2a52-adcd-56eb229c74be", 00:11:52.819 "is_configured": true, 00:11:52.819 "data_offset": 2048, 00:11:52.819 "data_size": 63488 00:11:52.819 }, 00:11:52.819 { 00:11:52.819 "name": "pt4", 00:11:52.819 "uuid": "5e2f2be0-4e3c-3c51-9f88-c6c3560b23e7", 00:11:52.819 "is_configured": true, 00:11:52.820 "data_offset": 2048, 00:11:52.820 "data_size": 63488 00:11:52.820 } 00:11:52.820 ] 00:11:52.820 }' 00:11:52.820 19:12:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:52.820 19:12:29 -- common/autotest_common.sh@10 -- # set +x 00:11:53.078 19:12:30 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:11:53.078 19:12:30 -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:53.078 [2024-02-14 19:12:30.390401] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.078 19:12:30 -- bdev/bdev_raid.sh@506 -- # '[' f84d3979-cb6c-11ee-af6b-4feeebbbadda '!=' f84d3979-cb6c-11ee-af6b-4feeebbbadda ']' 00:11:53.078 19:12:30 -- bdev/bdev_raid.sh@511 -- # killprocess 54575 00:11:53.078 19:12:30 -- common/autotest_common.sh@924 -- # '[' -z 54575 ']' 00:11:53.078 19:12:30 -- common/autotest_common.sh@928 -- # kill -0 54575 00:11:53.078 19:12:30 -- common/autotest_common.sh@929 -- # uname 00:11:53.078 19:12:30 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:11:53.078 19:12:30 -- common/autotest_common.sh@932 -- # ps -c -o command 54575 00:11:53.078 19:12:30 -- common/autotest_common.sh@932 -- # tail -1 00:11:53.078 
19:12:30 -- common/autotest_common.sh@932 -- # process_name=bdev_svc 00:11:53.078 19:12:30 -- common/autotest_common.sh@934 -- # '[' bdev_svc = sudo ']' 00:11:53.078 19:12:30 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 54575' 00:11:53.078 killing process with pid 54575 00:11:53.078 19:12:30 -- common/autotest_common.sh@943 -- # kill 54575 00:11:53.078 [2024-02-14 19:12:30.415949] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.078 [2024-02-14 19:12:30.415963] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.078 [2024-02-14 19:12:30.415987] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.078 [2024-02-14 19:12:30.415991] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b135180 name raid_bdev1, state offline 00:11:53.078 19:12:30 -- common/autotest_common.sh@948 -- # wait 54575 00:11:53.078 [2024-02-14 19:12:30.434370] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:53.337 19:12:30 -- bdev/bdev_raid.sh@513 -- # return 0 00:11:53.337 00:11:53.337 real 0m15.457s 00:11:53.337 user 0m27.572s 00:11:53.337 sys 0m2.538s 00:11:53.337 ************************************ 00:11:53.337 END TEST raid_superblock_test 00:11:53.337 ************************************ 00:11:53.337 19:12:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:53.337 19:12:30 -- common/autotest_common.sh@10 -- # set +x 00:11:53.337 19:12:30 -- bdev/bdev_raid.sh@733 -- # '[' '' = true ']' 00:11:53.337 19:12:30 -- bdev/bdev_raid.sh@742 -- # '[' n == y ']' 00:11:53.337 19:12:30 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:11:53.337 00:11:53.337 real 4m23.817s 00:11:53.337 user 7m28.454s 00:11:53.337 sys 0m54.448s 00:11:53.337 19:12:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:53.337 ************************************ 00:11:53.337 END TEST bdev_raid 00:11:53.337 ************************************ 00:11:53.337 19:12:30 -- common/autotest_common.sh@10 -- # set +x 00:11:53.337 19:12:30 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:11:53.337 19:12:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:53.337 19:12:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:53.337 19:12:30 -- common/autotest_common.sh@10 -- # set +x 00:11:53.337 ************************************ 00:11:53.337 START TEST bdevperf_config 00:11:53.337 ************************************ 00:11:53.337 19:12:30 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:11:53.596 * Looking for test storage... 
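The raid_superblock_test that finishes above drives every one of its state checks through the same pattern: dump all RAID bdevs over the RPC socket, pick out raid_bdev1 with jq, and compare the fields against the step's expected values. A minimal standalone sketch of that check follows; the rpc.py path, socket, bdev name, and JSON field names are taken from the log above, while the expected values (online, raid1, 4 discovered) are illustrative and correspond only to the first verification step.

    #!/bin/sh
    # Sketch of the check verify_raid_bdev_state performs in the test above:
    # fetch all raid bdevs from the target, keep only raid_bdev1, and compare
    # its state/level/counts against what the current test step expects.
    RPC=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock

    info=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1")')

    state=$(echo "$info" | jq -r '.state')
    level=$(echo "$info" | jq -r '.raid_level')
    discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')

    # Expected values are per-step: later steps in the log expect "configuring"
    # with fewer discovered base bdevs after pt bdevs are deleted.
    [ "$state" = "online" ] && [ "$level" = "raid1" ] && [ "$discovered" -eq 4 ] \
            || { echo "raid_bdev1 not in expected state: $state/$level/$discovered" >&2; exit 1; }

Each step in the log only varies the expected state and base-bdev counts it passes to verify_raid_bdev_state; the query and jq filter stay the same throughout.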
00:11:53.596 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:11:53.596 19:12:30 -- bdevperf/test_config.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:11:53.596 19:12:30 -- bdevperf/common.sh@5 -- # bdevperf=/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:11:53.596 19:12:30 -- bdevperf/test_config.sh@12 -- # jsonconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:11:53.596 19:12:30 -- bdevperf/test_config.sh@13 -- # testconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:11:53.596 19:12:30 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:53.596 19:12:30 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:11:53.596 19:12:30 -- bdevperf/common.sh@8 -- # local job_section=global 00:11:53.596 19:12:30 -- bdevperf/common.sh@9 -- # local rw=read 00:11:53.596 19:12:30 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:11:53.596 19:12:30 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:11:53.596 19:12:30 -- bdevperf/common.sh@13 -- # cat 00:11:53.596 19:12:30 -- bdevperf/common.sh@18 -- # job='[global]' 00:11:53.596 00:11:53.596 19:12:30 -- bdevperf/common.sh@19 -- # echo 00:11:53.596 19:12:30 -- bdevperf/common.sh@20 -- # cat 00:11:53.597 19:12:30 -- bdevperf/test_config.sh@18 -- # create_job job0 00:11:53.597 19:12:30 -- bdevperf/common.sh@8 -- # local job_section=job0 00:11:53.597 19:12:30 -- bdevperf/common.sh@9 -- # local rw= 00:11:53.597 19:12:30 -- bdevperf/common.sh@10 -- # local filename= 00:11:53.597 00:11:53.597 19:12:30 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:11:53.597 19:12:30 -- bdevperf/common.sh@18 -- # job='[job0]' 00:11:53.597 19:12:30 -- bdevperf/common.sh@19 -- # echo 00:11:53.597 19:12:30 -- bdevperf/common.sh@20 -- # cat 00:11:53.597 19:12:30 -- bdevperf/test_config.sh@19 -- # create_job job1 00:11:53.597 19:12:30 -- bdevperf/common.sh@8 -- # local job_section=job1 00:11:53.597 19:12:30 -- bdevperf/common.sh@9 -- # local rw= 00:11:53.597 19:12:30 -- bdevperf/common.sh@10 -- # local filename= 00:11:53.597 19:12:30 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:11:53.597 00:11:53.597 19:12:30 -- bdevperf/common.sh@18 -- # job='[job1]' 00:11:53.597 19:12:30 -- bdevperf/common.sh@19 -- # echo 00:11:53.597 19:12:30 -- bdevperf/common.sh@20 -- # cat 00:11:53.597 19:12:30 -- bdevperf/test_config.sh@20 -- # create_job job2 00:11:53.597 19:12:30 -- bdevperf/common.sh@8 -- # local job_section=job2 00:11:53.597 19:12:30 -- bdevperf/common.sh@9 -- # local rw= 00:11:53.597 19:12:30 -- bdevperf/common.sh@10 -- # local filename= 00:11:53.597 19:12:30 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:11:53.597 19:12:30 -- bdevperf/common.sh@18 -- # job='[job2]' 00:11:53.597 19:12:30 -- bdevperf/common.sh@19 -- # echo 00:11:53.597 00:11:53.597 19:12:30 -- bdevperf/common.sh@20 -- # cat 00:11:53.597 19:12:30 -- bdevperf/test_config.sh@21 -- # create_job job3 00:11:53.597 19:12:30 -- bdevperf/common.sh@8 -- # local job_section=job3 00:11:53.597 19:12:30 -- bdevperf/common.sh@9 -- # local rw= 00:11:53.597 19:12:30 -- bdevperf/common.sh@10 -- # local filename= 00:11:53.597 19:12:30 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:11:53.597 19:12:30 -- bdevperf/common.sh@18 -- # job='[job3]' 00:11:53.597 00:11:53.597 19:12:30 -- bdevperf/common.sh@19 -- # echo 00:11:53.597 19:12:30 -- bdevperf/common.sh@20 -- # cat 00:11:53.597 19:12:30 -- 
bdevperf/test_config.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:11:56.883 19:12:33 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-02-14 19:12:30.859195] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:11:56.883 [2024-02-14 19:12:30.859383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:56.883 Using job config with 4 jobs 00:11:56.883 EAL: TSC is not safe to use in SMP mode 00:11:56.883 EAL: TSC is not invariant 00:11:56.883 [2024-02-14 19:12:31.319342] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.883 [2024-02-14 19:12:31.394330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.883 [2024-02-14 19:12:31.394386] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:11:56.883 cpumask for '\''job0'\'' is too big 00:11:56.883 cpumask for '\''job1'\'' is too big 00:11:56.883 cpumask for '\''job2'\'' is too big 00:11:56.883 cpumask for '\''job3'\'' is too big 00:11:56.883 Running I/O for 2 seconds... 00:11:56.883 00:11:56.883 Latency(us) 00:11:56.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.883 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:56.883 Malloc0 : 2.00 474681.35 463.56 0.00 0.00 539.12 143.36 1139.08 00:11:56.883 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:56.883 Malloc0 : 2.00 474665.21 463.54 0.00 0.00 539.05 136.53 971.34 00:11:56.883 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:56.883 Malloc0 : 2.00 474701.11 463.58 0.00 0.00 538.93 137.51 811.40 00:11:56.883 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:56.883 Malloc0 : 2.00 474679.76 463.55 0.00 0.00 538.87 148.24 628.05 00:11:56.883 =================================================================================================================== 00:11:56.883 Total : 1898727.42 1854.23 0.00 0.00 538.99 136.53 1139.08 00:11:56.883 [2024-02-14 19:12:33.422505] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:11:56.883 19:12:33 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-02-14 19:12:30.859195] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:11:56.884 [2024-02-14 19:12:30.859383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:56.884 Using job config with 4 jobs 00:11:56.884 EAL: TSC is not safe to use in SMP mode 00:11:56.884 EAL: TSC is not invariant 00:11:56.884 [2024-02-14 19:12:31.319342] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.884 [2024-02-14 19:12:31.394330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.884 [2024-02-14 19:12:31.394386] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:11:56.884 cpumask for '\''job0'\'' is too big 00:11:56.884 cpumask for '\''job1'\'' is too big 00:11:56.884 cpumask for '\''job2'\'' is too big 00:11:56.884 cpumask for '\''job3'\'' is too big 00:11:56.884 Running I/O for 2 seconds... 00:11:56.884 00:11:56.884 Latency(us) 00:11:56.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.884 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:56.884 Malloc0 : 2.00 474681.35 463.56 0.00 0.00 539.12 143.36 1139.08 00:11:56.884 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:56.884 Malloc0 : 2.00 474665.21 463.54 0.00 0.00 539.05 136.53 971.34 00:11:56.884 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:56.884 Malloc0 : 2.00 474701.11 463.58 0.00 0.00 538.93 137.51 811.40 00:11:56.884 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:56.884 Malloc0 : 2.00 474679.76 463.55 0.00 0.00 538.87 148.24 628.05 00:11:56.884 =================================================================================================================== 00:11:56.884 Total : 1898727.42 1854.23 0.00 0.00 538.99 136.53 1139.08 00:11:56.884 [2024-02-14 19:12:33.422505] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:11:56.884 19:12:33 -- bdevperf/common.sh@32 -- # echo '[2024-02-14 19:12:30.859195] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:11:56.884 [2024-02-14 19:12:30.859383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:56.884 Using job config with 4 jobs 00:11:56.884 EAL: TSC is not safe to use in SMP mode 00:11:56.884 EAL: TSC is not invariant 00:11:56.884 [2024-02-14 19:12:31.319342] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.884 [2024-02-14 19:12:31.394330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.884 [2024-02-14 19:12:31.394386] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:11:56.884 cpumask for '\''job0'\'' is too big 00:11:56.884 cpumask for '\''job1'\'' is too big 00:11:56.884 cpumask for '\''job2'\'' is too big 00:11:56.884 cpumask for '\''job3'\'' is too big 00:11:56.884 Running I/O for 2 seconds... 
00:11:56.884 00:11:56.884 Latency(us) 00:11:56.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.884 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:56.884 Malloc0 : 2.00 474681.35 463.56 0.00 0.00 539.12 143.36 1139.08 00:11:56.884 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:56.884 Malloc0 : 2.00 474665.21 463.54 0.00 0.00 539.05 136.53 971.34 00:11:56.884 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:56.884 Malloc0 : 2.00 474701.11 463.58 0.00 0.00 538.93 137.51 811.40 00:11:56.884 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:56.884 Malloc0 : 2.00 474679.76 463.55 0.00 0.00 538.87 148.24 628.05 00:11:56.884 =================================================================================================================== 00:11:56.884 Total : 1898727.42 1854.23 0.00 0.00 538.99 136.53 1139.08 00:11:56.884 [2024-02-14 19:12:33.422505] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:11:56.884 19:12:33 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:11:56.884 19:12:33 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:11:56.884 19:12:33 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:11:56.884 19:12:33 -- bdevperf/test_config.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:11:56.884 [2024-02-14 19:12:33.588709] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:11:56.884 [2024-02-14 19:12:33.589009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:56.884 EAL: TSC is not safe to use in SMP mode 00:11:56.884 EAL: TSC is not invariant 00:11:56.884 [2024-02-14 19:12:34.029896] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.884 [2024-02-14 19:12:34.107926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.884 [2024-02-14 19:12:34.107978] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:11:56.884 cpumask for 'job0' is too big 00:11:56.884 cpumask for 'job1' is too big 00:11:56.884 cpumask for 'job2' is too big 00:11:56.884 cpumask for 'job3' is too big 00:11:58.791 [2024-02-14 19:12:36.136295] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:11:59.050 19:12:36 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:11:59.050 Running I/O for 2 seconds... 
00:11:59.050 00:11:59.050 Latency(us) 00:11:59.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:59.050 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:59.050 Malloc0 : 2.00 478245.44 467.04 0.00 0.00 535.11 139.46 1115.67 00:11:59.050 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:59.050 Malloc0 : 2.00 478229.19 467.02 0.00 0.00 535.04 137.51 947.93 00:11:59.050 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:59.050 Malloc0 : 2.00 478265.58 467.06 0.00 0.00 534.92 144.34 780.19 00:11:59.050 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:11:59.050 Malloc0 : 2.00 478244.56 467.04 0.00 0.00 534.86 129.71 604.65 00:11:59.050 =================================================================================================================== 00:11:59.050 Total : 1912984.77 1868.15 0.00 0.00 534.98 129.71 1115.67' 00:11:59.050 19:12:36 -- bdevperf/test_config.sh@27 -- # cleanup 00:11:59.050 19:12:36 -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:11:59.050 19:12:36 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:11:59.050 19:12:36 -- bdevperf/common.sh@8 -- # local job_section=job0 00:11:59.050 19:12:36 -- bdevperf/common.sh@9 -- # local rw=write 00:11:59.050 19:12:36 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:11:59.050 19:12:36 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:11:59.050 19:12:36 -- bdevperf/common.sh@18 -- # job='[job0]' 00:11:59.050 00:11:59.050 19:12:36 -- bdevperf/common.sh@19 -- # echo 00:11:59.050 19:12:36 -- bdevperf/common.sh@20 -- # cat 00:11:59.050 19:12:36 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:11:59.050 19:12:36 -- bdevperf/common.sh@8 -- # local job_section=job1 00:11:59.050 19:12:36 -- bdevperf/common.sh@9 -- # local rw=write 00:11:59.050 19:12:36 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:11:59.050 19:12:36 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:11:59.050 19:12:36 -- bdevperf/common.sh@18 -- # job='[job1]' 00:11:59.050 00:11:59.050 19:12:36 -- bdevperf/common.sh@19 -- # echo 00:11:59.050 19:12:36 -- bdevperf/common.sh@20 -- # cat 00:11:59.050 19:12:36 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:11:59.050 19:12:36 -- bdevperf/common.sh@8 -- # local job_section=job2 00:11:59.050 19:12:36 -- bdevperf/common.sh@9 -- # local rw=write 00:11:59.050 19:12:36 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:11:59.050 19:12:36 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:11:59.050 19:12:36 -- bdevperf/common.sh@18 -- # job='[job2]' 00:11:59.050 00:11:59.050 19:12:36 -- bdevperf/common.sh@19 -- # echo 00:11:59.050 19:12:36 -- bdevperf/common.sh@20 -- # cat 00:11:59.050 19:12:36 -- bdevperf/test_config.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:12:02.341 19:12:39 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-02-14 19:12:36.305309] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
00:12:02.341 [2024-02-14 19:12:36.305496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:02.341 Using job config with 3 jobs 00:12:02.341 EAL: TSC is not safe to use in SMP mode 00:12:02.341 EAL: TSC is not invariant 00:12:02.341 [2024-02-14 19:12:36.780154] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.341 [2024-02-14 19:12:36.857271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.341 [2024-02-14 19:12:36.857314] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:02.341 cpumask for '\''job0'\'' is too big 00:12:02.341 cpumask for '\''job1'\'' is too big 00:12:02.341 cpumask for '\''job2'\'' is too big 00:12:02.341 Running I/O for 2 seconds... 00:12:02.341 00:12:02.341 Latency(us) 00:12:02.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:12:02.341 Malloc0 : 2.00 550519.20 537.62 0.00 0.00 464.82 190.17 823.10 00:12:02.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:12:02.341 Malloc0 : 2.00 550501.59 537.60 0.00 0.00 464.74 136.53 686.57 00:12:02.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:12:02.341 Malloc0 : 2.00 550485.64 537.58 0.00 0.00 464.69 136.53 538.33 00:12:02.341 =================================================================================================================== 00:12:02.341 Total : 1651506.43 1612.80 0.00 0.00 464.75 136.53 823.10 00:12:02.341 [2024-02-14 19:12:38.884293] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:12:02.341 19:12:39 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-02-14 19:12:36.305309] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:12:02.341 [2024-02-14 19:12:36.305496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:02.341 Using job config with 3 jobs 00:12:02.341 EAL: TSC is not safe to use in SMP mode 00:12:02.341 EAL: TSC is not invariant 00:12:02.341 [2024-02-14 19:12:36.780154] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.341 [2024-02-14 19:12:36.857271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.341 [2024-02-14 19:12:36.857314] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:02.341 cpumask for '\''job0'\'' is too big 00:12:02.341 cpumask for '\''job1'\'' is too big 00:12:02.341 cpumask for '\''job2'\'' is too big 00:12:02.341 Running I/O for 2 seconds... 
00:12:02.341 00:12:02.341 Latency(us) 00:12:02.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:12:02.341 Malloc0 : 2.00 550519.20 537.62 0.00 0.00 464.82 190.17 823.10 00:12:02.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:12:02.341 Malloc0 : 2.00 550501.59 537.60 0.00 0.00 464.74 136.53 686.57 00:12:02.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:12:02.341 Malloc0 : 2.00 550485.64 537.58 0.00 0.00 464.69 136.53 538.33 00:12:02.341 =================================================================================================================== 00:12:02.341 Total : 1651506.43 1612.80 0.00 0.00 464.75 136.53 823.10 00:12:02.341 [2024-02-14 19:12:38.884293] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:12:02.341 19:12:39 -- bdevperf/common.sh@32 -- # echo '[2024-02-14 19:12:36.305309] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:12:02.341 [2024-02-14 19:12:36.305496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:02.341 Using job config with 3 jobs 00:12:02.341 EAL: TSC is not safe to use in SMP mode 00:12:02.341 EAL: TSC is not invariant 00:12:02.341 [2024-02-14 19:12:36.780154] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.341 [2024-02-14 19:12:36.857271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.341 [2024-02-14 19:12:36.857314] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:02.341 cpumask for '\''job0'\'' is too big 00:12:02.341 cpumask for '\''job1'\'' is too big 00:12:02.341 cpumask for '\''job2'\'' is too big 00:12:02.341 Running I/O for 2 seconds... 
00:12:02.341 00:12:02.341 Latency(us) 00:12:02.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:12:02.341 Malloc0 : 2.00 550519.20 537.62 0.00 0.00 464.82 190.17 823.10 00:12:02.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:12:02.341 Malloc0 : 2.00 550501.59 537.60 0.00 0.00 464.74 136.53 686.57 00:12:02.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:12:02.341 Malloc0 : 2.00 550485.64 537.58 0.00 0.00 464.69 136.53 538.33 00:12:02.341 =================================================================================================================== 00:12:02.341 Total : 1651506.43 1612.80 0.00 0.00 464.75 136.53 823.10 00:12:02.341 [2024-02-14 19:12:38.884293] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:12:02.341 19:12:39 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:12:02.341 19:12:39 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:12:02.341 19:12:39 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:12:02.341 19:12:39 -- bdevperf/test_config.sh@35 -- # cleanup 00:12:02.341 19:12:39 -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:12:02.341 19:12:39 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:12:02.342 19:12:39 -- bdevperf/common.sh@8 -- # local job_section=global 00:12:02.342 19:12:39 -- bdevperf/common.sh@9 -- # local rw=rw 00:12:02.342 19:12:39 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:12:02.342 19:12:39 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:12:02.342 19:12:39 -- bdevperf/common.sh@13 -- # cat 00:12:02.342 19:12:39 -- bdevperf/common.sh@18 -- # job='[global]' 00:12:02.342 00:12:02.342 19:12:39 -- bdevperf/common.sh@19 -- # echo 00:12:02.342 19:12:39 -- bdevperf/common.sh@20 -- # cat 00:12:02.342 19:12:39 -- bdevperf/test_config.sh@38 -- # create_job job0 00:12:02.342 19:12:39 -- bdevperf/common.sh@8 -- # local job_section=job0 00:12:02.342 19:12:39 -- bdevperf/common.sh@9 -- # local rw= 00:12:02.342 19:12:39 -- bdevperf/common.sh@10 -- # local filename= 00:12:02.342 19:12:39 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:12:02.342 19:12:39 -- bdevperf/common.sh@18 -- # job='[job0]' 00:12:02.342 00:12:02.342 19:12:39 -- bdevperf/common.sh@19 -- # echo 00:12:02.342 19:12:39 -- bdevperf/common.sh@20 -- # cat 00:12:02.342 19:12:39 -- bdevperf/test_config.sh@39 -- # create_job job1 00:12:02.342 19:12:39 -- bdevperf/common.sh@8 -- # local job_section=job1 00:12:02.342 19:12:39 -- bdevperf/common.sh@9 -- # local rw= 00:12:02.342 19:12:39 -- bdevperf/common.sh@10 -- # local filename= 00:12:02.342 19:12:39 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:12:02.342 19:12:39 -- bdevperf/common.sh@18 -- # job='[job1]' 00:12:02.342 00:12:02.342 19:12:39 -- bdevperf/common.sh@19 -- # echo 00:12:02.342 19:12:39 -- bdevperf/common.sh@20 -- # cat 00:12:02.342 19:12:39 -- bdevperf/test_config.sh@40 -- # create_job job2 00:12:02.342 19:12:39 -- bdevperf/common.sh@8 -- # local job_section=job2 00:12:02.342 19:12:39 -- bdevperf/common.sh@9 -- # local rw= 00:12:02.342 19:12:39 -- bdevperf/common.sh@10 -- # local filename= 00:12:02.342 19:12:39 -- bdevperf/common.sh@12 -- # [[ job2 == 
\g\l\o\b\a\l ]] 00:12:02.342 19:12:39 -- bdevperf/common.sh@18 -- # job='[job2]' 00:12:02.342 19:12:39 -- bdevperf/common.sh@19 -- # echo 00:12:02.342 00:12:02.342 19:12:39 -- bdevperf/common.sh@20 -- # cat 00:12:02.342 19:12:39 -- bdevperf/test_config.sh@41 -- # create_job job3 00:12:02.342 19:12:39 -- bdevperf/common.sh@8 -- # local job_section=job3 00:12:02.342 19:12:39 -- bdevperf/common.sh@9 -- # local rw= 00:12:02.342 19:12:39 -- bdevperf/common.sh@10 -- # local filename= 00:12:02.342 19:12:39 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:12:02.342 00:12:02.342 19:12:39 -- bdevperf/common.sh@18 -- # job='[job3]' 00:12:02.342 19:12:39 -- bdevperf/common.sh@19 -- # echo 00:12:02.342 19:12:39 -- bdevperf/common.sh@20 -- # cat 00:12:02.342 19:12:39 -- bdevperf/test_config.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:12:04.903 19:12:42 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-02-14 19:12:39.074642] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:12:04.903 [2024-02-14 19:12:39.074836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:04.903 Using job config with 4 jobs 00:12:04.903 EAL: TSC is not safe to use in SMP mode 00:12:04.903 EAL: TSC is not invariant 00:12:04.903 [2024-02-14 19:12:39.835549] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.903 [2024-02-14 19:12:39.915878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.903 [2024-02-14 19:12:39.915931] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:04.903 cpumask for '\''job0'\'' is too big 00:12:04.903 cpumask for '\''job1'\'' is too big 00:12:04.903 cpumask for '\''job2'\'' is too big 00:12:04.903 cpumask for '\''job3'\'' is too big 00:12:04.903 Running I/O for 2 seconds... 
00:12:04.903 00:12:04.903 Latency(us) 00:12:04.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.903 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc0 : 2.00 210258.33 205.33 0.00 0.00 1217.37 388.14 2496.61 00:12:04.903 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc1 : 2.00 210249.61 205.32 0.00 0.00 1217.26 360.84 2481.00 00:12:04.903 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc0 : 2.00 210269.30 205.34 0.00 0.00 1216.77 378.39 2090.91 00:12:04.903 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc1 : 2.00 210257.86 205.33 0.00 0.00 1216.71 339.38 2075.31 00:12:04.903 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc0 : 2.00 210249.36 205.32 0.00 0.00 1216.43 368.64 1693.01 00:12:04.903 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc1 : 2.00 210238.65 205.31 0.00 0.00 1216.36 327.68 1677.41 00:12:04.903 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc0 : 2.00 210228.78 205.30 0.00 0.00 1216.13 362.79 1466.76 00:12:04.903 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc1 : 2.00 210218.29 205.29 0.00 0.00 1216.08 325.73 1482.36 00:12:04.903 =================================================================================================================== 00:12:04.903 Total : 1681970.18 1642.55 0.00 0.00 1216.64 325.73 2496.61 00:12:04.903 [2024-02-14 19:12:41.947152] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:12:04.903 19:12:42 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-02-14 19:12:39.074642] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:12:04.903 [2024-02-14 19:12:39.074836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:04.903 Using job config with 4 jobs 00:12:04.903 EAL: TSC is not safe to use in SMP mode 00:12:04.903 EAL: TSC is not invariant 00:12:04.903 [2024-02-14 19:12:39.835549] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.903 [2024-02-14 19:12:39.915878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.903 [2024-02-14 19:12:39.915931] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:04.903 cpumask for '\''job0'\'' is too big 00:12:04.903 cpumask for '\''job1'\'' is too big 00:12:04.903 cpumask for '\''job2'\'' is too big 00:12:04.903 cpumask for '\''job3'\'' is too big 00:12:04.903 Running I/O for 2 seconds... 
00:12:04.903 00:12:04.903 Latency(us) 00:12:04.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.903 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc0 : 2.00 210258.33 205.33 0.00 0.00 1217.37 388.14 2496.61 00:12:04.903 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc1 : 2.00 210249.61 205.32 0.00 0.00 1217.26 360.84 2481.00 00:12:04.903 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc0 : 2.00 210269.30 205.34 0.00 0.00 1216.77 378.39 2090.91 00:12:04.903 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc1 : 2.00 210257.86 205.33 0.00 0.00 1216.71 339.38 2075.31 00:12:04.903 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc0 : 2.00 210249.36 205.32 0.00 0.00 1216.43 368.64 1693.01 00:12:04.903 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc1 : 2.00 210238.65 205.31 0.00 0.00 1216.36 327.68 1677.41 00:12:04.903 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc0 : 2.00 210228.78 205.30 0.00 0.00 1216.13 362.79 1466.76 00:12:04.903 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc1 : 2.00 210218.29 205.29 0.00 0.00 1216.08 325.73 1482.36 00:12:04.903 =================================================================================================================== 00:12:04.903 Total : 1681970.18 1642.55 0.00 0.00 1216.64 325.73 2496.61 00:12:04.903 [2024-02-14 19:12:41.947152] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:12:04.903 19:12:42 -- bdevperf/common.sh@32 -- # echo '[2024-02-14 19:12:39.074642] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:12:04.903 [2024-02-14 19:12:39.074836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:04.903 Using job config with 4 jobs 00:12:04.903 EAL: TSC is not safe to use in SMP mode 00:12:04.903 EAL: TSC is not invariant 00:12:04.903 [2024-02-14 19:12:39.835549] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.903 [2024-02-14 19:12:39.915878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.903 [2024-02-14 19:12:39.915931] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:04.903 cpumask for '\''job0'\'' is too big 00:12:04.903 cpumask for '\''job1'\'' is too big 00:12:04.903 cpumask for '\''job2'\'' is too big 00:12:04.903 cpumask for '\''job3'\'' is too big 00:12:04.903 Running I/O for 2 seconds... 
00:12:04.903 00:12:04.903 Latency(us) 00:12:04.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.903 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc0 : 2.00 210258.33 205.33 0.00 0.00 1217.37 388.14 2496.61 00:12:04.903 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc1 : 2.00 210249.61 205.32 0.00 0.00 1217.26 360.84 2481.00 00:12:04.903 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc0 : 2.00 210269.30 205.34 0.00 0.00 1216.77 378.39 2090.91 00:12:04.903 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc1 : 2.00 210257.86 205.33 0.00 0.00 1216.71 339.38 2075.31 00:12:04.903 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc0 : 2.00 210249.36 205.32 0.00 0.00 1216.43 368.64 1693.01 00:12:04.903 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc1 : 2.00 210238.65 205.31 0.00 0.00 1216.36 327.68 1677.41 00:12:04.903 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.903 Malloc0 : 2.00 210228.78 205.30 0.00 0.00 1216.13 362.79 1466.76 00:12:04.904 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:12:04.904 Malloc1 : 2.00 210218.29 205.29 0.00 0.00 1216.08 325.73 1482.36 00:12:04.904 =================================================================================================================== 00:12:04.904 Total : 1681970.18 1642.55 0.00 0.00 1216.64 325.73 2496.61 00:12:04.904 [2024-02-14 19:12:41.947152] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:12:04.904 19:12:42 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:12:04.904 19:12:42 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:12:04.904 19:12:42 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:12:04.904 19:12:42 -- bdevperf/test_config.sh@44 -- # cleanup 00:12:04.904 19:12:42 -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:12:04.904 19:12:42 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:04.904 00:12:04.904 real 0m11.463s 00:12:04.904 user 0m9.094s 00:12:04.904 sys 0m2.405s 00:12:04.904 19:12:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:04.904 19:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:04.904 ************************************ 00:12:04.904 END TEST bdevperf_config 00:12:04.904 ************************************ 00:12:04.904 19:12:42 -- spdk/autotest.sh@198 -- # uname -s 00:12:04.904 19:12:42 -- spdk/autotest.sh@198 -- # [[ FreeBSD == Linux ]] 00:12:04.904 19:12:42 -- spdk/autotest.sh@204 -- # uname -s 00:12:04.904 19:12:42 -- spdk/autotest.sh@204 -- # [[ FreeBSD == Linux ]] 00:12:04.904 19:12:42 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:12:04.904 19:12:42 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:04.904 19:12:42 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:04.904 19:12:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:04.904 19:12:42 -- common/autotest_common.sh@10 -- # set 
+x 00:12:04.904 ************************************ 00:12:04.904 START TEST blockdev_nvme 00:12:04.904 ************************************ 00:12:04.904 19:12:42 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:12:05.163 * Looking for test storage... 00:12:05.163 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:12:05.163 19:12:42 -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:05.163 19:12:42 -- bdev/nbd_common.sh@6 -- # set -e 00:12:05.163 19:12:42 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:05.163 19:12:42 -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:05.163 19:12:42 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:05.163 19:12:42 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:05.163 19:12:42 -- bdev/blockdev.sh@18 -- # : 00:12:05.163 19:12:42 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:05.163 19:12:42 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:05.163 19:12:42 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:05.163 19:12:42 -- bdev/blockdev.sh@672 -- # uname -s 00:12:05.163 19:12:42 -- bdev/blockdev.sh@672 -- # '[' FreeBSD = Linux ']' 00:12:05.163 19:12:42 -- bdev/blockdev.sh@677 -- # PRE_RESERVED_MEM=2048 00:12:05.163 19:12:42 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:12:05.163 19:12:42 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:05.163 19:12:42 -- bdev/blockdev.sh@682 -- # dek= 00:12:05.163 19:12:42 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:05.163 19:12:42 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:05.163 19:12:42 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:05.163 19:12:42 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:12:05.163 19:12:42 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:12:05.163 19:12:42 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:05.163 19:12:42 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=55125 00:12:05.163 19:12:42 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:05.163 19:12:42 -- bdev/blockdev.sh@47 -- # waitforlisten 55125 00:12:05.163 19:12:42 -- bdev/blockdev.sh@44 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:05.163 19:12:42 -- common/autotest_common.sh@817 -- # '[' -z 55125 ']' 00:12:05.163 19:12:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.163 19:12:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:05.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.163 19:12:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.163 19:12:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:05.163 19:12:42 -- common/autotest_common.sh@10 -- # set +x 00:12:05.163 [2024-02-14 19:12:42.372384] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
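The spdk_tgt startup traced above follows the harness pattern used by every suite in this log: launch the target in the background, record its pid (55125 here), install a kill trap, and block in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and using rpc.py purely as an illustrative readiness probe (not the literal autotest_common.sh implementation):

    "$rootdir"/build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # poll until the target is up and answering RPCs on the default socket
    until "$rootdir"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done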
00:12:05.163 [2024-02-14 19:12:42.372599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:05.422 EAL: TSC is not safe to use in SMP mode 00:12:05.422 EAL: TSC is not invariant 00:12:05.422 [2024-02-14 19:12:42.828224] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.681 [2024-02-14 19:12:42.908390] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:05.681 [2024-02-14 19:12:42.908498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.247 19:12:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:06.247 19:12:43 -- common/autotest_common.sh@850 -- # return 0 00:12:06.247 19:12:43 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:06.247 19:12:43 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:12:06.247 19:12:43 -- bdev/blockdev.sh@79 -- # local json 00:12:06.247 19:12:43 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:12:06.247 19:12:43 -- bdev/blockdev.sh@80 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:06.247 19:12:43 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:12:06.247 19:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.247 19:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:06.247 [2024-02-14 19:12:43.579551] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:06.247 19:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.247 19:12:43 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:06.247 19:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.247 19:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:06.247 19:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.247 19:12:43 -- bdev/blockdev.sh@738 -- # cat 00:12:06.247 19:12:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:06.247 19:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.247 19:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:06.247 19:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.247 19:12:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:06.247 19:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.247 19:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:06.505 19:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.505 19:12:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:06.505 19:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.505 19:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:06.505 19:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.505 19:12:43 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:06.505 19:12:43 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:06.505 19:12:43 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:06.505 19:12:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:06.505 19:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:06.505 19:12:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:06.505 19:12:43 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:06.505 19:12:43 -- 
bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "07b63d19-cb6d-11ee-af6b-4feeebbbadda"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "07b63d19-cb6d-11ee-af6b-4feeebbbadda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:12:06.505 19:12:43 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:06.506 19:12:43 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:06.506 19:12:43 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:12:06.506 19:12:43 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:06.506 19:12:43 -- bdev/blockdev.sh@752 -- # killprocess 55125 00:12:06.506 19:12:43 -- common/autotest_common.sh@924 -- # '[' -z 55125 ']' 00:12:06.506 19:12:43 -- common/autotest_common.sh@928 -- # kill -0 55125 00:12:06.506 19:12:43 -- common/autotest_common.sh@929 -- # uname 00:12:06.506 19:12:43 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:12:06.506 19:12:43 -- common/autotest_common.sh@932 -- # ps -c -o command 55125 00:12:06.506 19:12:43 -- common/autotest_common.sh@932 -- # tail -1 00:12:06.506 19:12:43 -- common/autotest_common.sh@932 -- # process_name=spdk_tgt 00:12:06.506 19:12:43 -- common/autotest_common.sh@934 -- # '[' spdk_tgt = sudo ']' 00:12:06.506 killing process with pid 55125 00:12:06.506 19:12:43 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 55125' 00:12:06.506 19:12:43 -- common/autotest_common.sh@943 -- # kill 55125 00:12:06.506 19:12:43 -- common/autotest_common.sh@948 -- # wait 55125 00:12:06.764 19:12:43 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:06.764 19:12:43 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:06.764 19:12:43 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:12:06.764 19:12:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:06.764 19:12:43 -- common/autotest_common.sh@10 -- # set +x 00:12:06.764 ************************************ 00:12:06.764 START TEST bdev_hello_world 00:12:06.764 ************************************ 00:12:06.764 19:12:43 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:12:06.764 [2024-02-14 19:12:43.963280] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 
23.11.0 initialization... 00:12:06.764 [2024-02-14 19:12:43.963506] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:07.331 EAL: TSC is not safe to use in SMP mode 00:12:07.331 EAL: TSC is not invariant 00:12:07.331 [2024-02-14 19:12:44.698039] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.589 [2024-02-14 19:12:44.776519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.589 [2024-02-14 19:12:44.776575] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:07.589 [2024-02-14 19:12:44.831250] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:07.589 [2024-02-14 19:12:44.900518] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:07.589 [2024-02-14 19:12:44.900556] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:12:07.589 [2024-02-14 19:12:44.900567] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:07.589 [2024-02-14 19:12:44.901104] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:07.589 [2024-02-14 19:12:44.901466] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:07.589 [2024-02-14 19:12:44.901488] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:07.589 [2024-02-14 19:12:44.901665] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:07.589 00:12:07.589 [2024-02-14 19:12:44.901679] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:07.589 [2024-02-14 19:12:44.901740] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:07.848 00:12:07.848 real 0m1.095s 00:12:07.848 user 0m0.321s 00:12:07.848 sys 0m0.772s 00:12:07.848 19:12:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:07.848 19:12:45 -- common/autotest_common.sh@10 -- # set +x 00:12:07.848 ************************************ 00:12:07.848 END TEST bdev_hello_world 00:12:07.848 ************************************ 00:12:07.848 19:12:45 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:07.848 19:12:45 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:07.848 19:12:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:07.848 19:12:45 -- common/autotest_common.sh@10 -- # set +x 00:12:07.848 ************************************ 00:12:07.848 START TEST bdev_bounds 00:12:07.848 ************************************ 00:12:07.848 19:12:45 -- common/autotest_common.sh@1102 -- # bdev_bounds '' 00:12:07.848 19:12:45 -- bdev/blockdev.sh@288 -- # bdevio_pid=55184 00:12:07.848 19:12:45 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:07.848 Process bdevio pid: 55184 00:12:07.848 19:12:45 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 55184' 00:12:07.848 19:12:45 -- bdev/blockdev.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:07.848 19:12:45 -- bdev/blockdev.sh@291 -- # waitforlisten 55184 00:12:07.848 19:12:45 -- common/autotest_common.sh@817 -- # '[' -z 55184 ']' 
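bdev_bounds starts bdevio with -w, so the app comes up idle and waits for an RPC trigger, then fires the registered CUnit tests from tests.py, as the next lines show. A condensed sketch of that two-step flow (binary paths and flags copied from the trace; backgrounding and cleanup simplified):

    "$rootdir"/test/bdev/bdevio/bdevio -w -s 2048 --json "$rootdir"/test/bdev/bdev.json '' &
    bdevio_pid=$!
    # once bdevio is listening, kick off its registered tests over RPC
    "$rootdir"/test/bdev/bdevio/tests.py perform_tests
    killprocess "$bdevio_pid"

The I/O targets line it prints below (Nvme0n1: 1310720 blocks of 4096 bytes) is the 5 GiB QEMU namespace, reported as 5120 MiB.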
00:12:07.848 19:12:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.848 19:12:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:07.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.848 19:12:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.848 19:12:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:07.848 19:12:45 -- common/autotest_common.sh@10 -- # set +x 00:12:07.848 [2024-02-14 19:12:45.101290] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:12:07.848 [2024-02-14 19:12:45.101485] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:08.783 EAL: TSC is not safe to use in SMP mode 00:12:08.783 EAL: TSC is not invariant 00:12:08.783 [2024-02-14 19:12:45.848324] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:08.783 [2024-02-14 19:12:45.926267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.783 [2024-02-14 19:12:45.926320] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:08.783 [2024-02-14 19:12:45.926178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.783 [2024-02-14 19:12:45.926268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.783 [2024-02-14 19:12:45.980927] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:08.783 19:12:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:08.783 19:12:46 -- common/autotest_common.sh@850 -- # return 0 00:12:08.783 19:12:46 -- bdev/blockdev.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:08.783 I/O targets: 00:12:08.783 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:08.783 00:12:08.783 00:12:08.783 CUnit - A unit testing framework for C - Version 2.1-3 00:12:08.783 http://cunit.sourceforge.net/ 00:12:08.783 00:12:08.783 00:12:08.783 Suite: bdevio tests on: Nvme0n1 00:12:08.783 Test: blockdev write read block ...passed 00:12:08.783 Test: blockdev write zeroes read block ...passed 00:12:08.783 Test: blockdev write zeroes read no split ...passed 00:12:08.783 Test: blockdev write zeroes read split ...passed 00:12:09.042 Test: blockdev write zeroes read split partial ...passed 00:12:09.042 Test: blockdev reset ...[2024-02-14 19:12:46.203584] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:12:09.042 [2024-02-14 19:12:46.204567] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:09.042 passed 00:12:09.042 Test: blockdev write read 8 blocks ...passed 00:12:09.042 Test: blockdev write read size > 128k ...passed 00:12:09.042 Test: blockdev write read invalid size ...passed 00:12:09.042 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:09.042 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:09.042 Test: blockdev write read max offset ...passed 00:12:09.042 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:09.042 Test: blockdev writev readv 8 blocks ...passed 00:12:09.042 Test: blockdev writev readv 30 x 1block ...passed 00:12:09.042 Test: blockdev writev readv block ...passed 00:12:09.042 Test: blockdev writev readv size > 128k ...passed 00:12:09.042 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:09.042 Test: blockdev comparev and writev ...[2024-02-14 19:12:46.208574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x267947000 len:0x1000 00:12:09.042 [2024-02-14 19:12:46.208611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:12:09.042 passed 00:12:09.042 Test: blockdev nvme passthru rw ...passed 00:12:09.042 Test: blockdev nvme passthru vendor specific ...[2024-02-14 19:12:46.209173] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:12:09.042 [2024-02-14 19:12:46.209188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:12:09.042 passed 00:12:09.042 Test: blockdev nvme admin passthru ...passed 00:12:09.042 Test: blockdev copy ...passed 00:12:09.042 00:12:09.042 Run Summary: Type Total Ran Passed Failed Inactive 00:12:09.042 suites 1 1 n/a 0 0 00:12:09.042 tests 23 23 23 0 0 00:12:09.042 asserts 152 152 152 0 n/a 00:12:09.042 00:12:09.042 Elapsed time = 0.047 seconds 00:12:09.042 0 00:12:09.042 19:12:46 -- bdev/blockdev.sh@293 -- # killprocess 55184 00:12:09.042 19:12:46 -- common/autotest_common.sh@924 -- # '[' -z 55184 ']' 00:12:09.042 19:12:46 -- common/autotest_common.sh@928 -- # kill -0 55184 00:12:09.042 19:12:46 -- common/autotest_common.sh@929 -- # uname 00:12:09.042 19:12:46 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:12:09.042 19:12:46 -- common/autotest_common.sh@932 -- # ps -c -o command 55184 00:12:09.042 19:12:46 -- common/autotest_common.sh@932 -- # tail -1 00:12:09.042 19:12:46 -- common/autotest_common.sh@932 -- # process_name=bdevio 00:12:09.042 killing process with pid 55184 00:12:09.042 19:12:46 -- common/autotest_common.sh@934 -- # '[' bdevio = sudo ']' 00:12:09.042 19:12:46 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 55184' 00:12:09.042 19:12:46 -- common/autotest_common.sh@943 -- # kill 55184 00:12:09.042 [2024-02-14 19:12:46.243102] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:09.042 19:12:46 -- common/autotest_common.sh@948 -- # wait 55184 00:12:09.042 19:12:46 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:09.042 00:12:09.042 real 0m1.297s 00:12:09.042 user 0m1.699s 00:12:09.042 sys 0m0.903s 00:12:09.042 ************************************ 00:12:09.042 END TEST bdev_bounds 00:12:09.042 ************************************ 00:12:09.042 
19:12:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:09.042 19:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:09.042 19:12:46 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:12:09.042 19:12:46 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:12:09.042 19:12:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:09.042 19:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:09.042 ************************************ 00:12:09.042 START TEST bdev_nbd 00:12:09.042 ************************************ 00:12:09.042 19:12:46 -- common/autotest_common.sh@1102 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:12:09.042 19:12:46 -- bdev/blockdev.sh@298 -- # uname -s 00:12:09.042 19:12:46 -- bdev/blockdev.sh@298 -- # [[ FreeBSD == Linux ]] 00:12:09.042 19:12:46 -- bdev/blockdev.sh@298 -- # return 0 00:12:09.042 00:12:09.042 real 0m0.005s 00:12:09.042 user 0m0.001s 00:12:09.042 sys 0m0.002s 00:12:09.042 19:12:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:09.042 19:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:09.042 ************************************ 00:12:09.042 END TEST bdev_nbd 00:12:09.042 ************************************ 00:12:09.301 19:12:46 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:09.301 19:12:46 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:12:09.301 skipping fio tests on NVMe due to multi-ns failures. 00:12:09.301 19:12:46 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:12:09.301 19:12:46 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:09.301 19:12:46 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:09.301 19:12:46 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:12:09.301 19:12:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:09.301 19:12:46 -- common/autotest_common.sh@10 -- # set +x 00:12:09.301 ************************************ 00:12:09.301 START TEST bdev_verify 00:12:09.301 ************************************ 00:12:09.301 19:12:46 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:09.301 [2024-02-14 19:12:46.494464] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
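bdev_verify reuses the bdevperf example binary: -q 128 queue depth, -o 4096-byte I/Os, -w verify (write, read back, and compare), -t 5 seconds, and -m 0x3 with -C so that both reactor cores drive the same Nvme0n1 bdev, which is why the result table below shows two jobs. An equivalent manual invocation, with paths copied from the trace:

    "$rootdir"/build/examples/bdevperf --json "$rootdir"/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''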
00:12:09.301 [2024-02-14 19:12:46.494611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:09.869 EAL: TSC is not safe to use in SMP mode 00:12:09.869 EAL: TSC is not invariant 00:12:09.869 [2024-02-14 19:12:47.226749] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:10.127 [2024-02-14 19:12:47.305978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.127 [2024-02-14 19:12:47.306041] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:10.127 [2024-02-14 19:12:47.305965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.127 [2024-02-14 19:12:47.360753] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:10.127 Running I/O for 5 seconds... 00:12:15.440 00:12:15.440 Latency(us) 00:12:15.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.440 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:15.440 Verification LBA range: start 0x0 length 0xa0000 00:12:15.440 Nvme0n1 : 5.00 31156.85 121.71 0.00 0.00 4102.13 803.60 9611.94 00:12:15.440 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:15.440 Verification LBA range: start 0xa0000 length 0xa0000 00:12:15.440 Nvme0n1 : 5.01 31028.04 121.20 0.00 0.00 4119.63 401.80 9487.11 00:12:15.440 =================================================================================================================== 00:12:15.440 Total : 62184.90 242.91 0.00 0.00 4110.86 401.80 9611.94 00:12:15.440 [2024-02-14 19:12:52.437258] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:33.529 00:12:33.529 real 0m23.160s 00:12:33.529 user 0m44.510s 00:12:33.529 sys 0m0.807s 00:12:33.529 19:13:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:33.529 19:13:09 -- common/autotest_common.sh@10 -- # set +x 00:12:33.529 ************************************ 00:12:33.529 END TEST bdev_verify 00:12:33.529 ************************************ 00:12:33.529 19:13:09 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:33.529 19:13:09 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:12:33.529 19:13:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:33.529 19:13:09 -- common/autotest_common.sh@10 -- # set +x 00:12:33.529 ************************************ 00:12:33.529 START TEST bdev_verify_big_io 00:12:33.529 ************************************ 00:12:33.529 19:13:09 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:33.529 [2024-02-14 19:13:09.706257] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
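The MiB/s column in the verify table above is just IOPS times the 4096-byte I/O size; for the first job, for example:

    echo 'scale=3; 31156.85 * 4096 / (1024 * 1024)' | bc    # 121.706, i.e. the 121.71 MiB/s reported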
00:12:33.529 [2024-02-14 19:13:09.706528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:33.529 EAL: TSC is not safe to use in SMP mode 00:12:33.529 EAL: TSC is not invariant 00:12:33.529 [2024-02-14 19:13:10.509900] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:33.529 [2024-02-14 19:13:10.632718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.529 [2024-02-14 19:13:10.632816] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:33.529 [2024-02-14 19:13:10.632712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.529 [2024-02-14 19:13:10.693027] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:33.529 Running I/O for 5 seconds... 00:12:38.797 00:12:38.797 Latency(us) 00:12:38.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.797 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:38.797 Verification LBA range: start 0x0 length 0xa000 00:12:38.797 Nvme0n1 : 5.01 15317.02 957.31 0.00 0.00 8305.25 108.74 25340.57 00:12:38.797 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:38.797 Verification LBA range: start 0xa000 length 0xa000 00:12:38.797 Nvme0n1 : 5.01 15401.37 962.59 0.00 0.00 8260.77 81.92 24466.76 00:12:38.797 =================================================================================================================== 00:12:38.797 Total : 30718.40 1919.90 0.00 0.00 8282.95 81.92 25340.57 00:12:38.797 [2024-02-14 19:13:15.777868] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:40.176 00:12:40.176 real 0m7.461s 00:12:40.176 user 0m12.906s 00:12:40.176 sys 0m0.905s 00:12:40.176 ************************************ 00:12:40.176 END TEST bdev_verify_big_io 00:12:40.176 ************************************ 00:12:40.176 19:13:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:40.176 19:13:17 -- common/autotest_common.sh@10 -- # set +x 00:12:40.176 19:13:17 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:40.176 19:13:17 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:12:40.176 19:13:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:40.176 19:13:17 -- common/autotest_common.sh@10 -- # set +x 00:12:40.176 ************************************ 00:12:40.176 START TEST bdev_write_zeroes 00:12:40.176 ************************************ 00:12:40.176 19:13:17 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:40.176 [2024-02-14 19:13:17.214796] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
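bdev_write_zeroes, whose startup banner begins above, reuses the same harness with only the workload changed: a one-second run of write_zeroes commands against Nvme0n1 on a single core (the EAL parameters show -c 0x1). An equivalent manual invocation, with flags copied from the trace:

    "$rootdir"/build/examples/bdevperf --json "$rootdir"/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1 ''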
00:12:40.176 [2024-02-14 19:13:17.215039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:40.744 EAL: TSC is not safe to use in SMP mode 00:12:40.744 EAL: TSC is not invariant 00:12:40.744 [2024-02-14 19:13:18.002382] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.744 [2024-02-14 19:13:18.133029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.744 [2024-02-14 19:13:18.133138] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:41.004 [2024-02-14 19:13:18.193259] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:41.004 Running I/O for 1 seconds... 00:12:41.959 00:12:41.959 Latency(us) 00:12:41.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.959 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:41.959 Nvme0n1 : 1.00 62324.49 243.46 0.00 0.00 2051.33 651.46 17850.75 00:12:41.959 =================================================================================================================== 00:12:41.959 Total : 62324.49 243.46 0.00 0.00 2051.33 651.46 17850.75 00:12:41.959 [2024-02-14 19:13:19.266830] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:42.225 00:12:42.225 real 0m2.300s 00:12:42.225 user 0m1.411s 00:12:42.225 sys 0m0.865s 00:12:42.225 19:13:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:42.225 ************************************ 00:12:42.225 END TEST bdev_write_zeroes 00:12:42.225 ************************************ 00:12:42.225 19:13:19 -- common/autotest_common.sh@10 -- # set +x 00:12:42.225 19:13:19 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:42.225 19:13:19 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:12:42.225 19:13:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:42.225 19:13:19 -- common/autotest_common.sh@10 -- # set +x 00:12:42.225 ************************************ 00:12:42.225 START TEST bdev_json_nonenclosed 00:12:42.225 ************************************ 00:12:42.225 19:13:19 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:42.225 [2024-02-14 19:13:19.559570] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
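bdev_json_nonenclosed is a negative test: bdevperf is pointed at a config that is not a single enclosing JSON object and is expected to abort during subsystem init with the "not enclosed in {}" error visible below. A hedged sketch of such a check, not the literal blockdev.sh logic (only the nonenclosed.json file name is taken from the trace):

    if "$rootdir"/build/examples/bdevperf --json "$rootdir"/test/bdev/nonenclosed.json \
           -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo 'expected bdevperf to reject nonenclosed.json' >&2
        exit 1
    fi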
00:12:42.225 [2024-02-14 19:13:19.559917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:43.160 EAL: TSC is not safe to use in SMP mode 00:12:43.160 EAL: TSC is not invariant 00:12:43.160 [2024-02-14 19:13:20.296791] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.160 [2024-02-14 19:13:20.407936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.160 [2024-02-14 19:13:20.408029] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:43.160 [2024-02-14 19:13:20.408133] json_config.c: 598:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:43.160 [2024-02-14 19:13:20.408149] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:43.160 [2024-02-14 19:13:20.408161] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:43.160 [2024-02-14 19:13:20.408181] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:43.160 00:12:43.160 real 0m1.000s 00:12:43.160 user 0m0.211s 00:12:43.160 sys 0m0.787s 00:12:43.160 19:13:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:43.160 19:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.160 ************************************ 00:12:43.160 END TEST bdev_json_nonenclosed 00:12:43.160 ************************************ 00:12:43.419 19:13:20 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:43.419 19:13:20 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:12:43.419 19:13:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:43.419 19:13:20 -- common/autotest_common.sh@10 -- # set +x 00:12:43.419 ************************************ 00:12:43.419 START TEST bdev_json_nonarray 00:12:43.419 ************************************ 00:12:43.419 19:13:20 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:43.419 [2024-02-14 19:13:20.604699] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 
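bdev_json_nonarray repeats the same negative pattern with a config whose "subsystems" key is not an array; the shape below is illustrative only (the repo's nonarray.json may well differ):

    cat > /tmp/nonarray-example.json <<'EOF'
    { "subsystems": { "subsystem": "bdev", "config": [] } }
    EOF
    # bdevperf should refuse this with: Invalid JSON configuration: 'subsystems' should be an array.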
00:12:43.419 [2024-02-14 19:13:20.605054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:43.986 EAL: TSC is not safe to use in SMP mode 00:12:43.986 EAL: TSC is not invariant 00:12:43.986 [2024-02-14 19:13:21.341647] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.245 [2024-02-14 19:13:21.453041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.245 [2024-02-14 19:13:21.453132] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:44.245 [2024-02-14 19:13:21.453239] json_config.c: 604:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:44.245 [2024-02-14 19:13:21.453253] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:44.245 [2024-02-14 19:13:21.453280] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:44.245 [2024-02-14 19:13:21.453301] app.c: 883:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:44.245 00:12:44.245 real 0m1.000s 00:12:44.245 user 0m0.189s 00:12:44.245 sys 0m0.794s 00:12:44.245 19:13:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:44.245 19:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:44.245 ************************************ 00:12:44.245 END TEST bdev_json_nonarray 00:12:44.245 ************************************ 00:12:44.245 19:13:21 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:12:44.245 19:13:21 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:12:44.245 19:13:21 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:12:44.245 19:13:21 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:12:44.245 19:13:21 -- bdev/blockdev.sh@809 -- # cleanup 00:12:44.245 19:13:21 -- bdev/blockdev.sh@21 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:12:44.245 19:13:21 -- bdev/blockdev.sh@22 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:44.245 19:13:21 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:12:44.245 19:13:21 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:12:44.245 19:13:21 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:12:44.245 19:13:21 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:12:44.245 00:12:44.245 real 0m39.468s 00:12:44.245 user 1m3.137s 00:12:44.245 sys 0m6.908s 00:12:44.245 19:13:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:44.245 19:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:44.245 ************************************ 00:12:44.245 END TEST blockdev_nvme 00:12:44.245 ************************************ 00:12:44.503 19:13:21 -- spdk/autotest.sh@219 -- # uname -s 00:12:44.503 19:13:21 -- spdk/autotest.sh@219 -- # [[ FreeBSD == Linux ]] 00:12:44.503 19:13:21 -- spdk/autotest.sh@222 -- # run_test nvme /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:44.503 19:13:21 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:44.503 19:13:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:44.503 19:13:21 -- common/autotest_common.sh@10 -- # set +x 00:12:44.503 ************************************ 00:12:44.503 START TEST nvme 00:12:44.503 ************************************ 
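The START TEST / END TEST banners and the real/user/sys lines that separate every suite in this log come from the run_test helper, which wraps each test function in timing and xtrace control. A simplified sketch of that wrapper (illustrative only, not the literal autotest_common.sh code):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # run the suite and report real/user/sys
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }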
00:12:44.503 19:13:21 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:12:44.503 * Looking for test storage... 00:12:44.503 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:12:44.503 19:13:21 -- nvme/nvme.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:44.762 hw.nic_uio.bdfs="0:6:0" 00:12:44.762 19:13:22 -- nvme/nvme.sh@79 -- # uname 00:12:44.762 19:13:22 -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:12:44.762 19:13:22 -- nvme/nvme.sh@84 -- # run_test nvme_reset /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:44.762 19:13:22 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:12:44.762 19:13:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:44.762 19:13:22 -- common/autotest_common.sh@10 -- # set +x 00:12:44.762 ************************************ 00:12:44.762 START TEST nvme_reset 00:12:44.762 ************************************ 00:12:44.762 19:13:22 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:12:45.699 EAL: TSC is not safe to use in SMP mode 00:12:45.699 EAL: TSC is not invariant 00:12:45.699 [2024-02-14 19:13:22.759187] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:45.699 Initializing NVMe Controllers 00:12:45.699 Skipping QEMU NVMe SSD at 0000:00:06.0 00:12:45.699 No NVMe controller found, /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:12:45.699 00:12:45.699 real 0m0.801s 00:12:45.699 user 0m0.015s 00:12:45.699 sys 0m0.785s 00:12:45.699 19:13:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:45.699 ************************************ 00:12:45.699 19:13:22 -- common/autotest_common.sh@10 -- # set +x 00:12:45.699 END TEST nvme_reset 00:12:45.699 ************************************ 00:12:45.699 19:13:22 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:12:45.699 19:13:22 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:45.699 19:13:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:45.699 19:13:22 -- common/autotest_common.sh@10 -- # set +x 00:12:45.699 ************************************ 00:12:45.699 START TEST nvme_identify 00:12:45.699 ************************************ 00:12:45.699 19:13:22 -- common/autotest_common.sh@1102 -- # nvme_identify 00:12:45.699 19:13:22 -- nvme/nvme.sh@12 -- # bdfs=() 00:12:45.699 19:13:22 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:12:45.699 19:13:22 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:12:45.699 19:13:22 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:12:45.699 19:13:22 -- common/autotest_common.sh@1496 -- # bdfs=() 00:12:45.699 19:13:22 -- common/autotest_common.sh@1496 -- # local bdfs 00:12:45.699 19:13:22 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:45.699 19:13:22 -- common/autotest_common.sh@1497 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:45.699 19:13:22 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:12:45.699 19:13:22 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:12:45.699 19:13:22 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 00:12:45.699 19:13:22 -- nvme/nvme.sh@14 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:12:46.267 EAL: TSC is not safe to use in SMP mode 00:12:46.267 EAL: TSC is not invariant 00:12:46.267 
[2024-02-14 19:13:23.676398] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:46.267 ===================================================== 00:12:46.267 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:46.267 ===================================================== 00:12:46.267 Controller Capabilities/Features 00:12:46.267 ================================ 00:12:46.267 Vendor ID: 1b36 00:12:46.267 Subsystem Vendor ID: 1af4 00:12:46.267 Serial Number: 12340 00:12:46.267 Model Number: QEMU NVMe Ctrl 00:12:46.268 Firmware Version: 8.0.0 00:12:46.268 Recommended Arb Burst: 6 00:12:46.268 IEEE OUI Identifier: 00 54 52 00:12:46.268 Multi-path I/O 00:12:46.268 May have multiple subsystem ports: No 00:12:46.268 May have multiple controllers: No 00:12:46.268 Associated with SR-IOV VF: No 00:12:46.268 Max Data Transfer Size: 524288 00:12:46.268 Max Number of Namespaces: 256 00:12:46.268 Max Number of I/O Queues: 64 00:12:46.268 NVMe Specification Version (VS): 1.4 00:12:46.268 NVMe Specification Version (Identify): 1.4 00:12:46.268 Maximum Queue Entries: 2048 00:12:46.268 Contiguous Queues Required: Yes 00:12:46.268 Arbitration Mechanisms Supported 00:12:46.268 Weighted Round Robin: Not Supported 00:12:46.268 Vendor Specific: Not Supported 00:12:46.268 Reset Timeout: 7500 ms 00:12:46.268 Doorbell Stride: 4 bytes 00:12:46.268 NVM Subsystem Reset: Not Supported 00:12:46.268 Command Sets Supported 00:12:46.268 NVM Command Set: Supported 00:12:46.268 Boot Partition: Not Supported 00:12:46.268 Memory Page Size Minimum: 4096 bytes 00:12:46.268 Memory Page Size Maximum: 65536 bytes 00:12:46.268 Persistent Memory Region: Not Supported 00:12:46.268 Optional Asynchronous Events Supported 00:12:46.268 Namespace Attribute Notices: Supported 00:12:46.268 Firmware Activation Notices: Not Supported 00:12:46.268 ANA Change Notices: Not Supported 00:12:46.268 PLE Aggregate Log Change Notices: Not Supported 00:12:46.268 LBA Status Info Alert Notices: Not Supported 00:12:46.268 EGE Aggregate Log Change Notices: Not Supported 00:12:46.268 Normal NVM Subsystem Shutdown event: Not Supported 00:12:46.268 Zone Descriptor Change Notices: Not Supported 00:12:46.268 Discovery Log Change Notices: Not Supported 00:12:46.268 Controller Attributes 00:12:46.268 128-bit Host Identifier: Not Supported 00:12:46.268 Non-Operational Permissive Mode: Not Supported 00:12:46.268 NVM Sets: Not Supported 00:12:46.268 Read Recovery Levels: Not Supported 00:12:46.268 Endurance Groups: Not Supported 00:12:46.268 Predictable Latency Mode: Not Supported 00:12:46.268 Traffic Based Keep ALive: Not Supported 00:12:46.268 Namespace Granularity: Not Supported 00:12:46.268 SQ Associations: Not Supported 00:12:46.268 UUID List: Not Supported 00:12:46.268 Multi-Domain Subsystem: Not Supported 00:12:46.268 Fixed Capacity Management: Not Supported 00:12:46.268 Variable Capacity Management: Not Supported 00:12:46.268 Delete Endurance Group: Not Supported 00:12:46.268 Delete NVM Set: Not Supported 00:12:46.268 Extended LBA Formats Supported: Supported 00:12:46.268 Flexible Data Placement Supported: Not Supported 00:12:46.268 00:12:46.268 Controller Memory Buffer Support 00:12:46.268 ================================ 00:12:46.268 Supported: No 00:12:46.268 00:12:46.268 Persistent Memory Region Support 00:12:46.268 ================================ 00:12:46.268 Supported: No 00:12:46.268 00:12:46.268 Admin Command Set Attributes 00:12:46.268 ============================ 00:12:46.268 Security Send/Receive: Not 
Supported 00:12:46.268 Format NVM: Supported 00:12:46.268 Firmware Activate/Download: Not Supported 00:12:46.268 Namespace Management: Supported 00:12:46.268 Device Self-Test: Not Supported 00:12:46.268 Directives: Supported 00:12:46.268 NVMe-MI: Not Supported 00:12:46.268 Virtualization Management: Not Supported 00:12:46.268 Doorbell Buffer Config: Supported 00:12:46.268 Get LBA Status Capability: Not Supported 00:12:46.268 Command & Feature Lockdown Capability: Not Supported 00:12:46.268 Abort Command Limit: 4 00:12:46.268 Async Event Request Limit: 4 00:12:46.268 Number of Firmware Slots: N/A 00:12:46.268 Firmware Slot 1 Read-Only: N/A 00:12:46.268 Firmware Activation Without Reset: N/A 00:12:46.268 Multiple Update Detection Support: N/A 00:12:46.268 Firmware Update Granularity: No Information Provided 00:12:46.268 Per-Namespace SMART Log: Yes 00:12:46.268 Asymmetric Namespace Access Log Page: Not Supported 00:12:46.268 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:46.268 Command Effects Log Page: Supported 00:12:46.268 Get Log Page Extended Data: Supported 00:12:46.268 Telemetry Log Pages: Not Supported 00:12:46.268 Persistent Event Log Pages: Not Supported 00:12:46.268 Supported Log Pages Log Page: May Support 00:12:46.268 Commands Supported & Effects Log Page: Not Supported 00:12:46.268 Feature Identifiers & Effects Log Page:May Support 00:12:46.268 NVMe-MI Commands & Effects Log Page: May Support 00:12:46.268 Data Area 4 for Telemetry Log: Not Supported 00:12:46.268 Error Log Page Entries Supported: 1 00:12:46.268 Keep Alive: Not Supported 00:12:46.268 00:12:46.268 NVM Command Set Attributes 00:12:46.268 ========================== 00:12:46.268 Submission Queue Entry Size 00:12:46.268 Max: 64 00:12:46.268 Min: 64 00:12:46.268 Completion Queue Entry Size 00:12:46.268 Max: 16 00:12:46.268 Min: 16 00:12:46.268 Number of Namespaces: 256 00:12:46.268 Compare Command: Supported 00:12:46.268 Write Uncorrectable Command: Not Supported 00:12:46.268 Dataset Management Command: Supported 00:12:46.268 Write Zeroes Command: Supported 00:12:46.268 Set Features Save Field: Supported 00:12:46.268 Reservations: Not Supported 00:12:46.268 Timestamp: Supported 00:12:46.268 Copy: Supported 00:12:46.268 Volatile Write Cache: Present 00:12:46.268 Atomic Write Unit (Normal): 1 00:12:46.268 Atomic Write Unit (PFail): 1 00:12:46.268 Atomic Compare & Write Unit: 1 00:12:46.268 Fused Compare & Write: Not Supported 00:12:46.268 Scatter-Gather List 00:12:46.268 SGL Command Set: Supported 00:12:46.268 SGL Keyed: Not Supported 00:12:46.268 SGL Bit Bucket Descriptor: Not Supported 00:12:46.268 SGL Metadata Pointer: Not Supported 00:12:46.268 Oversized SGL: Not Supported 00:12:46.268 SGL Metadata Address: Not Supported 00:12:46.268 SGL Offset: Not Supported 00:12:46.268 Transport SGL Data Block: Not Supported 00:12:46.268 Replay Protected Memory Block: Not Supported 00:12:46.268 00:12:46.268 Firmware Slot Information 00:12:46.268 ========================= 00:12:46.268 Active slot: 1 00:12:46.268 Slot 1 Firmware Revision: 1.0 00:12:46.268 00:12:46.268 00:12:46.268 Commands Supported and Effects 00:12:46.268 ============================== 00:12:46.268 Admin Commands 00:12:46.268 -------------- 00:12:46.268 Delete I/O Submission Queue (00h): Supported 00:12:46.268 Create I/O Submission Queue (01h): Supported 00:12:46.268 Get Log Page (02h): Supported 00:12:46.268 Delete I/O Completion Queue (04h): Supported 00:12:46.268 Create I/O Completion Queue (05h): Supported 00:12:46.268 Identify (06h): Supported 00:12:46.268 
Abort (08h): Supported 00:12:46.268 Set Features (09h): Supported 00:12:46.268 Get Features (0Ah): Supported 00:12:46.268 Asynchronous Event Request (0Ch): Supported 00:12:46.268 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:46.268 Directive Send (19h): Supported 00:12:46.268 Directive Receive (1Ah): Supported 00:12:46.268 Virtualization Management (1Ch): Supported 00:12:46.268 Doorbell Buffer Config (7Ch): Supported 00:12:46.268 Format NVM (80h): Supported LBA-Change 00:12:46.268 I/O Commands 00:12:46.268 ------------ 00:12:46.268 Flush (00h): Supported LBA-Change 00:12:46.268 Write (01h): Supported LBA-Change 00:12:46.268 Read (02h): Supported 00:12:46.268 Compare (05h): Supported 00:12:46.268 Write Zeroes (08h): Supported LBA-Change 00:12:46.268 Dataset Management (09h): Supported LBA-Change 00:12:46.268 Unknown (0Ch): Supported 00:12:46.268 Unknown (12h): Supported 00:12:46.268 Copy (19h): Supported LBA-Change 00:12:46.268 Unknown (1Dh): Supported LBA-Change 00:12:46.268 00:12:46.268 Error Log 00:12:46.268 ========= 00:12:46.268 00:12:46.268 Arbitration 00:12:46.268 =========== 00:12:46.268 Arbitration Burst: no limit 00:12:46.268 00:12:46.268 Power Management 00:12:46.268 ================ 00:12:46.268 Number of Power States: 1 00:12:46.268 Current Power State: Power State #0 00:12:46.268 Power State #0: 00:12:46.268 Max Power: 25.00 W 00:12:46.268 Non-Operational State: Operational 00:12:46.268 Entry Latency: 16 microseconds 00:12:46.268 Exit Latency: 4 microseconds 00:12:46.268 Relative Read Throughput: 0 00:12:46.268 Relative Read Latency: 0 00:12:46.268 Relative Write Throughput: 0 00:12:46.268 Relative Write Latency: 0 00:12:46.528 Idle Power: Not Reported 00:12:46.528 Active Power: Not Reported 00:12:46.528 Non-Operational Permissive Mode: Not Supported 00:12:46.528 00:12:46.528 Health Information 00:12:46.528 ================== 00:12:46.528 Critical Warnings: 00:12:46.528 Available Spare Space: OK 00:12:46.528 Temperature: OK 00:12:46.528 Device Reliability: OK 00:12:46.528 Read Only: No 00:12:46.528 Volatile Memory Backup: OK 00:12:46.528 Current Temperature: 323 Kelvin (50 Celsius) 00:12:46.528 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:46.528 Available Spare: 0% 00:12:46.528 Available Spare Threshold: 0% 00:12:46.528 Life Percentage Used: 0% 00:12:46.528 Data Units Read: 22205 00:12:46.528 Data Units Written: 11166 00:12:46.528 Host Read Commands: 465305 00:12:46.528 Host Write Commands: 233266 00:12:46.528 Controller Busy Time: 0 minutes 00:12:46.528 Power Cycles: 0 00:12:46.528 Power On Hours: 0 hours 00:12:46.528 Unsafe Shutdowns: 0 00:12:46.528 Unrecoverable Media Errors: 0 00:12:46.528 Lifetime Error Log Entries: 0 00:12:46.528 Warning Temperature Time: 0 minutes 00:12:46.528 Critical Temperature Time: 0 minutes 00:12:46.528 00:12:46.528 Number of Queues 00:12:46.528 ================ 00:12:46.528 Number of I/O Submission Queues: 64 00:12:46.528 Number of I/O Completion Queues: 64 00:12:46.528 00:12:46.528 ZNS Specific Controller Data 00:12:46.528 ============================ 00:12:46.528 Zone Append Size Limit: 0 00:12:46.528 00:12:46.528 00:12:46.528 Active Namespaces 00:12:46.528 ================= 00:12:46.528 Namespace ID:1 00:12:46.528 Error Recovery Timeout: Unlimited 00:12:46.528 Command Set Identifier: NVM (00h) 00:12:46.528 Deallocate: Supported 00:12:46.528 Deallocated/Unwritten Error: Supported 00:12:46.528 Deallocated Read Value: All 0x00 00:12:46.528 Deallocate in Write Zeroes: Not Supported 00:12:46.528 Deallocated Guard Field: 
0xFFFF 00:12:46.528 Flush: Supported 00:12:46.528 Reservation: Not Supported 00:12:46.528 Namespace Sharing Capabilities: Private 00:12:46.528 Size (in LBAs): 1310720 (5GiB) 00:12:46.528 Capacity (in LBAs): 1310720 (5GiB) 00:12:46.528 Utilization (in LBAs): 1310720 (5GiB) 00:12:46.528 Thin Provisioning: Not Supported 00:12:46.528 Per-NS Atomic Units: No 00:12:46.528 Maximum Single Source Range Length: 128 00:12:46.528 Maximum Copy Length: 128 00:12:46.528 Maximum Source Range Count: 128 00:12:46.528 NGUID/EUI64 Never Reused: No 00:12:46.528 Namespace Write Protected: No 00:12:46.528 Number of LBA Formats: 8 00:12:46.528 Current LBA Format: LBA Format #04 00:12:46.528 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:46.528 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:46.528 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:46.528 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:46.528 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:46.528 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:46.528 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:46.528 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:46.528 00:12:46.528 19:13:23 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:12:46.528 19:13:23 -- nvme/nvme.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:12:47.096 EAL: TSC is not safe to use in SMP mode 00:12:47.097 EAL: TSC is not invariant 00:12:47.097 [2024-02-14 19:13:24.512386] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:47.356 ===================================================== 00:12:47.356 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:47.356 ===================================================== 00:12:47.356 Controller Capabilities/Features 00:12:47.356 ================================ 00:12:47.356 Vendor ID: 1b36 00:12:47.356 Subsystem Vendor ID: 1af4 00:12:47.356 Serial Number: 12340 00:12:47.356 Model Number: QEMU NVMe Ctrl 00:12:47.356 Firmware Version: 8.0.0 00:12:47.356 Recommended Arb Burst: 6 00:12:47.356 IEEE OUI Identifier: 00 54 52 00:12:47.356 Multi-path I/O 00:12:47.356 May have multiple subsystem ports: No 00:12:47.356 May have multiple controllers: No 00:12:47.356 Associated with SR-IOV VF: No 00:12:47.356 Max Data Transfer Size: 524288 00:12:47.356 Max Number of Namespaces: 256 00:12:47.356 Max Number of I/O Queues: 64 00:12:47.356 NVMe Specification Version (VS): 1.4 00:12:47.356 NVMe Specification Version (Identify): 1.4 00:12:47.356 Maximum Queue Entries: 2048 00:12:47.356 Contiguous Queues Required: Yes 00:12:47.356 Arbitration Mechanisms Supported 00:12:47.356 Weighted Round Robin: Not Supported 00:12:47.356 Vendor Specific: Not Supported 00:12:47.356 Reset Timeout: 7500 ms 00:12:47.356 Doorbell Stride: 4 bytes 00:12:47.356 NVM Subsystem Reset: Not Supported 00:12:47.356 Command Sets Supported 00:12:47.356 NVM Command Set: Supported 00:12:47.356 Boot Partition: Not Supported 00:12:47.356 Memory Page Size Minimum: 4096 bytes 00:12:47.356 Memory Page Size Maximum: 65536 bytes 00:12:47.356 Persistent Memory Region: Not Supported 00:12:47.356 Optional Asynchronous Events Supported 00:12:47.356 Namespace Attribute Notices: Supported 00:12:47.356 Firmware Activation Notices: Not Supported 00:12:47.356 ANA Change Notices: Not Supported 00:12:47.356 PLE Aggregate Log Change Notices: Not Supported 00:12:47.356 LBA Status Info Alert Notices: Not Supported 00:12:47.356 EGE Aggregate Log Change 
Notices: Not Supported 00:12:47.357 Normal NVM Subsystem Shutdown event: Not Supported 00:12:47.357 Zone Descriptor Change Notices: Not Supported 00:12:47.357 Discovery Log Change Notices: Not Supported 00:12:47.357 Controller Attributes 00:12:47.357 128-bit Host Identifier: Not Supported 00:12:47.357 Non-Operational Permissive Mode: Not Supported 00:12:47.357 NVM Sets: Not Supported 00:12:47.357 Read Recovery Levels: Not Supported 00:12:47.357 Endurance Groups: Not Supported 00:12:47.357 Predictable Latency Mode: Not Supported 00:12:47.357 Traffic Based Keep ALive: Not Supported 00:12:47.357 Namespace Granularity: Not Supported 00:12:47.357 SQ Associations: Not Supported 00:12:47.357 UUID List: Not Supported 00:12:47.357 Multi-Domain Subsystem: Not Supported 00:12:47.357 Fixed Capacity Management: Not Supported 00:12:47.357 Variable Capacity Management: Not Supported 00:12:47.357 Delete Endurance Group: Not Supported 00:12:47.357 Delete NVM Set: Not Supported 00:12:47.357 Extended LBA Formats Supported: Supported 00:12:47.357 Flexible Data Placement Supported: Not Supported 00:12:47.357 00:12:47.357 Controller Memory Buffer Support 00:12:47.357 ================================ 00:12:47.357 Supported: No 00:12:47.357 00:12:47.357 Persistent Memory Region Support 00:12:47.357 ================================ 00:12:47.357 Supported: No 00:12:47.357 00:12:47.357 Admin Command Set Attributes 00:12:47.357 ============================ 00:12:47.357 Security Send/Receive: Not Supported 00:12:47.357 Format NVM: Supported 00:12:47.357 Firmware Activate/Download: Not Supported 00:12:47.357 Namespace Management: Supported 00:12:47.357 Device Self-Test: Not Supported 00:12:47.357 Directives: Supported 00:12:47.357 NVMe-MI: Not Supported 00:12:47.357 Virtualization Management: Not Supported 00:12:47.357 Doorbell Buffer Config: Supported 00:12:47.357 Get LBA Status Capability: Not Supported 00:12:47.357 Command & Feature Lockdown Capability: Not Supported 00:12:47.357 Abort Command Limit: 4 00:12:47.357 Async Event Request Limit: 4 00:12:47.357 Number of Firmware Slots: N/A 00:12:47.357 Firmware Slot 1 Read-Only: N/A 00:12:47.357 Firmware Activation Without Reset: N/A 00:12:47.357 Multiple Update Detection Support: N/A 00:12:47.357 Firmware Update Granularity: No Information Provided 00:12:47.357 Per-Namespace SMART Log: Yes 00:12:47.357 Asymmetric Namespace Access Log Page: Not Supported 00:12:47.357 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:12:47.357 Command Effects Log Page: Supported 00:12:47.357 Get Log Page Extended Data: Supported 00:12:47.357 Telemetry Log Pages: Not Supported 00:12:47.357 Persistent Event Log Pages: Not Supported 00:12:47.357 Supported Log Pages Log Page: May Support 00:12:47.357 Commands Supported & Effects Log Page: Not Supported 00:12:47.357 Feature Identifiers & Effects Log Page:May Support 00:12:47.357 NVMe-MI Commands & Effects Log Page: May Support 00:12:47.357 Data Area 4 for Telemetry Log: Not Supported 00:12:47.357 Error Log Page Entries Supported: 1 00:12:47.357 Keep Alive: Not Supported 00:12:47.357 00:12:47.357 NVM Command Set Attributes 00:12:47.357 ========================== 00:12:47.357 Submission Queue Entry Size 00:12:47.357 Max: 64 00:12:47.357 Min: 64 00:12:47.357 Completion Queue Entry Size 00:12:47.357 Max: 16 00:12:47.357 Min: 16 00:12:47.357 Number of Namespaces: 256 00:12:47.357 Compare Command: Supported 00:12:47.357 Write Uncorrectable Command: Not Supported 00:12:47.357 Dataset Management Command: Supported 00:12:47.357 Write Zeroes Command: 
Supported 00:12:47.357 Set Features Save Field: Supported 00:12:47.357 Reservations: Not Supported 00:12:47.357 Timestamp: Supported 00:12:47.357 Copy: Supported 00:12:47.357 Volatile Write Cache: Present 00:12:47.357 Atomic Write Unit (Normal): 1 00:12:47.357 Atomic Write Unit (PFail): 1 00:12:47.357 Atomic Compare & Write Unit: 1 00:12:47.357 Fused Compare & Write: Not Supported 00:12:47.357 Scatter-Gather List 00:12:47.357 SGL Command Set: Supported 00:12:47.357 SGL Keyed: Not Supported 00:12:47.357 SGL Bit Bucket Descriptor: Not Supported 00:12:47.357 SGL Metadata Pointer: Not Supported 00:12:47.357 Oversized SGL: Not Supported 00:12:47.357 SGL Metadata Address: Not Supported 00:12:47.357 SGL Offset: Not Supported 00:12:47.357 Transport SGL Data Block: Not Supported 00:12:47.357 Replay Protected Memory Block: Not Supported 00:12:47.357 00:12:47.357 Firmware Slot Information 00:12:47.357 ========================= 00:12:47.357 Active slot: 1 00:12:47.357 Slot 1 Firmware Revision: 1.0 00:12:47.357 00:12:47.357 00:12:47.357 Commands Supported and Effects 00:12:47.357 ============================== 00:12:47.357 Admin Commands 00:12:47.357 -------------- 00:12:47.357 Delete I/O Submission Queue (00h): Supported 00:12:47.357 Create I/O Submission Queue (01h): Supported 00:12:47.357 Get Log Page (02h): Supported 00:12:47.357 Delete I/O Completion Queue (04h): Supported 00:12:47.357 Create I/O Completion Queue (05h): Supported 00:12:47.357 Identify (06h): Supported 00:12:47.357 Abort (08h): Supported 00:12:47.357 Set Features (09h): Supported 00:12:47.357 Get Features (0Ah): Supported 00:12:47.357 Asynchronous Event Request (0Ch): Supported 00:12:47.357 Namespace Attachment (15h): Supported NS-Inventory-Change 00:12:47.357 Directive Send (19h): Supported 00:12:47.357 Directive Receive (1Ah): Supported 00:12:47.357 Virtualization Management (1Ch): Supported 00:12:47.357 Doorbell Buffer Config (7Ch): Supported 00:12:47.357 Format NVM (80h): Supported LBA-Change 00:12:47.357 I/O Commands 00:12:47.357 ------------ 00:12:47.357 Flush (00h): Supported LBA-Change 00:12:47.357 Write (01h): Supported LBA-Change 00:12:47.357 Read (02h): Supported 00:12:47.357 Compare (05h): Supported 00:12:47.357 Write Zeroes (08h): Supported LBA-Change 00:12:47.357 Dataset Management (09h): Supported LBA-Change 00:12:47.357 Unknown (0Ch): Supported 00:12:47.357 Unknown (12h): Supported 00:12:47.357 Copy (19h): Supported LBA-Change 00:12:47.357 Unknown (1Dh): Supported LBA-Change 00:12:47.357 00:12:47.357 Error Log 00:12:47.357 ========= 00:12:47.357 00:12:47.357 Arbitration 00:12:47.357 =========== 00:12:47.357 Arbitration Burst: no limit 00:12:47.357 00:12:47.357 Power Management 00:12:47.357 ================ 00:12:47.357 Number of Power States: 1 00:12:47.357 Current Power State: Power State #0 00:12:47.357 Power State #0: 00:12:47.357 Max Power: 25.00 W 00:12:47.357 Non-Operational State: Operational 00:12:47.357 Entry Latency: 16 microseconds 00:12:47.357 Exit Latency: 4 microseconds 00:12:47.357 Relative Read Throughput: 0 00:12:47.357 Relative Read Latency: 0 00:12:47.357 Relative Write Throughput: 0 00:12:47.357 Relative Write Latency: 0 00:12:47.357 Idle Power: Not Reported 00:12:47.357 Active Power: Not Reported 00:12:47.357 Non-Operational Permissive Mode: Not Supported 00:12:47.357 00:12:47.357 Health Information 00:12:47.357 ================== 00:12:47.357 Critical Warnings: 00:12:47.357 Available Spare Space: OK 00:12:47.357 Temperature: OK 00:12:47.357 Device Reliability: OK 00:12:47.357 Read Only: No 
00:12:47.357 Volatile Memory Backup: OK 00:12:47.357 Current Temperature: 323 Kelvin (50 Celsius) 00:12:47.357 Temperature Threshold: 343 Kelvin (70 Celsius) 00:12:47.357 Available Spare: 0% 00:12:47.357 Available Spare Threshold: 0% 00:12:47.357 Life Percentage Used: 0% 00:12:47.357 Data Units Read: 22205 00:12:47.357 Data Units Written: 11166 00:12:47.357 Host Read Commands: 465305 00:12:47.357 Host Write Commands: 233266 00:12:47.357 Controller Busy Time: 0 minutes 00:12:47.357 Power Cycles: 0 00:12:47.357 Power On Hours: 0 hours 00:12:47.357 Unsafe Shutdowns: 0 00:12:47.357 Unrecoverable Media Errors: 0 00:12:47.357 Lifetime Error Log Entries: 0 00:12:47.357 Warning Temperature Time: 0 minutes 00:12:47.357 Critical Temperature Time: 0 minutes 00:12:47.357 00:12:47.357 Number of Queues 00:12:47.357 ================ 00:12:47.357 Number of I/O Submission Queues: 64 00:12:47.357 Number of I/O Completion Queues: 64 00:12:47.357 00:12:47.357 ZNS Specific Controller Data 00:12:47.357 ============================ 00:12:47.357 Zone Append Size Limit: 0 00:12:47.357 00:12:47.357 00:12:47.357 Active Namespaces 00:12:47.357 ================= 00:12:47.357 Namespace ID:1 00:12:47.358 Error Recovery Timeout: Unlimited 00:12:47.358 Command Set Identifier: NVM (00h) 00:12:47.358 Deallocate: Supported 00:12:47.358 Deallocated/Unwritten Error: Supported 00:12:47.358 Deallocated Read Value: All 0x00 00:12:47.358 Deallocate in Write Zeroes: Not Supported 00:12:47.358 Deallocated Guard Field: 0xFFFF 00:12:47.358 Flush: Supported 00:12:47.358 Reservation: Not Supported 00:12:47.358 Namespace Sharing Capabilities: Private 00:12:47.358 Size (in LBAs): 1310720 (5GiB) 00:12:47.358 Capacity (in LBAs): 1310720 (5GiB) 00:12:47.358 Utilization (in LBAs): 1310720 (5GiB) 00:12:47.358 Thin Provisioning: Not Supported 00:12:47.358 Per-NS Atomic Units: No 00:12:47.358 Maximum Single Source Range Length: 128 00:12:47.358 Maximum Copy Length: 128 00:12:47.358 Maximum Source Range Count: 128 00:12:47.358 NGUID/EUI64 Never Reused: No 00:12:47.358 Namespace Write Protected: No 00:12:47.358 Number of LBA Formats: 8 00:12:47.358 Current LBA Format: LBA Format #04 00:12:47.358 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:47.358 LBA Format #01: Data Size: 512 Metadata Size: 8 00:12:47.358 LBA Format #02: Data Size: 512 Metadata Size: 16 00:12:47.358 LBA Format #03: Data Size: 512 Metadata Size: 64 00:12:47.358 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:12:47.358 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:12:47.358 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:12:47.358 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:12:47.358 00:12:47.358 00:12:47.358 real 0m1.728s 00:12:47.358 user 0m0.049s 00:12:47.358 sys 0m1.694s 00:12:47.358 19:13:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:47.358 19:13:24 -- common/autotest_common.sh@10 -- # set +x 00:12:47.358 ************************************ 00:12:47.358 END TEST nvme_identify 00:12:47.358 ************************************ 00:12:47.358 19:13:24 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:12:47.358 19:13:24 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:47.358 19:13:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:47.358 19:13:24 -- common/autotest_common.sh@10 -- # set +x 00:12:47.358 ************************************ 00:12:47.358 START TEST nvme_perf 00:12:47.358 ************************************ 00:12:47.358 19:13:24 -- common/autotest_common.sh@1102 -- # nvme_perf 00:12:47.358 
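The nvme_perf step that follows drives the controller with spdk_nvme_perf at queue depth 128 using 12288-byte (12 KiB) reads for 1 second (-q 128 -w read -o 12288 -t 1). The MiB/s column in the summary it prints below can be cross-checked from the reported IOPS and the I/O size; a small sketch of that arithmetic, using the values from that summary:

  # sanity check: MiB/s = IOPS * io_size_bytes / 2^20
  awk 'BEGIN { printf "%.2f MiB/s\n", 90285.00 * 12288 / 1048576 }'   # prints 1058.03 MiB/s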
19:13:24 -- nvme/nvme.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:12:48.294 EAL: TSC is not safe to use in SMP mode 00:12:48.294 EAL: TSC is not invariant 00:12:48.294 [2024-02-14 19:13:25.403198] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:49.265 Initializing NVMe Controllers 00:12:49.265 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:49.265 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:12:49.265 Initialization complete. Launching workers. 00:12:49.265 ======================================================== 00:12:49.265 Latency(us) 00:12:49.265 Device Information : IOPS MiB/s Average min max 00:12:49.265 PCIE (0000:00:06.0) NSID 1 from core 0: 90285.00 1058.03 1417.73 140.30 6091.26 00:12:49.265 ======================================================== 00:12:49.265 Total : 90285.00 1058.03 1417.73 140.30 6091.26 00:12:49.265 00:12:49.265 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:12:49.265 ================================================================================= 00:12:49.265 1.00000% : 1240.502us 00:12:49.265 10.00000% : 1287.313us 00:12:49.265 25.00000% : 1326.323us 00:12:49.265 50.00000% : 1380.936us 00:12:49.265 75.00000% : 1451.153us 00:12:49.265 90.00000% : 1568.182us 00:12:49.265 95.00000% : 1693.012us 00:12:49.265 98.00000% : 1872.456us 00:12:49.265 99.00000% : 2028.494us 00:12:49.265 99.50000% : 2605.835us 00:12:49.265 99.90000% : 5554.953us 00:12:49.265 99.99000% : 6023.067us 00:12:49.265 99.99900% : 6116.689us 00:12:49.265 99.99990% : 6116.689us 00:12:49.265 99.99999% : 6116.689us 00:12:49.265 00:12:49.265 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:12:49.265 ============================================================================== 00:12:49.265 Range in us Cumulative IO count 00:12:49.265 139.459 - 140.434: 0.0011% ( 1) 00:12:49.265 140.434 - 141.409: 0.0044% ( 3) 00:12:49.265 141.409 - 142.385: 0.0055% ( 1) 00:12:49.265 787.992 - 791.893: 0.0066% ( 1) 00:12:49.265 799.695 - 803.596: 0.0111% ( 4) 00:12:49.265 803.596 - 807.497: 0.0133% ( 2) 00:12:49.265 807.497 - 811.398: 0.0199% ( 6) 00:12:49.265 811.398 - 815.299: 0.0255% ( 5) 00:12:49.265 815.299 - 819.199: 0.0332% ( 7) 00:12:49.265 819.199 - 823.100: 0.0377% ( 4) 00:12:49.265 823.100 - 827.001: 0.0410% ( 3) 00:12:49.265 827.001 - 830.902: 0.0432% ( 2) 00:12:49.265 830.902 - 834.803: 0.0454% ( 2) 00:12:49.265 834.803 - 838.704: 0.0476% ( 2) 00:12:49.265 838.704 - 842.605: 0.0487% ( 1) 00:12:49.265 842.605 - 846.506: 0.0498% ( 1) 00:12:49.265 1170.285 - 1178.087: 0.0532% ( 3) 00:12:49.265 1178.087 - 1185.889: 0.0554% ( 2) 00:12:49.265 1185.889 - 1193.691: 0.0576% ( 2) 00:12:49.265 1193.691 - 1201.493: 0.0676% ( 9) 00:12:49.265 1201.493 - 1209.294: 0.0930% ( 23) 00:12:49.265 1209.294 - 1217.096: 0.1617% ( 62) 00:12:49.265 1217.096 - 1224.898: 0.3157% ( 139) 00:12:49.265 1224.898 - 1232.700: 0.5981% ( 255) 00:12:49.265 1232.700 - 1240.502: 1.0722% ( 428) 00:12:49.265 1240.502 - 1248.304: 1.7644% ( 625) 00:12:49.265 1248.304 - 1256.106: 2.7258% ( 868) 00:12:49.265 1256.106 - 1263.908: 4.0306% ( 1178) 00:12:49.265 1263.908 - 1271.710: 5.7606% ( 1562) 00:12:49.265 1271.710 - 1279.512: 7.8895% ( 1922) 00:12:49.265 1279.512 - 1287.313: 10.4148% ( 2280) 00:12:49.265 1287.313 - 1295.115: 13.3134% ( 2617) 00:12:49.265 1295.115 - 1302.917: 16.4645% ( 2845) 00:12:49.265 1302.917 - 1310.719: 19.8017% ( 3013) 00:12:49.265 1310.719 - 
1318.521: 23.2442% ( 3108) 00:12:49.265 1318.521 - 1326.323: 26.7652% ( 3179) 00:12:49.265 1326.323 - 1334.125: 30.3506% ( 3237) 00:12:49.265 1334.125 - 1341.927: 34.0023% ( 3297) 00:12:49.265 1341.927 - 1349.729: 37.6242% ( 3270) 00:12:49.265 1349.729 - 1357.531: 41.2161% ( 3243) 00:12:49.265 1357.531 - 1365.332: 44.8004% ( 3236) 00:12:49.265 1365.332 - 1373.134: 48.3746% ( 3227) 00:12:49.265 1373.134 - 1380.936: 51.8203% ( 3111) 00:12:49.265 1380.936 - 1388.738: 55.1963% ( 3048) 00:12:49.265 1388.738 - 1396.540: 58.4327% ( 2922) 00:12:49.265 1396.540 - 1404.342: 61.4742% ( 2746) 00:12:49.265 1404.342 - 1412.144: 64.3152% ( 2565) 00:12:49.265 1412.144 - 1419.946: 67.0023% ( 2426) 00:12:49.265 1419.946 - 1427.748: 69.5077% ( 2262) 00:12:49.265 1427.748 - 1435.550: 71.8026% ( 2072) 00:12:49.265 1435.550 - 1443.351: 73.8672% ( 1864) 00:12:49.265 1443.351 - 1451.153: 75.6981% ( 1653) 00:12:49.265 1451.153 - 1458.955: 77.4027% ( 1539) 00:12:49.265 1458.955 - 1466.757: 78.9289% ( 1378) 00:12:49.265 1466.757 - 1474.559: 80.3168% ( 1253) 00:12:49.265 1474.559 - 1482.361: 81.6182% ( 1175) 00:12:49.265 1482.361 - 1490.163: 82.8355% ( 1099) 00:12:49.265 1490.163 - 1497.965: 83.9807% ( 1034) 00:12:49.265 1497.965 - 1505.767: 85.0141% ( 933) 00:12:49.265 1505.767 - 1513.569: 85.9368% ( 833) 00:12:49.265 1513.569 - 1521.370: 86.7819% ( 763) 00:12:49.265 1521.370 - 1529.172: 87.5361% ( 681) 00:12:49.265 1529.172 - 1536.974: 88.2483% ( 643) 00:12:49.265 1536.974 - 1544.776: 88.8719% ( 563) 00:12:49.265 1544.776 - 1552.578: 89.4312% ( 505) 00:12:49.265 1552.578 - 1560.380: 89.9585% ( 476) 00:12:49.265 1560.380 - 1568.182: 90.4281% ( 424) 00:12:49.265 1568.182 - 1575.984: 90.8545% ( 385) 00:12:49.265 1575.984 - 1583.786: 91.2632% ( 369) 00:12:49.265 1583.786 - 1591.588: 91.6165% ( 319) 00:12:49.265 1591.588 - 1599.389: 91.9621% ( 312) 00:12:49.265 1599.389 - 1607.191: 92.2623% ( 271) 00:12:49.265 1607.191 - 1614.993: 92.5580% ( 267) 00:12:49.265 1614.993 - 1622.795: 92.8438% ( 258) 00:12:49.265 1622.795 - 1630.597: 93.1317% ( 260) 00:12:49.265 1630.597 - 1638.399: 93.4208% ( 261) 00:12:49.265 1638.399 - 1646.201: 93.6988% ( 251) 00:12:49.265 1646.201 - 1654.003: 93.9469% ( 224) 00:12:49.265 1654.003 - 1661.805: 94.1962% ( 225) 00:12:49.265 1661.805 - 1669.607: 94.4542% ( 233) 00:12:49.265 1669.607 - 1677.408: 94.7101% ( 231) 00:12:49.265 1677.408 - 1685.210: 94.9526% ( 219) 00:12:49.265 1685.210 - 1693.012: 95.1830% ( 208) 00:12:49.265 1693.012 - 1700.814: 95.4034% ( 199) 00:12:49.265 1700.814 - 1708.616: 95.6128% ( 189) 00:12:49.265 1708.616 - 1716.418: 95.8144% ( 182) 00:12:49.265 1716.418 - 1724.220: 96.0182% ( 184) 00:12:49.265 1724.220 - 1732.022: 96.2065% ( 170) 00:12:49.265 1732.022 - 1739.824: 96.3881% ( 164) 00:12:49.265 1739.824 - 1747.626: 96.5487% ( 145) 00:12:49.265 1747.626 - 1755.427: 96.6949% ( 132) 00:12:49.265 1755.427 - 1763.229: 96.8367% ( 128) 00:12:49.265 1763.229 - 1771.031: 96.9696% ( 120) 00:12:49.265 1771.031 - 1778.833: 97.0903% ( 109) 00:12:49.265 1778.833 - 1786.635: 97.2022% ( 101) 00:12:49.265 1786.635 - 1794.437: 97.3074% ( 95) 00:12:49.265 1794.437 - 1802.239: 97.3949% ( 79) 00:12:49.265 1802.239 - 1810.041: 97.4747% ( 72) 00:12:49.265 1810.041 - 1817.843: 97.5544% ( 72) 00:12:49.266 1817.843 - 1825.645: 97.6231% ( 62) 00:12:49.266 1825.645 - 1833.446: 97.6973% ( 67) 00:12:49.266 1833.446 - 1841.248: 97.7660% ( 62) 00:12:49.266 1841.248 - 1849.050: 97.8324% ( 60) 00:12:49.266 1849.050 - 1856.852: 97.8989% ( 60) 00:12:49.266 1856.852 - 1864.654: 97.9653% ( 60) 00:12:49.266 
1864.654 - 1872.456: 98.0351% ( 63) 00:12:49.266 1872.456 - 1880.258: 98.0982% ( 57) 00:12:49.266 1880.258 - 1888.060: 98.1603% ( 56) 00:12:49.266 1888.060 - 1895.862: 98.2256% ( 59) 00:12:49.266 1895.862 - 1903.664: 98.2854% ( 54) 00:12:49.266 1903.664 - 1911.465: 98.3497% ( 58) 00:12:49.266 1911.465 - 1919.267: 98.4117% ( 56) 00:12:49.266 1919.267 - 1927.069: 98.4715% ( 54) 00:12:49.266 1927.069 - 1934.871: 98.5391% ( 61) 00:12:49.266 1934.871 - 1942.673: 98.6033% ( 58) 00:12:49.266 1942.673 - 1950.475: 98.6609% ( 52) 00:12:49.266 1950.475 - 1958.277: 98.7119% ( 46) 00:12:49.266 1958.277 - 1966.079: 98.7584% ( 42) 00:12:49.266 1966.079 - 1973.881: 98.8016% ( 39) 00:12:49.266 1973.881 - 1981.683: 98.8470% ( 41) 00:12:49.266 1981.683 - 1989.484: 98.8846% ( 34) 00:12:49.266 1989.484 - 1997.286: 98.9234% ( 35) 00:12:49.266 1997.286 - 2012.890: 98.9899% ( 60) 00:12:49.266 2012.890 - 2028.494: 99.0486% ( 53) 00:12:49.266 2028.494 - 2044.098: 99.1128% ( 58) 00:12:49.266 2044.098 - 2059.702: 99.1748% ( 56) 00:12:49.266 2059.702 - 2075.305: 99.2225% ( 43) 00:12:49.266 2075.305 - 2090.909: 99.2579% ( 32) 00:12:49.266 2090.909 - 2106.513: 99.2801% ( 20) 00:12:49.266 2106.513 - 2122.117: 99.2900% ( 9) 00:12:49.266 2122.117 - 2137.721: 99.2945% ( 4) 00:12:49.266 2137.721 - 2153.324: 99.2978% ( 3) 00:12:49.266 2153.324 - 2168.928: 99.3022% ( 4) 00:12:49.266 2168.928 - 2184.532: 99.3055% ( 3) 00:12:49.266 2184.532 - 2200.136: 99.3100% ( 4) 00:12:49.266 2200.136 - 2215.740: 99.3111% ( 1) 00:12:49.266 2215.740 - 2231.343: 99.3155% ( 4) 00:12:49.266 2231.343 - 2246.947: 99.3288% ( 12) 00:12:49.266 2246.947 - 2262.551: 99.3410% ( 11) 00:12:49.266 2262.551 - 2278.155: 99.3543% ( 12) 00:12:49.266 2278.155 - 2293.759: 99.3676% ( 12) 00:12:49.266 2293.759 - 2309.362: 99.3808% ( 12) 00:12:49.266 2309.362 - 2324.966: 99.3941% ( 12) 00:12:49.266 2324.966 - 2340.570: 99.4074% ( 12) 00:12:49.266 2340.570 - 2356.174: 99.4218% ( 13) 00:12:49.266 2356.174 - 2371.778: 99.4329% ( 10) 00:12:49.266 2496.608 - 2512.212: 99.4362% ( 3) 00:12:49.266 2512.212 - 2527.816: 99.4462% ( 9) 00:12:49.266 2527.816 - 2543.419: 99.4562% ( 9) 00:12:49.266 2543.419 - 2559.023: 99.4672% ( 10) 00:12:49.266 2559.023 - 2574.627: 99.4783% ( 10) 00:12:49.266 2574.627 - 2590.231: 99.4883% ( 9) 00:12:49.266 2590.231 - 2605.835: 99.5016% ( 12) 00:12:49.266 2605.835 - 2621.438: 99.5226% ( 19) 00:12:49.266 2621.438 - 2637.042: 99.5448% ( 20) 00:12:49.266 2637.042 - 2652.646: 99.5559% ( 10) 00:12:49.266 2652.646 - 2668.250: 99.5658% ( 9) 00:12:49.266 2668.250 - 2683.854: 99.5702% ( 4) 00:12:49.266 2683.854 - 2699.457: 99.5835% ( 12) 00:12:49.266 2699.457 - 2715.061: 99.5957% ( 11) 00:12:49.266 2715.061 - 2730.665: 99.5990% ( 3) 00:12:49.266 2777.476 - 2793.080: 99.6046% ( 5) 00:12:49.266 2793.080 - 2808.684: 99.6168% ( 11) 00:12:49.266 2808.684 - 2824.288: 99.6278% ( 10) 00:12:49.266 2824.288 - 2839.892: 99.6378% ( 9) 00:12:49.266 2839.892 - 2855.495: 99.6478% ( 9) 00:12:49.266 2855.495 - 2871.099: 99.6622% ( 13) 00:12:49.266 2871.099 - 2886.703: 99.6810% ( 17) 00:12:49.266 2886.703 - 2902.307: 99.7109% ( 27) 00:12:49.266 2902.307 - 2917.911: 99.7375% ( 24) 00:12:49.266 2917.911 - 2933.514: 99.7408% ( 3) 00:12:49.266 3105.156 - 3120.760: 99.7453% ( 4) 00:12:49.266 3120.760 - 3136.364: 99.7475% ( 2) 00:12:49.266 3136.364 - 3151.967: 99.7585% ( 10) 00:12:49.266 3151.967 - 3167.571: 99.7685% ( 9) 00:12:49.266 3167.571 - 3183.175: 99.7796% ( 10) 00:12:49.266 3183.175 - 3198.779: 99.7907% ( 10) 00:12:49.266 3198.779 - 3214.383: 99.8017% ( 10) 00:12:49.266 
3214.383 - 3229.986: 99.8106% ( 8) 00:12:49.266 3229.986 - 3245.590: 99.8172% ( 6) 00:12:49.266 3245.590 - 3261.194: 99.8250% ( 7) 00:12:49.266 3261.194 - 3276.798: 99.8328% ( 7) 00:12:49.266 3276.798 - 3292.402: 99.8405% ( 7) 00:12:49.266 3292.402 - 3308.005: 99.8494% ( 8) 00:12:49.266 3308.005 - 3323.609: 99.8571% ( 7) 00:12:49.266 3323.609 - 3339.213: 99.8582% ( 1) 00:12:49.266 5118.046 - 5149.254: 99.8604% ( 2) 00:12:49.266 5149.254 - 5180.461: 99.8660% ( 5) 00:12:49.266 5180.461 - 5211.669: 99.8715% ( 5) 00:12:49.266 5211.669 - 5242.877: 99.8771% ( 5) 00:12:49.266 5242.877 - 5274.084: 99.8782% ( 1) 00:12:49.266 5398.915 - 5430.122: 99.8815% ( 3) 00:12:49.266 5430.122 - 5461.330: 99.8881% ( 6) 00:12:49.266 5461.330 - 5492.537: 99.8937% ( 5) 00:12:49.266 5492.537 - 5523.745: 99.8992% ( 5) 00:12:49.266 5523.745 - 5554.953: 99.9059% ( 6) 00:12:49.266 5554.953 - 5586.160: 99.9103% ( 4) 00:12:49.266 5586.160 - 5617.368: 99.9147% ( 4) 00:12:49.266 5617.368 - 5648.575: 99.9214% ( 6) 00:12:49.266 5648.575 - 5679.783: 99.9269% ( 5) 00:12:49.266 5679.783 - 5710.991: 99.9335% ( 6) 00:12:49.266 5710.991 - 5742.198: 99.9391% ( 5) 00:12:49.266 5742.198 - 5773.406: 99.9435% ( 4) 00:12:49.266 5773.406 - 5804.613: 99.9502% ( 6) 00:12:49.266 5804.613 - 5835.821: 99.9557% ( 5) 00:12:49.266 5835.821 - 5867.029: 99.9612% ( 5) 00:12:49.266 5867.029 - 5898.236: 99.9668% ( 5) 00:12:49.266 5898.236 - 5929.444: 99.9723% ( 5) 00:12:49.266 5929.444 - 5960.651: 99.9778% ( 5) 00:12:49.266 5960.651 - 5991.859: 99.9834% ( 5) 00:12:49.266 5991.859 - 6023.067: 99.9900% ( 6) 00:12:49.266 6023.067 - 6054.274: 99.9956% ( 5) 00:12:49.266 6054.274 - 6085.482: 99.9989% ( 3) 00:12:49.266 6085.482 - 6116.689: 100.0000% ( 1) 00:12:49.266 00:12:49.266 19:13:26 -- nvme/nvme.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:12:49.834 EAL: TSC is not safe to use in SMP mode 00:12:49.834 EAL: TSC is not invariant 00:12:49.834 [2024-02-14 19:13:27.250727] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:51.206 Initializing NVMe Controllers 00:12:51.206 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:12:51.206 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:12:51.206 Initialization complete. Launching workers. 
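The second perf pass just launched repeats the run with -w write at the same queue depth and I/O size; the same IOPS-to-MiB/s check applies to its summary below (82102.64 IOPS at 12 KiB works out to about 962.14 MiB/s, matching the reported figure).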
00:12:51.207 ======================================================== 00:12:51.207 Latency(us) 00:12:51.207 Device Information : IOPS MiB/s Average min max 00:12:51.207 PCIE (0000:00:06.0) NSID 1 from core 0: 82102.64 962.14 1559.20 698.50 9974.41 00:12:51.207 ======================================================== 00:12:51.207 Total : 82102.64 962.14 1559.20 698.50 9974.41 00:12:51.207 00:12:51.207 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:12:51.207 ================================================================================= 00:12:51.207 1.00000% : 1100.068us 00:12:51.207 10.00000% : 1341.927us 00:12:51.207 25.00000% : 1435.550us 00:12:51.207 50.00000% : 1529.172us 00:12:51.207 75.00000% : 1622.795us 00:12:51.207 90.00000% : 1778.833us 00:12:51.207 95.00000% : 1997.286us 00:12:51.207 98.00000% : 2231.343us 00:12:51.207 99.00000% : 2387.381us 00:12:51.207 99.50000% : 2605.835us 00:12:51.207 99.90000% : 4337.856us 00:12:51.207 99.99000% : 9299.864us 00:12:51.207 99.99900% : 9986.432us 00:12:51.207 99.99990% : 9986.432us 00:12:51.207 99.99999% : 9986.432us 00:12:51.207 00:12:51.207 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:12:51.207 ============================================================================== 00:12:51.207 Range in us Cumulative IO count 00:12:51.207 698.270 - 702.171: 0.0012% ( 1) 00:12:51.207 713.874 - 717.775: 0.0024% ( 1) 00:12:51.207 717.775 - 721.676: 0.0061% ( 3) 00:12:51.207 721.676 - 725.577: 0.0122% ( 5) 00:12:51.207 725.577 - 729.478: 0.0207% ( 7) 00:12:51.207 729.478 - 733.379: 0.0243% ( 3) 00:12:51.207 733.379 - 737.280: 0.0268% ( 2) 00:12:51.207 737.280 - 741.180: 0.0329% ( 5) 00:12:51.207 741.180 - 745.081: 0.0475% ( 12) 00:12:51.207 745.081 - 748.982: 0.0523% ( 4) 00:12:51.207 748.982 - 752.883: 0.0560% ( 3) 00:12:51.207 752.883 - 756.784: 0.0572% ( 1) 00:12:51.207 760.685 - 764.586: 0.0584% ( 1) 00:12:51.207 764.586 - 768.487: 0.0596% ( 1) 00:12:51.207 784.091 - 787.992: 0.0621% ( 2) 00:12:51.207 787.992 - 791.893: 0.0645% ( 2) 00:12:51.207 791.893 - 795.794: 0.0694% ( 4) 00:12:51.207 795.794 - 799.695: 0.0840% ( 12) 00:12:51.207 807.497 - 811.398: 0.0852% ( 1) 00:12:51.207 811.398 - 815.299: 0.0925% ( 6) 00:12:51.207 815.299 - 819.199: 0.0962% ( 3) 00:12:51.207 823.100 - 827.001: 0.0974% ( 1) 00:12:51.207 827.001 - 830.902: 0.1010% ( 3) 00:12:51.207 830.902 - 834.803: 0.1035% ( 2) 00:12:51.207 834.803 - 838.704: 0.1047% ( 1) 00:12:51.207 838.704 - 842.605: 0.1083% ( 3) 00:12:51.207 858.209 - 862.110: 0.1095% ( 1) 00:12:51.207 877.714 - 881.615: 0.1108% ( 1) 00:12:51.207 885.516 - 889.417: 0.1327% ( 18) 00:12:51.207 889.417 - 893.318: 0.1363% ( 3) 00:12:51.207 893.318 - 897.218: 0.1509% ( 12) 00:12:51.207 897.218 - 901.119: 0.1534% ( 2) 00:12:51.207 901.119 - 905.020: 0.1570% ( 3) 00:12:51.207 905.020 - 908.921: 0.1631% ( 5) 00:12:51.207 908.921 - 912.822: 0.1789% ( 13) 00:12:51.207 912.822 - 916.723: 0.1838% ( 4) 00:12:51.207 916.723 - 920.624: 0.1850% ( 1) 00:12:51.207 924.525 - 928.426: 0.1874% ( 2) 00:12:51.207 928.426 - 932.327: 0.1935% ( 5) 00:12:51.207 932.327 - 936.228: 0.2033% ( 8) 00:12:51.207 940.129 - 944.030: 0.2069% ( 3) 00:12:51.207 944.030 - 947.931: 0.2081% ( 1) 00:12:51.207 947.931 - 951.832: 0.2239% ( 13) 00:12:51.207 951.832 - 955.733: 0.2300% ( 5) 00:12:51.207 955.733 - 959.634: 0.2446% ( 12) 00:12:51.207 959.634 - 963.535: 0.2532% ( 7) 00:12:51.207 963.535 - 967.436: 0.2605% ( 6) 00:12:51.207 967.436 - 971.337: 0.2690% ( 7) 00:12:51.207 971.337 - 975.237: 0.2787% ( 8) 00:12:51.207 
975.237 - 979.138: 0.3189% ( 33) 00:12:51.207 979.138 - 983.039: 0.3420% ( 19) 00:12:51.207 983.039 - 986.940: 0.3505% ( 7) 00:12:51.207 986.940 - 990.841: 0.3737% ( 19) 00:12:51.207 990.841 - 994.742: 0.3834% ( 8) 00:12:51.207 994.742 - 998.643: 0.3883% ( 4) 00:12:51.207 998.643 - 1006.445: 0.4150% ( 22) 00:12:51.207 1006.445 - 1014.247: 0.4418% ( 22) 00:12:51.466 1014.247 - 1022.049: 0.4710% ( 24) 00:12:51.466 1022.049 - 1029.851: 0.4844% ( 11) 00:12:51.466 1029.851 - 1037.653: 0.5185% ( 28) 00:12:51.466 1037.653 - 1045.455: 0.5465% ( 23) 00:12:51.466 1045.455 - 1053.256: 0.5733% ( 22) 00:12:51.466 1053.256 - 1061.058: 0.6378% ( 53) 00:12:51.466 1061.058 - 1068.860: 0.7120% ( 61) 00:12:51.466 1068.860 - 1076.662: 0.7631% ( 42) 00:12:51.466 1076.662 - 1084.464: 0.8276% ( 53) 00:12:51.466 1084.464 - 1092.266: 0.9469% ( 98) 00:12:51.466 1092.266 - 1100.068: 1.0382% ( 75) 00:12:51.466 1100.068 - 1107.870: 1.1806% ( 117) 00:12:51.466 1107.870 - 1115.672: 1.2914% ( 91) 00:12:51.466 1115.672 - 1123.474: 1.4082% ( 96) 00:12:51.466 1123.474 - 1131.275: 1.5299% ( 100) 00:12:51.466 1131.275 - 1139.077: 1.6054% ( 62) 00:12:51.466 1139.077 - 1146.879: 1.7015% ( 79) 00:12:51.466 1146.879 - 1154.681: 1.8707% ( 139) 00:12:51.466 1154.681 - 1162.483: 1.9583% ( 72) 00:12:51.466 1162.483 - 1170.285: 2.1117% ( 126) 00:12:51.466 1170.285 - 1178.087: 2.2991% ( 154) 00:12:51.466 1178.087 - 1185.889: 2.4464% ( 121) 00:12:51.466 1185.889 - 1193.691: 2.6716% ( 185) 00:12:51.466 1193.691 - 1201.493: 2.8225% ( 124) 00:12:51.466 1201.493 - 1209.294: 3.0111% ( 155) 00:12:51.466 1209.294 - 1217.096: 3.2545% ( 200) 00:12:51.466 1217.096 - 1224.898: 3.4821% ( 187) 00:12:51.466 1224.898 - 1232.700: 3.7329% ( 206) 00:12:51.466 1232.700 - 1240.502: 4.0578% ( 267) 00:12:51.466 1240.502 - 1248.304: 4.3719% ( 258) 00:12:51.466 1248.304 - 1256.106: 4.7358% ( 299) 00:12:51.466 1256.106 - 1263.908: 5.0729% ( 277) 00:12:51.466 1263.908 - 1271.710: 5.4405% ( 302) 00:12:51.466 1271.710 - 1279.512: 5.8835% ( 364) 00:12:51.466 1279.512 - 1287.313: 6.4008% ( 425) 00:12:51.466 1287.313 - 1295.115: 6.8231% ( 347) 00:12:51.466 1295.115 - 1302.917: 7.2953% ( 388) 00:12:51.466 1302.917 - 1310.719: 7.7883% ( 405) 00:12:51.466 1310.719 - 1318.521: 8.3603% ( 470) 00:12:51.466 1318.521 - 1326.323: 8.8800% ( 427) 00:12:51.466 1326.323 - 1334.125: 9.5056% ( 514) 00:12:51.466 1334.125 - 1341.927: 10.1129% ( 499) 00:12:51.466 1341.927 - 1349.729: 10.8201% ( 581) 00:12:51.466 1349.729 - 1357.531: 11.5236% ( 578) 00:12:51.466 1357.531 - 1365.332: 12.4620% ( 771) 00:12:51.466 1365.332 - 1373.134: 13.3103% ( 697) 00:12:51.466 1373.134 - 1380.936: 14.3594% ( 862) 00:12:51.466 1380.936 - 1388.738: 15.8236% ( 1203) 00:12:51.466 1388.738 - 1396.540: 17.1734% ( 1109) 00:12:51.466 1396.540 - 1404.342: 18.5475% ( 1129) 00:12:51.466 1404.342 - 1412.144: 20.1663% ( 1330) 00:12:51.466 1412.144 - 1419.946: 21.7047% ( 1264) 00:12:51.466 1419.946 - 1427.748: 23.4829% ( 1461) 00:12:51.466 1427.748 - 1435.550: 25.3463% ( 1531) 00:12:51.466 1435.550 - 1443.351: 27.1987% ( 1522) 00:12:51.466 1443.351 - 1451.153: 29.4248% ( 1829) 00:12:51.466 1451.153 - 1458.955: 31.1738% ( 1437) 00:12:51.466 1458.955 - 1466.757: 33.3354% ( 1776) 00:12:51.466 1466.757 - 1474.559: 35.4799% ( 1762) 00:12:51.466 1474.559 - 1482.361: 37.7060% ( 1829) 00:12:51.466 1482.361 - 1490.163: 39.9126% ( 1813) 00:12:51.466 1490.163 - 1497.965: 42.3359% ( 1991) 00:12:51.466 1497.965 - 1505.767: 44.7859% ( 2013) 00:12:51.466 1505.767 - 1513.569: 46.8964% ( 1734) 00:12:51.466 1513.569 - 1521.370: 
49.2417% ( 1927) 00:12:51.466 1521.370 - 1529.172: 51.2232% ( 1628) 00:12:51.466 1529.172 - 1536.974: 53.4566% ( 1835) 00:12:51.466 1536.974 - 1544.776: 55.5086% ( 1686) 00:12:51.466 1544.776 - 1552.578: 57.3525% ( 1515) 00:12:51.466 1552.578 - 1560.380: 59.4594% ( 1731) 00:12:51.466 1560.380 - 1568.182: 61.5491% ( 1717) 00:12:51.466 1568.182 - 1575.984: 63.6121% ( 1695) 00:12:51.466 1575.984 - 1583.786: 65.8005% ( 1798) 00:12:51.466 1583.786 - 1591.588: 67.9864% ( 1796) 00:12:51.466 1591.588 - 1599.389: 70.0543% ( 1699) 00:12:51.466 1599.389 - 1607.191: 71.9676% ( 1572) 00:12:51.466 1607.191 - 1614.993: 73.7665% ( 1478) 00:12:51.466 1614.993 - 1622.795: 75.4850% ( 1412) 00:12:51.466 1622.795 - 1630.597: 77.3192% ( 1507) 00:12:51.466 1630.597 - 1638.399: 79.0609% ( 1431) 00:12:51.466 1638.399 - 1646.201: 80.3863% ( 1089) 00:12:51.466 1646.201 - 1654.003: 81.7215% ( 1097) 00:12:51.466 1654.003 - 1661.805: 82.7986% ( 885) 00:12:51.466 1661.805 - 1669.607: 83.8794% ( 888) 00:12:51.466 1669.607 - 1677.408: 84.8348% ( 785) 00:12:51.466 1677.408 - 1685.210: 85.5967% ( 626) 00:12:51.466 1685.210 - 1693.012: 86.2686% ( 552) 00:12:51.466 1693.012 - 1700.814: 86.9039% ( 522) 00:12:51.466 1700.814 - 1708.616: 87.3250% ( 346) 00:12:51.466 1708.616 - 1716.418: 87.7218% ( 326) 00:12:51.466 1716.418 - 1724.220: 88.0930% ( 305) 00:12:51.466 1724.220 - 1732.022: 88.4229% ( 271) 00:12:51.466 1732.022 - 1739.824: 88.7722% ( 287) 00:12:51.466 1739.824 - 1747.626: 89.1081% ( 276) 00:12:51.466 1747.626 - 1755.427: 89.4440% ( 276) 00:12:51.466 1755.427 - 1763.229: 89.7373% ( 241) 00:12:51.466 1763.229 - 1771.031: 89.9917% ( 209) 00:12:51.466 1771.031 - 1778.833: 90.2315% ( 197) 00:12:51.466 1778.833 - 1786.635: 90.4348% ( 167) 00:12:51.466 1786.635 - 1794.437: 90.5918% ( 129) 00:12:51.466 1794.437 - 1802.239: 90.7889% ( 162) 00:12:51.466 1802.239 - 1810.041: 91.0141% ( 185) 00:12:51.466 1810.041 - 1817.843: 91.1504% ( 112) 00:12:51.466 1817.843 - 1825.645: 91.3208% ( 140) 00:12:51.466 1825.645 - 1833.446: 91.4267% ( 87) 00:12:51.466 1833.446 - 1841.248: 91.5095% ( 68) 00:12:51.466 1841.248 - 1849.050: 91.6068% ( 80) 00:12:51.466 1849.050 - 1856.852: 91.7565% ( 123) 00:12:51.466 1856.852 - 1864.654: 91.8746% ( 97) 00:12:51.466 1864.654 - 1872.456: 92.0961% ( 182) 00:12:51.466 1872.456 - 1880.258: 92.2531% ( 129) 00:12:51.466 1880.258 - 1888.060: 92.3906% ( 113) 00:12:51.466 1888.060 - 1895.862: 92.6000% ( 172) 00:12:51.466 1895.862 - 1903.664: 92.7558% ( 128) 00:12:51.466 1903.664 - 1911.465: 92.8751% ( 98) 00:12:51.466 1911.465 - 1919.267: 93.0126% ( 113) 00:12:51.466 1919.267 - 1927.069: 93.2207% ( 171) 00:12:51.466 1927.069 - 1934.871: 93.3972% ( 145) 00:12:51.466 1934.871 - 1942.673: 93.6224% ( 185) 00:12:51.466 1942.673 - 1950.475: 93.8098% ( 154) 00:12:51.466 1950.475 - 1958.277: 94.0605% ( 206) 00:12:51.466 1958.277 - 1966.079: 94.2674% ( 170) 00:12:51.466 1966.079 - 1973.881: 94.5108% ( 200) 00:12:51.466 1973.881 - 1981.683: 94.7080% ( 162) 00:12:51.466 1981.683 - 1989.484: 94.9295% ( 182) 00:12:51.466 1989.484 - 1997.286: 95.0878% ( 130) 00:12:51.466 1997.286 - 2012.890: 95.3969% ( 254) 00:12:51.466 2012.890 - 2028.494: 95.7182% ( 264) 00:12:51.466 2028.494 - 2044.098: 96.0103% ( 240) 00:12:51.466 2044.098 - 2059.702: 96.2099% ( 164) 00:12:51.466 2059.702 - 2075.305: 96.4339% ( 184) 00:12:51.466 2075.305 - 2090.909: 96.6018% ( 138) 00:12:51.466 2090.909 - 2106.513: 96.7674% ( 136) 00:12:51.466 2106.513 - 2122.117: 96.9378% ( 140) 00:12:51.466 2122.117 - 2137.721: 97.1301% ( 158) 00:12:51.466 2137.721 - 
2153.324: 97.3199% ( 156) 00:12:51.466 2153.324 - 2168.928: 97.5195% ( 164) 00:12:51.466 2168.928 - 2184.532: 97.6485% ( 106) 00:12:51.466 2184.532 - 2200.136: 97.7909% ( 117) 00:12:51.466 2200.136 - 2215.740: 97.9845% ( 159) 00:12:51.466 2215.740 - 2231.343: 98.1634% ( 147) 00:12:51.466 2231.343 - 2246.947: 98.3094% ( 120) 00:12:51.466 2246.947 - 2262.551: 98.4567% ( 121) 00:12:51.466 2262.551 - 2278.155: 98.6222% ( 136) 00:12:51.466 2278.155 - 2293.759: 98.7208% ( 81) 00:12:51.466 2293.759 - 2309.362: 98.7890% ( 56) 00:12:51.466 2309.362 - 2324.966: 98.8657% ( 63) 00:12:51.466 2324.966 - 2340.570: 98.9070% ( 34) 00:12:51.466 2340.570 - 2356.174: 98.9399% ( 27) 00:12:51.466 2356.174 - 2371.778: 98.9679% ( 23) 00:12:51.466 2371.778 - 2387.381: 99.0275% ( 49) 00:12:51.466 2387.381 - 2402.985: 99.0494% ( 18) 00:12:51.466 2402.985 - 2418.589: 99.0653% ( 13) 00:12:51.466 2418.589 - 2434.193: 99.1760% ( 91) 00:12:51.466 2434.193 - 2449.797: 99.2235% ( 39) 00:12:51.466 2449.797 - 2465.400: 99.2710% ( 39) 00:12:51.466 2465.400 - 2481.004: 99.3099% ( 32) 00:12:51.466 2481.004 - 2496.608: 99.3671% ( 47) 00:12:51.466 2496.608 - 2512.212: 99.4280% ( 50) 00:12:51.466 2512.212 - 2527.816: 99.4389% ( 9) 00:12:51.466 2527.816 - 2543.419: 99.4438% ( 4) 00:12:51.466 2543.419 - 2559.023: 99.4535% ( 8) 00:12:51.466 2559.023 - 2574.627: 99.4633% ( 8) 00:12:51.466 2574.627 - 2590.231: 99.4791% ( 13) 00:12:51.466 2590.231 - 2605.835: 99.5034% ( 20) 00:12:51.466 2605.835 - 2621.438: 99.5217% ( 15) 00:12:51.466 2621.438 - 2637.042: 99.5253% ( 3) 00:12:51.466 2637.042 - 2652.646: 99.5278% ( 2) 00:12:51.466 2652.646 - 2668.250: 99.5302% ( 2) 00:12:51.467 2668.250 - 2683.854: 99.5387% ( 7) 00:12:51.467 2683.854 - 2699.457: 99.5424% ( 3) 00:12:51.467 2746.269 - 2761.873: 99.5436% ( 1) 00:12:51.467 2808.684 - 2824.288: 99.5448% ( 1) 00:12:51.467 3027.137 - 3042.741: 99.5460% ( 1) 00:12:51.467 3042.741 - 3058.345: 99.5472% ( 1) 00:12:51.467 3073.948 - 3089.552: 99.5558% ( 7) 00:12:51.467 3089.552 - 3105.156: 99.5679% ( 10) 00:12:51.467 3105.156 - 3120.760: 99.5813% ( 11) 00:12:51.467 3120.760 - 3136.364: 99.5935% ( 10) 00:12:51.467 3136.364 - 3151.967: 99.6117% ( 15) 00:12:51.467 3151.967 - 3167.571: 99.6276% ( 13) 00:12:51.467 3167.571 - 3183.175: 99.6397% ( 10) 00:12:51.467 3183.175 - 3198.779: 99.6470% ( 6) 00:12:51.467 3198.779 - 3214.383: 99.6543% ( 6) 00:12:51.467 3214.383 - 3229.986: 99.6616% ( 6) 00:12:51.467 3229.986 - 3245.590: 99.6702% ( 7) 00:12:51.467 3245.590 - 3261.194: 99.6714% ( 1) 00:12:51.467 3339.213 - 3354.817: 99.6762% ( 4) 00:12:51.467 3354.817 - 3370.421: 99.6872% ( 9) 00:12:51.467 3370.421 - 3386.024: 99.6884% ( 1) 00:12:51.467 3479.647 - 3495.251: 99.6896% ( 1) 00:12:51.467 3495.251 - 3510.855: 99.6909% ( 1) 00:12:51.467 3588.874 - 3604.478: 99.6921% ( 1) 00:12:51.467 3620.081 - 3635.685: 99.6933% ( 1) 00:12:51.467 3666.893 - 3682.497: 99.6969% ( 3) 00:12:51.467 3682.497 - 3698.100: 99.7091% ( 10) 00:12:51.467 3698.100 - 3713.704: 99.7176% ( 7) 00:12:51.467 3713.704 - 3729.308: 99.7286% ( 9) 00:12:51.467 3729.308 - 3744.912: 99.7383% ( 8) 00:12:51.467 3744.912 - 3760.516: 99.7517% ( 11) 00:12:51.467 3760.516 - 3776.119: 99.7627% ( 9) 00:12:51.467 3776.119 - 3791.723: 99.7736% ( 9) 00:12:51.467 3791.723 - 3807.327: 99.7785% ( 4) 00:12:51.467 3807.327 - 3822.931: 99.7834% ( 4) 00:12:51.467 3822.931 - 3838.535: 99.7858% ( 2) 00:12:51.467 3838.535 - 3854.138: 99.7882% ( 2) 00:12:51.467 3854.138 - 3869.742: 99.7919% ( 3) 00:12:51.467 3869.742 - 3885.346: 99.7980% ( 5) 00:12:51.467 3885.346 - 
3900.950: 99.8016% ( 3) 00:12:51.467 3900.950 - 3916.554: 99.8077% ( 5) 00:12:51.467 3916.554 - 3932.157: 99.8126% ( 4) 00:12:51.467 3932.157 - 3947.761: 99.8150% ( 2) 00:12:51.467 3947.761 - 3963.365: 99.8211% ( 5) 00:12:51.467 3963.365 - 3978.969: 99.8247% ( 3) 00:12:51.467 3978.969 - 3994.573: 99.8284% ( 3) 00:12:51.467 3994.573 - 4025.780: 99.8369% ( 7) 00:12:51.467 4025.780 - 4056.988: 99.8466% ( 8) 00:12:51.467 4056.988 - 4088.195: 99.8527% ( 5) 00:12:51.467 4088.195 - 4119.403: 99.8588% ( 5) 00:12:51.467 4119.403 - 4150.611: 99.8661% ( 6) 00:12:51.467 4150.611 - 4181.818: 99.8722% ( 5) 00:12:51.467 4181.818 - 4213.026: 99.8759% ( 3) 00:12:51.467 4213.026 - 4244.233: 99.8819% ( 5) 00:12:51.467 4244.233 - 4275.441: 99.8880% ( 5) 00:12:51.467 4275.441 - 4306.649: 99.8953% ( 6) 00:12:51.467 4306.649 - 4337.856: 99.9026% ( 6) 00:12:51.467 4337.856 - 4369.064: 99.9075% ( 4) 00:12:51.467 4369.064 - 4400.271: 99.9099% ( 2) 00:12:51.467 4556.309 - 4587.517: 99.9112% ( 1) 00:12:51.467 4587.517 - 4618.725: 99.9124% ( 1) 00:12:51.467 4930.801 - 4962.008: 99.9136% ( 1) 00:12:51.467 5086.839 - 5118.046: 99.9148% ( 1) 00:12:51.467 5242.877 - 5274.084: 99.9160% ( 1) 00:12:51.467 5274.084 - 5305.292: 99.9172% ( 1) 00:12:51.467 5398.915 - 5430.122: 99.9185% ( 1) 00:12:51.467 5898.236 - 5929.444: 99.9197% ( 1) 00:12:51.467 5991.859 - 6023.067: 99.9209% ( 1) 00:12:51.467 6147.897 - 6179.105: 99.9221% ( 1) 00:12:51.467 6241.520 - 6272.727: 99.9233% ( 1) 00:12:51.467 6303.935 - 6335.143: 99.9245% ( 1) 00:12:51.467 6491.181 - 6522.388: 99.9258% ( 1) 00:12:51.467 6616.011 - 6647.219: 99.9270% ( 1) 00:12:51.467 6709.634 - 6740.841: 99.9282% ( 1) 00:12:51.467 6740.841 - 6772.049: 99.9294% ( 1) 00:12:51.467 6803.257 - 6834.464: 99.9306% ( 1) 00:12:51.467 6896.879 - 6928.087: 99.9367% ( 5) 00:12:51.467 6928.087 - 6959.295: 99.9391% ( 2) 00:12:51.467 7021.710 - 7052.917: 99.9404% ( 1) 00:12:51.467 7084.125 - 7115.333: 99.9452% ( 4) 00:12:51.467 7115.333 - 7146.540: 99.9477% ( 2) 00:12:51.467 7146.540 - 7177.748: 99.9489% ( 1) 00:12:51.467 7271.371 - 7302.578: 99.9550% ( 5) 00:12:51.467 7302.578 - 7333.786: 99.9611% ( 5) 00:12:51.467 7521.031 - 7552.239: 99.9647% ( 3) 00:12:51.467 7583.447 - 7614.654: 99.9659% ( 1) 00:12:51.467 7677.069 - 7708.277: 99.9671% ( 1) 00:12:51.467 7708.277 - 7739.485: 99.9696% ( 2) 00:12:51.467 7739.485 - 7770.692: 99.9708% ( 1) 00:12:51.467 7770.692 - 7801.900: 99.9732% ( 2) 00:12:51.467 7801.900 - 7833.107: 99.9757% ( 2) 00:12:51.467 7833.107 - 7864.315: 99.9769% ( 1) 00:12:51.725 7864.315 - 7895.523: 99.9817% ( 4) 00:12:51.725 8176.391 - 8238.806: 99.9830% ( 1) 00:12:51.726 8550.882 - 8613.297: 99.9854% ( 2) 00:12:51.726 8675.713 - 8738.128: 99.9866% ( 1) 00:12:51.726 8800.543 - 8862.958: 99.9878% ( 1) 00:12:51.726 9050.204 - 9112.619: 99.9890% ( 1) 00:12:51.726 9237.449 - 9299.864: 99.9903% ( 1) 00:12:51.726 9299.864 - 9362.280: 99.9927% ( 2) 00:12:51.726 9362.280 - 9424.695: 99.9951% ( 2) 00:12:51.726 9611.940 - 9674.356: 99.9963% ( 1) 00:12:51.726 9736.771 - 9799.186: 99.9976% ( 1) 00:12:51.726 9799.186 - 9861.601: 99.9988% ( 1) 00:12:51.726 9924.016 - 9986.432: 100.0000% ( 1) 00:12:51.726 00:12:51.726 19:13:28 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:12:51.726 00:12:51.726 real 0m4.291s 00:12:51.726 user 0m2.645s 00:12:51.726 sys 0m1.645s 00:12:51.726 19:13:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:51.726 19:13:28 -- common/autotest_common.sh@10 -- # set +x 00:12:51.726 ************************************ 00:12:51.726 END TEST nvme_perf 00:12:51.726 
************************************ 00:12:51.726 19:13:28 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:51.726 19:13:28 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:12:51.726 19:13:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:51.726 19:13:28 -- common/autotest_common.sh@10 -- # set +x 00:12:51.726 ************************************ 00:12:51.726 START TEST nvme_hello_world 00:12:51.726 ************************************ 00:12:51.726 19:13:28 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:12:52.663 EAL: TSC is not safe to use in SMP mode 00:12:52.663 EAL: TSC is not invariant 00:12:52.663 [2024-02-14 19:13:29.736360] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:52.663 Initializing NVMe Controllers 00:12:52.663 Attaching to 0000:00:06.0 00:12:52.663 Attached to 0000:00:06.0 00:12:52.663 Namespace ID: 1 size: 5GB 00:12:52.663 Initialization complete. 00:12:52.663 INFO: using host memory buffer for IO 00:12:52.663 Hello world! 00:12:52.663 00:12:52.663 real 0m0.833s 00:12:52.663 user 0m0.022s 00:12:52.663 sys 0m0.811s 00:12:52.663 19:13:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:52.663 ************************************ 00:12:52.663 END TEST nvme_hello_world 00:12:52.663 19:13:29 -- common/autotest_common.sh@10 -- # set +x 00:12:52.663 ************************************ 00:12:52.663 19:13:29 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:52.663 19:13:29 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:52.663 19:13:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:52.663 19:13:29 -- common/autotest_common.sh@10 -- # set +x 00:12:52.663 ************************************ 00:12:52.663 START TEST nvme_sgl 00:12:52.663 ************************************ 00:12:52.663 19:13:29 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:12:53.231 EAL: TSC is not safe to use in SMP mode 00:12:53.231 EAL: TSC is not invariant 00:12:53.231 [2024-02-14 19:13:30.623762] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:53.231 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:12:53.231 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:12:53.231 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:12:53.231 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:12:53.231 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:12:53.231 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:12:53.490 NVMe Readv/Writev Request test 00:12:53.490 Attaching to 0000:00:06.0 00:12:53.490 Attached to 0000:00:06.0 00:12:53.490 0000:00:06.0: build_io_request_2 test passed 00:12:53.490 0000:00:06.0: build_io_request_4 test passed 00:12:53.490 0000:00:06.0: build_io_request_5 test passed 00:12:53.490 0000:00:06.0: build_io_request_6 test passed 00:12:53.490 0000:00:06.0: build_io_request_7 test passed 00:12:53.490 0000:00:06.0: build_io_request_10 test passed 00:12:53.490 Cleaning up... 
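The "Invalid IO length parameter" messages in the SGL test above appear to be expected negative-path results: those build_io_request_* cases submit invalid lengths on purpose, while cases 2, 4, 5, 6, 7 and 10 are reported as passed, and the test as a whole succeeds.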
00:12:53.490 00:12:53.490 real 0m0.847s 00:12:53.490 user 0m0.017s 00:12:53.490 sys 0m0.830s 00:12:53.490 19:13:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:53.490 19:13:30 -- common/autotest_common.sh@10 -- # set +x 00:12:53.490 ************************************ 00:12:53.490 END TEST nvme_sgl 00:12:53.490 ************************************ 00:12:53.490 19:13:30 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:53.490 19:13:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:53.490 19:13:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:53.490 19:13:30 -- common/autotest_common.sh@10 -- # set +x 00:12:53.490 ************************************ 00:12:53.490 START TEST nvme_e2edp 00:12:53.490 ************************************ 00:12:53.490 19:13:30 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:12:54.428 EAL: TSC is not safe to use in SMP mode 00:12:54.428 EAL: TSC is not invariant 00:12:54.428 [2024-02-14 19:13:31.498204] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:54.428 NVMe Write/Read with End-to-End data protection test 00:12:54.428 Attaching to 0000:00:06.0 00:12:54.428 Attached to 0000:00:06.0 00:12:54.428 Cleaning up... 00:12:54.428 00:12:54.428 real 0m0.821s 00:12:54.428 user 0m0.016s 00:12:54.428 sys 0m0.804s 00:12:54.428 19:13:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:54.428 19:13:31 -- common/autotest_common.sh@10 -- # set +x 00:12:54.428 ************************************ 00:12:54.428 END TEST nvme_e2edp 00:12:54.428 ************************************ 00:12:54.428 19:13:31 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:54.428 19:13:31 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:54.428 19:13:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:54.428 19:13:31 -- common/autotest_common.sh@10 -- # set +x 00:12:54.428 ************************************ 00:12:54.428 START TEST nvme_reserve 00:12:54.428 ************************************ 00:12:54.428 19:13:31 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:12:54.996 EAL: TSC is not safe to use in SMP mode 00:12:54.996 EAL: TSC is not invariant 00:12:54.996 [2024-02-14 19:13:32.362679] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:55.255 ===================================================== 00:12:55.255 NVMe Controller at PCI bus 0, device 6, function 0 00:12:55.255 ===================================================== 00:12:55.255 Reservations: Not Supported 00:12:55.255 Reservation test passed 00:12:55.255 00:12:55.255 real 0m0.821s 00:12:55.255 user 0m0.024s 00:12:55.255 sys 0m0.796s 00:12:55.255 19:13:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:55.255 19:13:32 -- common/autotest_common.sh@10 -- # set +x 00:12:55.255 ************************************ 00:12:55.255 END TEST nvme_reserve 00:12:55.255 ************************************ 00:12:55.255 19:13:32 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:55.255 19:13:32 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:12:55.255 19:13:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:55.255 19:13:32 -- common/autotest_common.sh@10 -- 
# set +x 00:12:55.256 ************************************ 00:12:55.256 START TEST nvme_err_injection 00:12:55.256 ************************************ 00:12:55.256 19:13:32 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:12:55.823 EAL: TSC is not safe to use in SMP mode 00:12:55.823 EAL: TSC is not invariant 00:12:55.823 [2024-02-14 19:13:33.228885] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:56.081 NVMe Error Injection test 00:12:56.081 Attaching to 0000:00:06.0 00:12:56.081 Attached to 0000:00:06.0 00:12:56.081 0000:00:06.0: get features failed as expected 00:12:56.081 0000:00:06.0: get features successfully as expected 00:12:56.081 0000:00:06.0: read failed as expected 00:12:56.081 0000:00:06.0: read successfully as expected 00:12:56.081 Cleaning up... 00:12:56.081 00:12:56.081 real 0m0.834s 00:12:56.081 user 0m0.008s 00:12:56.081 sys 0m0.825s 00:12:56.081 19:13:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:56.081 19:13:33 -- common/autotest_common.sh@10 -- # set +x 00:12:56.081 ************************************ 00:12:56.081 END TEST nvme_err_injection 00:12:56.081 ************************************ 00:12:56.081 19:13:33 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:56.081 19:13:33 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:12:56.081 19:13:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:56.081 19:13:33 -- common/autotest_common.sh@10 -- # set +x 00:12:56.081 ************************************ 00:12:56.081 START TEST nvme_overhead 00:12:56.081 ************************************ 00:12:56.081 19:13:33 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:12:57.015 EAL: TSC is not safe to use in SMP mode 00:12:57.016 EAL: TSC is not invariant 00:12:57.016 [2024-02-14 19:13:34.093578] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:57.950 Initializing NVMe Controllers 00:12:57.950 Attaching to 0000:00:06.0 00:12:57.950 Attached to 0000:00:06.0 00:12:57.950 Initialization complete. Launching workers. 
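The submit and complete histograms that follow are cumulative: each row reports the fraction of I/Os whose latency fell at or below that microsecond bucket, with the per-bucket sample count in parentheses. A rough way to build such a table from raw per-I/O latencies (this is only an illustrative sketch, not the overhead tool's implementation, and lat_ns.txt is a hypothetical input file):

```bash
# Illustrative stand-in only (not the overhead tool's code): given one
# latency sample in nanoseconds per line in a hypothetical lat_ns.txt,
# print a cumulative table in the same spirit as the histograms below.
sort -n lat_ns.txt | awk '
  { lat[NR] = $1 / 1000 }                        # ns -> us
  END {
    step = int(NR / 20); if (step < 1) step = 1
    for (i = step; i <= NR; i += step)
      printf "%10.3f us: %8.4f%% (%d samples)\n", lat[i], 100 * i / NR, i
  }'
```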
00:12:57.950 submit (in ns) avg, min, max = 10068.0, 8580.9, 27050.5 00:12:57.950 complete (in ns) avg, min, max = 7037.9, 5821.9, 383503.6 00:12:57.950 00:12:57.950 Submit histogram 00:12:57.950 ================ 00:12:57.950 Range in us Cumulative Count 00:12:57.950 8.533 - 8.594: 0.0318% ( 3) 00:12:57.950 8.594 - 8.655: 0.1802% ( 14) 00:12:57.950 8.655 - 8.716: 1.1767% ( 94) 00:12:57.950 8.716 - 8.777: 4.4737% ( 311) 00:12:57.950 8.777 - 8.838: 11.2477% ( 639) 00:12:57.950 8.838 - 8.899: 18.7215% ( 705) 00:12:57.950 8.899 - 8.960: 25.3048% ( 621) 00:12:57.950 8.960 - 9.021: 29.9693% ( 440) 00:12:57.950 9.021 - 9.082: 32.4817% ( 237) 00:12:57.950 9.082 - 9.143: 33.7644% ( 121) 00:12:57.950 9.143 - 9.204: 34.6549% ( 84) 00:12:57.950 9.204 - 9.265: 35.1638% ( 48) 00:12:57.950 9.265 - 9.326: 35.8741% ( 67) 00:12:57.950 9.326 - 9.387: 37.7398% ( 176) 00:12:57.950 9.387 - 9.448: 43.2524% ( 520) 00:12:57.950 9.448 - 9.509: 52.4117% ( 864) 00:12:57.950 9.509 - 9.570: 62.1223% ( 916) 00:12:57.950 9.570 - 9.630: 70.0519% ( 748) 00:12:57.950 9.630 - 9.691: 74.8118% ( 449) 00:12:57.950 9.691 - 9.752: 77.3031% ( 235) 00:12:57.950 9.752 - 9.813: 78.8084% ( 142) 00:12:57.950 9.813 - 9.874: 79.9110% ( 104) 00:12:57.950 9.874 - 9.935: 80.9393% ( 97) 00:12:57.950 9.935 - 9.996: 81.9358% ( 94) 00:12:57.950 9.996 - 10.057: 82.9111% ( 92) 00:12:57.950 10.057 - 10.118: 83.6319% ( 68) 00:12:57.950 10.118 - 10.179: 84.0984% ( 44) 00:12:57.950 10.179 - 10.240: 84.3528% ( 24) 00:12:57.950 10.240 - 10.301: 84.4800% ( 12) 00:12:57.950 10.301 - 10.362: 84.5330% ( 5) 00:12:57.950 10.362 - 10.423: 84.5754% ( 4) 00:12:57.950 10.423 - 10.484: 84.6602% ( 8) 00:12:57.950 10.484 - 10.545: 84.7132% ( 5) 00:12:57.950 10.606 - 10.667: 84.7238% ( 1) 00:12:57.950 10.728 - 10.789: 84.7344% ( 1) 00:12:57.950 10.850 - 10.910: 84.7450% ( 1) 00:12:57.950 10.971 - 11.032: 84.7556% ( 1) 00:12:57.950 11.032 - 11.093: 84.7768% ( 2) 00:12:57.950 11.154 - 11.215: 84.7874% ( 1) 00:12:57.950 11.276 - 11.337: 84.7980% ( 1) 00:12:57.950 11.337 - 11.398: 84.8087% ( 1) 00:12:57.950 11.642 - 11.703: 84.8299% ( 2) 00:12:57.950 11.764 - 11.825: 84.8617% ( 3) 00:12:57.950 11.947 - 12.008: 84.8723% ( 1) 00:12:57.950 12.008 - 12.069: 84.8829% ( 1) 00:12:57.950 12.069 - 12.130: 84.8935% ( 1) 00:12:57.950 12.251 - 12.312: 84.9041% ( 1) 00:12:57.951 12.312 - 12.373: 84.9147% ( 1) 00:12:57.951 12.434 - 12.495: 84.9253% ( 1) 00:12:57.951 12.495 - 12.556: 84.9465% ( 2) 00:12:57.951 12.556 - 12.617: 84.9571% ( 1) 00:12:57.951 12.617 - 12.678: 84.9677% ( 1) 00:12:57.951 12.739 - 12.800: 84.9889% ( 2) 00:12:57.951 12.861 - 12.922: 85.0207% ( 3) 00:12:57.951 12.922 - 12.983: 85.0313% ( 1) 00:12:57.951 12.983 - 13.044: 85.1161% ( 8) 00:12:57.951 13.044 - 13.105: 85.7309% ( 58) 00:12:57.951 13.105 - 13.166: 87.0667% ( 126) 00:12:57.951 13.166 - 13.227: 89.2823% ( 209) 00:12:57.951 13.227 - 13.288: 91.8054% ( 238) 00:12:57.951 13.288 - 13.349: 94.1164% ( 218) 00:12:57.951 13.349 - 13.410: 95.3779% ( 119) 00:12:57.951 13.410 - 13.470: 96.0988% ( 68) 00:12:57.951 13.470 - 13.531: 96.4698% ( 35) 00:12:57.951 13.531 - 13.592: 96.7243% ( 24) 00:12:57.951 13.592 - 13.653: 96.8303% ( 10) 00:12:57.951 13.653 - 13.714: 96.8833% ( 5) 00:12:57.951 13.714 - 13.775: 96.9469% ( 6) 00:12:57.951 13.836 - 13.897: 96.9893% ( 4) 00:12:57.951 13.897 - 13.958: 97.1165% ( 12) 00:12:57.951 13.958 - 14.019: 97.2225% ( 10) 00:12:57.951 14.019 - 14.080: 97.4133% ( 18) 00:12:57.951 14.080 - 14.141: 97.5618% ( 14) 00:12:57.951 14.141 - 14.202: 97.7208% ( 15) 00:12:57.951 14.202 - 14.263: 
97.8056% ( 8) 00:12:57.951 14.263 - 14.324: 97.8268% ( 2) 00:12:57.951 14.324 - 14.385: 97.8692% ( 4) 00:12:57.951 14.385 - 14.446: 97.9222% ( 5) 00:12:57.951 14.446 - 14.507: 97.9540% ( 3) 00:12:57.951 14.568 - 14.629: 97.9752% ( 2) 00:12:57.951 14.629 - 14.690: 97.9858% ( 1) 00:12:57.951 14.690 - 14.750: 97.9964% ( 1) 00:12:57.951 14.750 - 14.811: 98.0070% ( 1) 00:12:57.951 14.811 - 14.872: 98.0176% ( 1) 00:12:57.951 14.872 - 14.933: 98.0282% ( 1) 00:12:57.951 14.933 - 14.994: 98.0494% ( 2) 00:12:57.951 14.994 - 15.055: 98.0706% ( 2) 00:12:57.951 15.116 - 15.177: 98.0812% ( 1) 00:12:57.951 15.177 - 15.238: 98.1024% ( 2) 00:12:57.951 15.238 - 15.299: 98.1448% ( 4) 00:12:57.951 15.299 - 15.360: 98.1660% ( 2) 00:12:57.951 15.360 - 15.421: 98.1872% ( 2) 00:12:57.951 15.543 - 15.604: 98.1978% ( 1) 00:12:57.951 15.604 - 15.726: 98.2190% ( 2) 00:12:57.951 15.848 - 15.970: 98.2508% ( 3) 00:12:57.951 16.091 - 16.213: 98.2614% ( 1) 00:12:57.951 16.213 - 16.335: 98.2720% ( 1) 00:12:57.951 16.457 - 16.579: 98.2826% ( 1) 00:12:57.951 16.579 - 16.701: 98.2932% ( 1) 00:12:57.951 16.701 - 16.823: 98.3038% ( 1) 00:12:57.951 16.823 - 16.945: 98.3144% ( 1) 00:12:57.951 17.067 - 17.189: 98.3250% ( 1) 00:12:57.951 17.310 - 17.432: 98.3462% ( 2) 00:12:57.951 17.432 - 17.554: 98.3674% ( 2) 00:12:57.951 17.554 - 17.676: 98.3992% ( 3) 00:12:57.951 17.676 - 17.798: 98.4098% ( 1) 00:12:57.951 17.920 - 18.042: 98.4204% ( 1) 00:12:57.951 18.042 - 18.164: 98.4310% ( 1) 00:12:57.951 18.286 - 18.408: 98.4416% ( 1) 00:12:57.951 18.408 - 18.530: 98.4522% ( 1) 00:12:57.951 18.773 - 18.895: 98.4840% ( 3) 00:12:57.951 19.017 - 19.139: 98.4946% ( 1) 00:12:57.951 19.139 - 19.261: 98.5264% ( 3) 00:12:57.951 19.505 - 19.627: 98.5371% ( 1) 00:12:57.951 20.358 - 20.480: 98.5477% ( 1) 00:12:57.951 20.602 - 20.724: 98.5689% ( 2) 00:12:57.951 20.724 - 20.846: 98.6113% ( 4) 00:12:57.951 20.846 - 20.968: 98.6431% ( 3) 00:12:57.951 20.968 - 21.090: 98.6643% ( 2) 00:12:57.951 21.090 - 21.211: 98.7279% ( 6) 00:12:57.951 21.211 - 21.333: 98.9187% ( 18) 00:12:57.951 21.333 - 21.455: 99.1625% ( 23) 00:12:57.951 21.455 - 21.577: 99.3215% ( 15) 00:12:57.951 21.577 - 21.699: 99.4911% ( 16) 00:12:57.951 21.699 - 21.821: 99.5972% ( 10) 00:12:57.951 21.821 - 21.943: 99.6714% ( 7) 00:12:57.951 21.943 - 22.065: 99.7138% ( 4) 00:12:57.951 22.065 - 22.187: 99.7244% ( 1) 00:12:57.951 22.187 - 22.309: 99.7456% ( 2) 00:12:57.951 24.503 - 24.625: 99.7668% ( 2) 00:12:57.951 25.112 - 25.234: 99.8092% ( 4) 00:12:57.951 25.234 - 25.356: 99.8304% ( 2) 00:12:57.951 25.356 - 25.478: 99.8834% ( 5) 00:12:57.951 25.478 - 25.600: 99.9046% ( 2) 00:12:57.951 25.600 - 25.722: 99.9258% ( 2) 00:12:57.951 25.722 - 25.844: 99.9470% ( 2) 00:12:57.951 25.844 - 25.966: 99.9788% ( 3) 00:12:57.951 26.210 - 26.331: 99.9894% ( 1) 00:12:57.951 26.941 - 27.063: 100.0000% ( 1) 00:12:57.951 00:12:57.951 Complete histogram 00:12:57.951 ================== 00:12:57.951 Range in us Cumulative Count 00:12:57.951 5.821 - 5.851: 0.4134% ( 39) 00:12:57.951 5.851 - 5.882: 2.6291% ( 209) 00:12:57.951 5.882 - 5.912: 6.2016% ( 337) 00:12:57.951 5.912 - 5.943: 10.0392% ( 362) 00:12:57.951 5.943 - 5.973: 14.0570% ( 379) 00:12:57.951 5.973 - 6.004: 17.3328% ( 309) 00:12:57.951 6.004 - 6.034: 20.3859% ( 288) 00:12:57.951 6.034 - 6.065: 22.4637% ( 196) 00:12:57.951 6.065 - 6.095: 23.9266% ( 138) 00:12:57.951 6.095 - 6.126: 25.0186% ( 103) 00:12:57.951 6.126 - 6.156: 25.8560% ( 79) 00:12:57.951 6.156 - 6.187: 26.6405% ( 74) 00:12:57.951 6.187 - 6.217: 28.3367% ( 160) 00:12:57.951 6.217 - 6.248: 
30.1389% ( 170) 00:12:57.951 6.248 - 6.278: 31.2944% ( 109) 00:12:57.951 6.278 - 6.309: 32.0895% ( 75) 00:12:57.951 6.309 - 6.339: 32.8846% ( 75) 00:12:57.951 6.339 - 6.370: 34.4005% ( 143) 00:12:57.951 6.370 - 6.400: 40.1463% ( 542) 00:12:57.951 6.400 - 6.430: 47.1748% ( 663) 00:12:57.951 6.430 - 6.461: 54.1821% ( 661) 00:12:57.951 6.461 - 6.491: 58.6452% ( 421) 00:12:57.951 6.491 - 6.522: 62.0375% ( 320) 00:12:57.951 6.522 - 6.552: 64.4016% ( 223) 00:12:57.951 6.552 - 6.583: 66.4688% ( 195) 00:12:57.951 6.583 - 6.613: 68.0059% ( 145) 00:12:57.951 6.613 - 6.644: 68.9494% ( 89) 00:12:57.951 6.644 - 6.674: 69.6067% ( 62) 00:12:57.951 6.674 - 6.705: 70.2746% ( 63) 00:12:57.951 6.705 - 6.735: 71.0909% ( 77) 00:12:57.951 6.735 - 6.766: 72.9036% ( 171) 00:12:57.951 6.766 - 6.796: 75.2359% ( 220) 00:12:57.951 6.796 - 6.827: 77.3455% ( 199) 00:12:57.951 6.827 - 6.857: 78.7978% ( 137) 00:12:57.951 6.857 - 6.888: 79.9958% ( 113) 00:12:57.951 6.888 - 6.918: 80.8756% ( 83) 00:12:57.951 6.918 - 6.949: 81.4693% ( 56) 00:12:57.951 6.949 - 6.979: 82.0100% ( 51) 00:12:57.951 6.979 - 7.010: 82.3704% ( 34) 00:12:57.951 7.010 - 7.040: 82.6566% ( 27) 00:12:57.951 7.040 - 7.070: 83.0171% ( 34) 00:12:57.951 7.070 - 7.101: 83.2079% ( 18) 00:12:57.951 7.101 - 7.131: 83.3881% ( 17) 00:12:57.951 7.131 - 7.162: 83.5789% ( 18) 00:12:57.951 7.162 - 7.192: 83.7803% ( 19) 00:12:57.951 7.192 - 7.223: 83.9924% ( 20) 00:12:57.951 7.223 - 7.253: 84.2574% ( 25) 00:12:57.951 7.253 - 7.284: 84.3740% ( 11) 00:12:57.951 7.284 - 7.314: 84.4588% ( 8) 00:12:57.951 7.314 - 7.345: 84.5436% ( 8) 00:12:57.951 7.345 - 7.375: 84.6178% ( 7) 00:12:57.951 7.375 - 7.406: 84.7132% ( 9) 00:12:57.951 7.406 - 7.436: 84.7450% ( 3) 00:12:57.951 7.436 - 7.467: 84.7768% ( 3) 00:12:57.951 7.467 - 7.497: 84.8299% ( 5) 00:12:57.951 7.497 - 7.528: 84.8405% ( 1) 00:12:57.951 7.528 - 7.558: 84.8511% ( 1) 00:12:57.951 7.589 - 7.619: 84.8617% ( 1) 00:12:57.951 7.710 - 7.741: 84.8723% ( 1) 00:12:57.951 7.741 - 7.771: 84.8829% ( 1) 00:12:57.951 7.863 - 7.924: 84.8935% ( 1) 00:12:57.951 7.985 - 8.046: 84.9041% ( 1) 00:12:57.951 8.046 - 8.107: 84.9147% ( 1) 00:12:57.951 8.107 - 8.168: 84.9253% ( 1) 00:12:57.951 8.168 - 8.229: 84.9359% ( 1) 00:12:57.951 8.229 - 8.290: 84.9571% ( 2) 00:12:57.951 8.594 - 8.655: 84.9783% ( 2) 00:12:57.951 8.655 - 8.716: 84.9889% ( 1) 00:12:57.951 8.777 - 8.838: 84.9995% ( 1) 00:12:57.951 8.838 - 8.899: 85.0101% ( 1) 00:12:57.951 8.899 - 8.960: 85.0207% ( 1) 00:12:57.951 9.204 - 9.265: 85.2857% ( 25) 00:12:57.951 9.265 - 9.326: 87.7664% ( 234) 00:12:57.951 9.326 - 9.387: 92.1552% ( 414) 00:12:57.951 9.387 - 9.448: 95.5687% ( 322) 00:12:57.951 9.448 - 9.509: 96.7349% ( 110) 00:12:57.951 9.509 - 9.570: 97.2437% ( 48) 00:12:57.951 9.570 - 9.630: 97.4133% ( 16) 00:12:57.951 9.630 - 9.691: 97.4981% ( 8) 00:12:57.951 9.691 - 9.752: 97.5618% ( 6) 00:12:57.951 9.752 - 9.813: 97.6254% ( 6) 00:12:57.951 9.813 - 9.874: 97.6572% ( 3) 00:12:57.951 9.874 - 9.935: 97.6678% ( 1) 00:12:57.951 9.935 - 9.996: 97.6996% ( 3) 00:12:57.951 9.996 - 10.057: 97.7208% ( 2) 00:12:57.951 10.118 - 10.179: 97.7314% ( 1) 00:12:57.951 10.179 - 10.240: 97.7526% ( 2) 00:12:57.951 10.240 - 10.301: 97.7632% ( 1) 00:12:57.951 10.301 - 10.362: 97.7844% ( 2) 00:12:57.951 10.484 - 10.545: 97.7950% ( 1) 00:12:57.951 10.545 - 10.606: 97.8056% ( 1) 00:12:57.951 10.606 - 10.667: 97.8268% ( 2) 00:12:57.952 10.728 - 10.789: 97.8374% ( 1) 00:12:57.952 10.789 - 10.850: 97.8480% ( 1) 00:12:57.952 10.850 - 10.910: 97.8586% ( 1) 00:12:57.952 10.910 - 10.971: 97.8904% ( 3) 
00:12:57.952 11.154 - 11.215: 97.9222% ( 3) 00:12:57.952 11.337 - 11.398: 97.9434% ( 2) 00:12:57.952 11.398 - 11.459: 97.9646% ( 2) 00:12:57.952 11.520 - 11.581: 97.9752% ( 1) 00:12:57.952 11.581 - 11.642: 97.9858% ( 1) 00:12:57.952 11.642 - 11.703: 98.0176% ( 3) 00:12:57.952 11.764 - 11.825: 98.0388% ( 2) 00:12:57.952 11.825 - 11.886: 98.0494% ( 1) 00:12:57.952 11.886 - 11.947: 98.0706% ( 2) 00:12:57.952 11.947 - 12.008: 98.0812% ( 1) 00:12:57.952 12.008 - 12.069: 98.0918% ( 1) 00:12:57.952 12.069 - 12.130: 98.1024% ( 1) 00:12:57.952 12.130 - 12.190: 98.1342% ( 3) 00:12:57.952 12.190 - 12.251: 98.1554% ( 2) 00:12:57.952 12.312 - 12.373: 98.1660% ( 1) 00:12:57.952 12.434 - 12.495: 98.1766% ( 1) 00:12:57.952 12.617 - 12.678: 98.1872% ( 1) 00:12:57.952 12.678 - 12.739: 98.1978% ( 1) 00:12:57.952 12.739 - 12.800: 98.2084% ( 1) 00:12:57.952 12.800 - 12.861: 98.2190% ( 1) 00:12:57.952 12.922 - 12.983: 98.2296% ( 1) 00:12:57.952 13.044 - 13.105: 98.2402% ( 1) 00:12:57.952 13.166 - 13.227: 98.2614% ( 2) 00:12:57.952 13.227 - 13.288: 98.2720% ( 1) 00:12:57.952 13.410 - 13.470: 98.2826% ( 1) 00:12:57.952 13.470 - 13.531: 98.2932% ( 1) 00:12:57.952 13.836 - 13.897: 98.3038% ( 1) 00:12:57.952 13.958 - 14.019: 98.3144% ( 1) 00:12:57.952 14.324 - 14.385: 98.3250% ( 1) 00:12:57.952 14.507 - 14.568: 98.3356% ( 1) 00:12:57.952 14.690 - 14.750: 98.3568% ( 2) 00:12:57.952 14.750 - 14.811: 98.3780% ( 2) 00:12:57.952 15.055 - 15.116: 98.3886% ( 1) 00:12:57.952 15.299 - 15.360: 98.3992% ( 1) 00:12:57.952 15.421 - 15.482: 98.4310% ( 3) 00:12:57.952 15.482 - 15.543: 98.4416% ( 1) 00:12:57.952 15.543 - 15.604: 98.4522% ( 1) 00:12:57.952 15.726 - 15.848: 98.4628% ( 1) 00:12:57.952 15.970 - 16.091: 98.4734% ( 1) 00:12:57.952 16.213 - 16.335: 98.4946% ( 2) 00:12:57.952 16.335 - 16.457: 98.5158% ( 2) 00:12:57.952 16.457 - 16.579: 98.5371% ( 2) 00:12:57.952 16.579 - 16.701: 98.5477% ( 1) 00:12:57.952 16.823 - 16.945: 98.5583% ( 1) 00:12:57.952 16.945 - 17.067: 98.5689% ( 1) 00:12:57.952 17.432 - 17.554: 98.6007% ( 3) 00:12:57.952 17.554 - 17.676: 98.6431% ( 4) 00:12:57.952 17.676 - 17.798: 98.6537% ( 1) 00:12:57.952 17.798 - 17.920: 98.6855% ( 3) 00:12:57.952 17.920 - 18.042: 98.7173% ( 3) 00:12:57.952 18.042 - 18.164: 98.7809% ( 6) 00:12:57.952 18.164 - 18.286: 98.9399% ( 15) 00:12:57.952 18.286 - 18.408: 99.2473% ( 29) 00:12:57.952 18.408 - 18.530: 99.4699% ( 21) 00:12:57.952 18.530 - 18.651: 99.6184% ( 14) 00:12:57.952 18.651 - 18.773: 99.7244% ( 10) 00:12:57.952 18.895 - 19.017: 99.7774% ( 5) 00:12:57.952 19.017 - 19.139: 99.7880% ( 1) 00:12:57.952 19.261 - 19.383: 99.8092% ( 2) 00:12:57.952 20.480 - 20.602: 99.8198% ( 1) 00:12:57.952 21.455 - 21.577: 99.8304% ( 1) 00:12:57.952 21.821 - 21.943: 99.8410% ( 1) 00:12:57.952 22.065 - 22.187: 99.8516% ( 1) 00:12:57.952 22.552 - 22.674: 99.8622% ( 1) 00:12:57.952 22.796 - 22.918: 99.8834% ( 2) 00:12:57.952 22.918 - 23.040: 99.9152% ( 3) 00:12:57.952 23.162 - 23.284: 99.9258% ( 1) 00:12:57.952 23.284 - 23.406: 99.9364% ( 1) 00:12:57.952 24.869 - 24.990: 99.9470% ( 1) 00:12:57.952 26.088 - 26.210: 99.9576% ( 1) 00:12:57.952 28.038 - 28.160: 99.9682% ( 1) 00:12:57.952 33.646 - 33.890: 99.9788% ( 1) 00:12:57.952 76.069 - 76.556: 99.9894% ( 1) 00:12:57.952 382.293 - 384.244: 100.0000% ( 1) 00:12:57.952 00:12:57.952 00:12:57.952 real 0m1.808s 00:12:57.952 user 0m1.013s 00:12:57.952 sys 0m0.794s 00:12:57.952 19:13:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:57.952 19:13:35 -- common/autotest_common.sh@10 -- # set +x 00:12:57.952 
************************************ 00:12:57.952 END TEST nvme_overhead 00:12:57.952 ************************************ 00:12:57.952 19:13:35 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:57.952 19:13:35 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:12:57.952 19:13:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:57.952 19:13:35 -- common/autotest_common.sh@10 -- # set +x 00:12:57.952 ************************************ 00:12:57.952 START TEST nvme_arbitration 00:12:57.952 ************************************ 00:12:57.952 19:13:35 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:12:58.542 EAL: TSC is not safe to use in SMP mode 00:12:58.542 EAL: TSC is not invariant 00:12:58.542 [2024-02-14 19:13:35.956253] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:13:02.722 Initializing NVMe Controllers 00:13:02.722 Attaching to 0000:00:06.0 00:13:02.722 Attached to 0000:00:06.0 00:13:02.722 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:13:02.722 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:13:02.722 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:13:02.722 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:13:02.722 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:02.722 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:13:02.722 Initialization complete. Launching workers. 00:13:02.722 Starting thread on core 1 with urgent priority queue 00:13:02.722 Starting thread on core 2 with urgent priority queue 00:13:02.722 Starting thread on core 3 with urgent priority queue 00:13:02.722 Starting thread on core 0 with urgent priority queue 00:13:02.722 QEMU NVMe Ctrl (12340 ) core 0: 6002.67 IO/s 16.66 secs/100000 ios 00:13:02.722 QEMU NVMe Ctrl (12340 ) core 1: 6045.00 IO/s 16.54 secs/100000 ios 00:13:02.722 QEMU NVMe Ctrl (12340 ) core 2: 6026.00 IO/s 16.59 secs/100000 ios 00:13:02.722 QEMU NVMe Ctrl (12340 ) core 3: 6038.00 IO/s 16.56 secs/100000 ios 00:13:02.722 ======================================================== 00:13:02.722 00:13:02.722 00:13:02.722 real 0m4.492s 00:13:02.722 user 0m12.718s 00:13:02.722 sys 0m0.833s 00:13:02.722 19:13:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:02.722 19:13:39 -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 ************************************ 00:13:02.722 END TEST nvme_arbitration 00:13:02.722 ************************************ 00:13:02.722 19:13:39 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:13:02.722 19:13:39 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:13:02.722 19:13:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:02.722 19:13:39 -- common/autotest_common.sh@10 -- # set +x 00:13:02.722 ************************************ 00:13:02.722 START TEST nvme_single_aen 00:13:02.722 ************************************ 00:13:02.722 19:13:39 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:13:03.287 EAL: TSC is not safe to use in SMP mode 00:13:03.287 EAL: TSC is not invariant 00:13:03.287 [2024-02-14 19:13:40.478015] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:13:03.287 
Asynchronous Event Request test 00:13:03.287 Attaching to 0000:00:06.0 00:13:03.287 Attached to 0000:00:06.0 00:13:03.288 Reset controller to setup AER completions for this process 00:13:03.288 Registering asynchronous event callbacks... 00:13:03.288 Getting orig temperature thresholds of all controllers 00:13:03.288 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:13:03.288 Setting all controllers temperature threshold low to trigger AER 00:13:03.288 Waiting for all controllers temperature threshold to be set lower 00:13:03.288 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:13:03.288 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:13:03.288 Waiting for all controllers to trigger AER and reset threshold 00:13:03.288 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:13:03.288 Cleaning up... 00:13:03.288 00:13:03.288 real 0m0.834s 00:13:03.288 user 0m0.016s 00:13:03.288 sys 0m0.818s 00:13:03.288 19:13:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:03.288 19:13:40 -- common/autotest_common.sh@10 -- # set +x 00:13:03.288 ************************************ 00:13:03.288 END TEST nvme_single_aen 00:13:03.288 ************************************ 00:13:03.288 19:13:40 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:13:03.288 19:13:40 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:03.288 19:13:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:03.288 19:13:40 -- common/autotest_common.sh@10 -- # set +x 00:13:03.288 ************************************ 00:13:03.288 START TEST nvme_doorbell_aers 00:13:03.288 ************************************ 00:13:03.288 19:13:40 -- common/autotest_common.sh@1102 -- # nvme_doorbell_aers 00:13:03.288 19:13:40 -- nvme/nvme.sh@70 -- # bdfs=() 00:13:03.288 19:13:40 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:13:03.288 19:13:40 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:13:03.288 19:13:40 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:13:03.288 19:13:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:13:03.288 19:13:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:13:03.288 19:13:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:03.288 19:13:40 -- common/autotest_common.sh@1497 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:03.288 19:13:40 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:13:03.288 19:13:40 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:13:03.288 19:13:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 00:13:03.288 19:13:40 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:13:03.288 19:13:40 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /usr/home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:04.222 EAL: TSC is not safe to use in SMP mode 00:13:04.222 EAL: TSC is not invariant 00:13:04.222 [2024-02-14 19:13:41.403805] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:13:04.222 Executing: test_write_invalid_db 00:13:04.222 Waiting for AER completion... 00:13:04.222 Asynchronous Event received. 00:13:04.222 Error Informaton Log Page received. 00:13:04.222 Success: test_write_invalid_db 00:13:04.222 00:13:04.222 Executing: test_invalid_db_write_overflow_sq 00:13:04.222 Waiting for AER completion... 00:13:04.222 Asynchronous Event received. 
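The doorbell/AER loop traced a few lines up enumerates controllers with gen_nvme.sh and jq before giving each one its own ten-second run. Pulled out of the trace into a standalone form (paths assume this run's repo layout), the pattern is:

```bash
# Minimal sketch of the enumeration pattern traced above: gen_nvme.sh emits a
# bdev_nvme attach config and jq pulls out each controller's PCI address.
rootdir=/usr/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
for bdf in "${bdfs[@]}"; do
    # each controller gets its own bounded doorbell/AER run, as in the log
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:${bdf}"
done
```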
00:13:04.222 Error Informaton Log Page received. 00:13:04.222 Success: test_invalid_db_write_overflow_sq 00:13:04.222 00:13:04.222 Executing: test_invalid_db_write_overflow_cq 00:13:04.222 Waiting for AER completion... 00:13:04.222 Asynchronous Event received. 00:13:04.222 Error Informaton Log Page received. 00:13:04.222 Success: test_invalid_db_write_overflow_cq 00:13:04.222 00:13:04.222 00:13:04.222 real 0m0.866s 00:13:04.222 user 0m0.037s 00:13:04.222 sys 0m0.846s 00:13:04.222 ************************************ 00:13:04.222 END TEST nvme_doorbell_aers 00:13:04.222 ************************************ 00:13:04.222 19:13:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:04.222 19:13:41 -- common/autotest_common.sh@10 -- # set +x 00:13:04.222 19:13:41 -- nvme/nvme.sh@97 -- # uname 00:13:04.222 19:13:41 -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:13:04.222 19:13:41 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:04.222 19:13:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:04.222 19:13:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:04.222 19:13:41 -- common/autotest_common.sh@10 -- # set +x 00:13:04.222 ************************************ 00:13:04.222 START TEST bdev_nvme_reset_stuck_adm_cmd 00:13:04.222 ************************************ 00:13:04.222 19:13:41 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:13:04.481 * Looking for test storage... 00:13:04.481 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:13:04.481 19:13:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:13:04.481 19:13:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:13:04.481 19:13:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:13:04.481 19:13:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:13:04.481 19:13:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:13:04.481 19:13:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:13:04.481 19:13:41 -- common/autotest_common.sh@1507 -- # bdfs=() 00:13:04.481 19:13:41 -- common/autotest_common.sh@1507 -- # local bdfs 00:13:04.481 19:13:41 -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:13:04.481 19:13:41 -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:13:04.481 19:13:41 -- common/autotest_common.sh@1496 -- # bdfs=() 00:13:04.481 19:13:41 -- common/autotest_common.sh@1496 -- # local bdfs 00:13:04.481 19:13:41 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:04.481 19:13:41 -- common/autotest_common.sh@1497 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:04.481 19:13:41 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:13:04.481 19:13:41 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:13:04.481 19:13:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 00:13:04.481 19:13:41 -- common/autotest_common.sh@1510 -- # echo 0000:00:06.0 00:13:04.481 19:13:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:13:04.481 19:13:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:13:04.481 19:13:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=55722 00:13:04.481 19:13:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess 
"$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:04.481 19:13:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 55722 00:13:04.481 19:13:41 -- common/autotest_common.sh@817 -- # '[' -z 55722 ']' 00:13:04.481 19:13:41 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:13:04.481 19:13:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.481 19:13:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:04.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.481 19:13:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.481 19:13:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:04.481 19:13:41 -- common/autotest_common.sh@10 -- # set +x 00:13:04.481 [2024-02-14 19:13:41.781829] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:13:04.481 [2024-02-14 19:13:41.782019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:05.417 EAL: TSC is not safe to use in SMP mode 00:13:05.417 EAL: TSC is not invariant 00:13:05.417 [2024-02-14 19:13:42.556162] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.417 [2024-02-14 19:13:42.678145] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:05.417 [2024-02-14 19:13:42.701809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.417 [2024-02-14 19:13:42.702024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.417 [2024-02-14 19:13:42.796304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.417 [2024-02-14 19:13:42.816908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.353 19:13:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:06.353 19:13:43 -- common/autotest_common.sh@850 -- # return 0 00:13:06.353 19:13:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:13:06.353 19:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.353 19:13:43 -- common/autotest_common.sh@10 -- # set +x 00:13:06.353 [2024-02-14 19:13:43.584722] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:13:06.353 nvme0n1 00:13:06.353 19:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.353 19:13:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:13:06.353 19:13:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:13:06.353 19:13:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:13:06.353 19:13:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.353 19:13:43 -- common/autotest_common.sh@10 -- # set +x 00:13:06.353 true 00:13:06.353 19:13:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.353 19:13:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:13:06.353 19:13:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1707938023 00:13:06.353 19:13:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:13:06.353 19:13:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=55732 00:13:06.353 19:13:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:06.353 19:13:43 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:13:08.884 19:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:08.884 19:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:08.884 [2024-02-14 19:13:45.733993] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:13:08.884 [2024-02-14 19:13:45.734147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:13:08.884 [2024-02-14 19:13:45.734161] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:08.884 [2024-02-14 19:13:45.734172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.884 [2024-02-14 19:13:45.735222] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:08.884 19:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:08.884 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 55732 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 55732 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 55732 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:13:08.884 19:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:08.884 19:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:08.884 19:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.U1FrcY 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 
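The base64_decode_bits helper, traced immediately above for the Status Code and continued below for the Status Code Type, unpacks the completion entry that bdev_nvme_send_cmd captured. A self-contained sketch of the same decode (not the test's exact helper) applied to the completion returned in this run:

```bash
# Standalone sketch of the decode traced around this point: the RPC returns
# the 16-byte completion entry base64-encoded; the status half-word sits in
# bytes 14-15, with bit 0 the phase tag, bits 1-8 the Status Code and
# bits 9-11 the Status Code Type.
cpl_b64=AAAAAAAAAAAAAAAAAAACAA==
bytes=($(base64 -d <(printf '%s' "$cpl_b64") | hexdump -ve '/1 "0x%02x\n"'))
status=$(( bytes[15] << 8 | bytes[14] ))
printf 'sc=0x%x sct=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
# prints sc=0x1 sct=0x0 for the completion captured in this run
```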
00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:13:08.884 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.get74o 00:13:08.885 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:13:08.885 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:13:08.885 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:13:08.885 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:13:08.885 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:13:08.885 19:13:45 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 55722 00:13:08.885 19:13:45 -- common/autotest_common.sh@924 -- # '[' -z 55722 ']' 00:13:08.885 19:13:45 -- common/autotest_common.sh@928 -- # kill -0 55722 00:13:08.885 19:13:45 -- common/autotest_common.sh@929 -- # uname 00:13:08.885 19:13:45 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:13:08.885 19:13:45 -- common/autotest_common.sh@932 -- # ps -c -o command 55722 00:13:08.885 19:13:45 -- common/autotest_common.sh@932 -- # tail -1 00:13:08.885 19:13:45 -- common/autotest_common.sh@932 -- # process_name=spdk_tgt 00:13:08.885 19:13:45 -- common/autotest_common.sh@934 -- # '[' spdk_tgt = sudo ']' 00:13:08.885 killing process with pid 55722 00:13:08.885 19:13:45 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 55722' 00:13:08.885 19:13:45 -- common/autotest_common.sh@943 -- # kill 55722 00:13:08.885 19:13:45 -- common/autotest_common.sh@948 -- # wait 55722 00:13:08.885 19:13:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:13:08.885 19:13:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:13:08.885 00:13:08.885 real 0m4.638s 00:13:08.885 user 0m14.279s 00:13:08.885 sys 0m1.232s 00:13:08.885 19:13:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:08.885 ************************************ 00:13:08.885 END TEST bdev_nvme_reset_stuck_adm_cmd 00:13:08.885 ************************************ 00:13:08.885 19:13:46 -- common/autotest_common.sh@10 -- # set +x 00:13:08.885 19:13:46 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:13:08.885 19:13:46 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:13:08.885 19:13:46 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:08.885 19:13:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:08.885 19:13:46 -- common/autotest_common.sh@10 -- # set +x 00:13:08.885 ************************************ 00:13:08.885 START TEST nvme_fio 00:13:08.885 ************************************ 00:13:08.885 19:13:46 -- common/autotest_common.sh@1102 -- # nvme_fio_test 00:13:08.885 19:13:46 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/usr/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:13:08.885 19:13:46 -- nvme/nvme.sh@32 -- # ran_fio=false 00:13:08.885 19:13:46 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:13:08.885 19:13:46 -- common/autotest_common.sh@1496 -- # bdfs=() 00:13:08.885 19:13:46 -- common/autotest_common.sh@1496 -- # local bdfs 00:13:08.885 19:13:46 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:08.885 
19:13:46 -- common/autotest_common.sh@1497 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:08.885 19:13:46 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:13:08.885 19:13:46 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:13:08.885 19:13:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 00:13:08.885 19:13:46 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:13:08.885 19:13:46 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:13:08.885 19:13:46 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:13:08.885 19:13:46 -- nvme/nvme.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:08.885 19:13:46 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:13:09.821 EAL: TSC is not safe to use in SMP mode 00:13:09.821 EAL: TSC is not invariant 00:13:09.821 [2024-02-14 19:13:47.005497] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:13:09.821 19:13:47 -- nvme/nvme.sh@38 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:09.821 19:13:47 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:13:10.388 EAL: TSC is not safe to use in SMP mode 00:13:10.388 EAL: TSC is not invariant 00:13:10.388 [2024-02-14 19:13:47.799973] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:13:10.647 19:13:47 -- nvme/nvme.sh@41 -- # bs=4096 00:13:10.647 19:13:47 -- nvme/nvme.sh@43 -- # fio_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:13:10.647 19:13:47 -- common/autotest_common.sh@1337 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:13:10.647 19:13:47 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:13:10.647 19:13:47 -- common/autotest_common.sh@1316 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:10.647 19:13:47 -- common/autotest_common.sh@1316 -- # local sanitizers 00:13:10.647 19:13:47 -- common/autotest_common.sh@1317 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:10.647 19:13:47 -- common/autotest_common.sh@1318 -- # shift 00:13:10.647 19:13:47 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:13:10.647 19:13:47 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:13:10.647 19:13:47 -- common/autotest_common.sh@1322 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:10.647 19:13:47 -- common/autotest_common.sh@1322 -- # grep libasan 00:13:10.647 19:13:47 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:13:10.647 19:13:47 -- common/autotest_common.sh@1322 -- # asan_lib= 00:13:10.647 19:13:47 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:13:10.647 19:13:47 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:13:10.647 19:13:47 -- common/autotest_common.sh@1322 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:13:10.647 19:13:47 -- common/autotest_common.sh@1322 -- # grep libclang_rt.asan 00:13:10.647 19:13:47 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:13:10.647 19:13:47 -- common/autotest_common.sh@1322 -- # asan_lib= 00:13:10.647 19:13:47 -- common/autotest_common.sh@1323 -- # [[ -n '' ]] 00:13:10.647 19:13:47 -- common/autotest_common.sh@1329 -- # 
LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:13:10.647 19:13:47 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:13:10.647 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:13:10.647 fio-3.35 00:13:10.647 Starting 1 thread 00:13:11.610 EAL: TSC is not safe to use in SMP mode 00:13:11.610 EAL: TSC is not invariant 00:13:11.610 [2024-02-14 19:13:48.714761] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:13:15.824 00:13:15.824 test: (groupid=0, jobs=1): err= 0: pid=102851: Wed Feb 14 19:13:52 2024 00:13:15.824 read: IOPS=47.6k, BW=186MiB/s (195MB/s)(372MiB/2001msec) 00:13:15.824 slat (nsec): min=390, max=44156, avg=509.30, stdev=291.51 00:13:15.824 clat (usec): min=274, max=4397, avg=1344.96, stdev=208.80 00:13:15.824 lat (usec): min=275, max=4442, avg=1345.47, stdev=208.84 00:13:15.824 clat percentiles (usec): 00:13:15.824 | 1.00th=[ 1004], 5.00th=[ 1123], 10.00th=[ 1156], 20.00th=[ 1205], 00:13:15.824 | 30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1336], 60.00th=[ 1369], 00:13:15.824 | 70.00th=[ 1418], 80.00th=[ 1450], 90.00th=[ 1516], 95.00th=[ 1598], 00:13:15.824 | 99.00th=[ 2089], 99.50th=[ 2442], 99.90th=[ 3359], 99.95th=[ 3720], 00:13:15.824 | 99.99th=[ 4293] 00:13:15.824 bw ( KiB/s): min=180329, max=194710, per=99.77%, avg=189894.00, stdev=8283.60, samples=3 00:13:15.824 iops : min=45082, max=48677, avg=47473.00, stdev=2070.68, samples=3 00:13:15.824 write: IOPS=47.5k, BW=185MiB/s (194MB/s)(371MiB/2001msec); 0 zone resets 00:13:15.824 slat (nsec): min=412, max=16957, avg=851.64, stdev=368.47 00:13:15.824 clat (usec): min=266, max=4372, avg=1344.40, stdev=210.08 00:13:15.824 lat (usec): min=269, max=4377, avg=1345.25, stdev=210.13 00:13:15.824 clat percentiles (usec): 00:13:15.824 | 1.00th=[ 996], 5.00th=[ 1123], 10.00th=[ 1156], 20.00th=[ 1205], 00:13:15.824 | 30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1336], 60.00th=[ 1369], 00:13:15.824 | 70.00th=[ 1418], 80.00th=[ 1450], 90.00th=[ 1516], 95.00th=[ 1598], 00:13:15.824 | 99.00th=[ 2057], 99.50th=[ 2442], 99.90th=[ 3458], 99.95th=[ 3818], 00:13:15.824 | 99.99th=[ 4293] 00:13:15.824 bw ( KiB/s): min=179011, max=193629, per=99.38%, avg=188673.00, stdev=8368.47, samples=3 00:13:15.824 iops : min=44752, max=48407, avg=47167.67, stdev=2092.27, samples=3 00:13:15.824 lat (usec) : 500=0.08%, 750=0.25%, 1000=0.66% 00:13:15.824 lat (msec) : 2=97.81%, 4=1.17%, 10=0.03% 00:13:15.824 cpu : usr=100.00%, sys=0.00%, ctx=23, majf=0, minf=3 00:13:15.824 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:13:15.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:15.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:15.824 issued rwts: total=95211,94974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:15.824 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:15.824 00:13:15.824 Run status group 0 (all jobs): 00:13:15.824 READ: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=372MiB (390MB), run=2001-2001msec 00:13:15.824 WRITE: bw=185MiB/s (194MB/s), 185MiB/s-185MiB/s (194MB/s-194MB/s), io=371MiB (389MB), run=2001-2001msec 00:13:15.824 19:13:53 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:15.824 19:13:53 -- nvme/nvme.sh@46 -- # true 00:13:15.824 00:13:15.824 real 0m6.902s 00:13:15.824 user 
0m3.923s 00:13:15.824 sys 0m2.908s 00:13:15.824 19:13:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:15.824 19:13:53 -- common/autotest_common.sh@10 -- # set +x 00:13:15.824 ************************************ 00:13:15.824 END TEST nvme_fio 00:13:15.825 ************************************ 00:13:15.825 00:13:15.825 real 0m31.469s 00:13:15.825 user 0m35.023s 00:13:15.825 sys 0m16.284s 00:13:15.825 19:13:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:15.825 19:13:53 -- common/autotest_common.sh@10 -- # set +x 00:13:15.825 ************************************ 00:13:15.825 END TEST nvme 00:13:15.825 ************************************ 00:13:15.825 19:13:53 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:13:15.825 19:13:53 -- spdk/autotest.sh@227 -- # run_test nvme_scc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:15.825 19:13:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:15.825 19:13:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:15.825 19:13:53 -- common/autotest_common.sh@10 -- # set +x 00:13:15.825 ************************************ 00:13:15.825 START TEST nvme_scc 00:13:15.825 ************************************ 00:13:15.825 19:13:53 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:16.083 * Looking for test storage... 00:13:16.083 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:13:16.083 19:13:53 -- cuse/common.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:16.083 19:13:53 -- nvme/functions.sh@7 -- # dirname /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:16.083 19:13:53 -- nvme/functions.sh@7 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:16.083 19:13:53 -- nvme/functions.sh@7 -- # rootdir=/usr/home/vagrant/spdk_repo/spdk 00:13:16.083 19:13:53 -- nvme/functions.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:16.083 19:13:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.083 19:13:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.083 19:13:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.083 19:13:53 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:13:16.084 19:13:53 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:13:16.084 19:13:53 -- paths/export.sh@4 -- # export PATH 00:13:16.084 19:13:53 -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:13:16.084 19:13:53 -- nvme/functions.sh@10 -- # ctrls=() 00:13:16.084 19:13:53 -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:16.084 19:13:53 -- nvme/functions.sh@11 -- # nvmes=() 00:13:16.084 19:13:53 -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:16.084 19:13:53 -- nvme/functions.sh@12 -- # bdfs=() 00:13:16.084 19:13:53 -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:16.084 19:13:53 -- nvme/functions.sh@13 -- # 
ordered_ctrls=() 00:13:16.084 19:13:53 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:16.084 19:13:53 -- nvme/functions.sh@14 -- # nvme_name= 00:13:16.084 19:13:53 -- cuse/common.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:16.084 19:13:53 -- nvme/nvme_scc.sh@12 -- # uname 00:13:16.084 19:13:53 -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:13:16.084 19:13:53 -- nvme/nvme_scc.sh@12 -- # exit 0 00:13:16.084 00:13:16.084 real 0m0.189s 00:13:16.084 user 0m0.093s 00:13:16.084 sys 0m0.180s 00:13:16.084 19:13:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:16.084 19:13:53 -- common/autotest_common.sh@10 -- # set +x 00:13:16.084 ************************************ 00:13:16.084 END TEST nvme_scc 00:13:16.084 ************************************ 00:13:16.084 19:13:53 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:13:16.084 19:13:53 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:13:16.084 19:13:53 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:13:16.084 19:13:53 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:13:16.084 19:13:53 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:13:16.084 19:13:53 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:16.084 19:13:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:16.084 19:13:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:16.084 19:13:53 -- common/autotest_common.sh@10 -- # set +x 00:13:16.084 ************************************ 00:13:16.084 START TEST nvme_rpc 00:13:16.084 ************************************ 00:13:16.084 19:13:53 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:16.343 * Looking for test storage... 00:13:16.343 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:13:16.343 19:13:53 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:16.343 19:13:53 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:16.343 19:13:53 -- common/autotest_common.sh@1507 -- # bdfs=() 00:13:16.343 19:13:53 -- common/autotest_common.sh@1507 -- # local bdfs 00:13:16.343 19:13:53 -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:13:16.343 19:13:53 -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:13:16.343 19:13:53 -- common/autotest_common.sh@1496 -- # bdfs=() 00:13:16.343 19:13:53 -- common/autotest_common.sh@1496 -- # local bdfs 00:13:16.343 19:13:53 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:16.343 19:13:53 -- common/autotest_common.sh@1497 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:16.343 19:13:53 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:13:16.343 19:13:53 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:13:16.343 19:13:53 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 00:13:16.343 19:13:53 -- common/autotest_common.sh@1510 -- # echo 0000:00:06.0 00:13:16.343 19:13:53 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:13:16.343 19:13:53 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=55940 00:13:16.343 19:13:53 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:16.343 19:13:53 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 55940 00:13:16.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
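The nvme_rpc test that starts here drives the target entirely over JSON-RPC: start spdk_tgt, wait for its socket, attach the controller by PCI address, exercise an RPC that is expected to fail, then detach and shut down. Condensed into a sketch (same rpc.py calls as in the trace below; the polling loop is only a crude stand-in for the waitforlisten helper):

```bash
# Condensed sketch of the RPC flow this test exercises; bdf and controller
# name match this particular run.
rootdir=/usr/home/vagrant/spdk_repo/spdk
bdf=0000:00:06.0
"$rootdir/build/bin/spdk_tgt" -m 0x3 &
tgt_pid=$!
# crude stand-in for waitforlisten: poll until /var/tmp/spdk.sock answers
while ! "$rootdir/scripts/rpc.py" rpc_get_methods &> /dev/null; do sleep 0.5; done
"$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf"
# ... exercise RPCs against Nvme0n1, e.g. the expected-failure apply_firmware below ...
"$rootdir/scripts/rpc.py" bdev_nvme_detach_controller Nvme0
kill "$tgt_pid"
```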
00:13:16.343 19:13:53 -- nvme/nvme_rpc.sh@15 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:16.343 19:13:53 -- common/autotest_common.sh@817 -- # '[' -z 55940 ']' 00:13:16.343 19:13:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.343 19:13:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:16.343 19:13:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.343 19:13:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:16.343 19:13:53 -- common/autotest_common.sh@10 -- # set +x 00:13:16.343 [2024-02-14 19:13:53.649761] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:13:16.343 [2024-02-14 19:13:53.649993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:17.280 EAL: TSC is not safe to use in SMP mode 00:13:17.280 EAL: TSC is not invariant 00:13:17.280 [2024-02-14 19:13:54.387828] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:17.280 [2024-02-14 19:13:54.517177] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:17.280 [2024-02-14 19:13:54.517439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.280 [2024-02-14 19:13:54.517431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.540 19:13:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:17.540 19:13:54 -- common/autotest_common.sh@850 -- # return 0 00:13:17.540 19:13:54 -- nvme/nvme_rpc.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:13:17.799 [2024-02-14 19:13:55.059680] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:13:17.799 Nvme0n1 00:13:17.799 19:13:55 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:17.799 19:13:55 -- nvme/nvme_rpc.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:18.058 request: 00:13:18.058 { 00:13:18.058 "filename": "non_existing_file", 00:13:18.058 "bdev_name": "Nvme0n1", 00:13:18.058 "method": "bdev_nvme_apply_firmware", 00:13:18.058 "req_id": 1 00:13:18.058 } 00:13:18.058 Got JSON-RPC error response 00:13:18.058 response: 00:13:18.058 { 00:13:18.058 "code": -32603, 00:13:18.058 "message": "open file failed." 
00:13:18.058 } 00:13:18.058 19:13:55 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:18.058 19:13:55 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:18.058 19:13:55 -- nvme/nvme_rpc.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:18.318 19:13:55 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:18.318 19:13:55 -- nvme/nvme_rpc.sh@40 -- # killprocess 55940 00:13:18.318 19:13:55 -- common/autotest_common.sh@924 -- # '[' -z 55940 ']' 00:13:18.318 19:13:55 -- common/autotest_common.sh@928 -- # kill -0 55940 00:13:18.318 19:13:55 -- common/autotest_common.sh@929 -- # uname 00:13:18.318 19:13:55 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:13:18.318 19:13:55 -- common/autotest_common.sh@932 -- # ps -c -o command 55940 00:13:18.318 19:13:55 -- common/autotest_common.sh@932 -- # tail -1 00:13:18.318 19:13:55 -- common/autotest_common.sh@932 -- # process_name=spdk_tgt 00:13:18.318 killing process with pid 55940 00:13:18.318 19:13:55 -- common/autotest_common.sh@934 -- # '[' spdk_tgt = sudo ']' 00:13:18.318 19:13:55 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 55940' 00:13:18.318 19:13:55 -- common/autotest_common.sh@943 -- # kill 55940 00:13:18.318 19:13:55 -- common/autotest_common.sh@948 -- # wait 55940 00:13:18.578 00:13:18.578 real 0m2.509s 00:13:18.578 user 0m4.052s 00:13:18.578 sys 0m1.141s 00:13:18.578 19:13:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:18.578 19:13:55 -- common/autotest_common.sh@10 -- # set +x 00:13:18.578 ************************************ 00:13:18.578 END TEST nvme_rpc 00:13:18.578 ************************************ 00:13:18.578 19:13:55 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:18.578 19:13:55 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:18.578 19:13:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:18.578 19:13:55 -- common/autotest_common.sh@10 -- # set +x 00:13:18.578 ************************************ 00:13:18.578 START TEST nvme_rpc_timeouts 00:13:18.578 ************************************ 00:13:18.578 19:13:55 -- common/autotest_common.sh@1102 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:18.836 * Looking for test storage... 00:13:18.837 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:13:18.837 19:13:56 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:18.837 19:13:56 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_55969 00:13:18.837 19:13:56 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_55969 00:13:18.837 19:13:56 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=55996 00:13:18.837 19:13:56 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:18.837 19:13:56 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 55996 00:13:18.837 19:13:56 -- common/autotest_common.sh@817 -- # '[' -z 55996 ']' 00:13:18.837 19:13:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.837 19:13:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:18.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:18.837 19:13:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.837 19:13:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:18.837 19:13:56 -- common/autotest_common.sh@10 -- # set +x 00:13:18.837 19:13:56 -- nvme/nvme_rpc_timeouts.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:18.837 [2024-02-14 19:13:56.174255] Starting SPDK v24.05-pre git sha1 aa824ae66 / DPDK 23.11.0 initialization... 00:13:18.837 [2024-02-14 19:13:56.174468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:13:19.772 EAL: TSC is not safe to use in SMP mode 00:13:19.772 EAL: TSC is not invariant 00:13:19.772 [2024-02-14 19:13:56.927903] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:19.772 [2024-02-14 19:13:57.039078] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:19.772 [2024-02-14 19:13:57.039381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.772 [2024-02-14 19:13:57.039367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.772 19:13:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:19.772 19:13:57 -- common/autotest_common.sh@850 -- # return 0 00:13:19.772 Checking default timeout settings: 00:13:19.772 19:13:57 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:19.772 19:13:57 -- nvme/nvme_rpc_timeouts.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:20.340 Making settings changes with rpc: 00:13:20.340 19:13:57 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:20.340 19:13:57 -- nvme/nvme_rpc_timeouts.sh@34 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:20.340 Check default vs. modified settings: 00:13:20.340 19:13:57 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:13:20.340 19:13:57 -- nvme/nvme_rpc_timeouts.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_55969 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_55969 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:20.907 Setting action_on_timeout is changed as expected. 
00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_55969 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_55969 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:20.907 Setting timeout_us is changed as expected. 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:13:20.907 19:13:58 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:20.908 19:13:58 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_55969 00:13:20.908 19:13:58 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:20.908 19:13:58 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:20.908 19:13:58 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:20.908 19:13:58 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_55969 00:13:20.908 19:13:58 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:20.908 19:13:58 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:20.908 19:13:58 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:20.908 Setting timeout_admin_us is changed as expected. 00:13:20.908 19:13:58 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:20.908 19:13:58 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:13:20.908 19:13:58 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:20.908 19:13:58 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_55969 /tmp/settings_modified_55969 00:13:20.908 19:13:58 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 55996 00:13:20.908 19:13:58 -- common/autotest_common.sh@924 -- # '[' -z 55996 ']' 00:13:20.908 19:13:58 -- common/autotest_common.sh@928 -- # kill -0 55996 00:13:20.908 19:13:58 -- common/autotest_common.sh@929 -- # uname 00:13:20.908 19:13:58 -- common/autotest_common.sh@929 -- # '[' FreeBSD = Linux ']' 00:13:20.908 19:13:58 -- common/autotest_common.sh@932 -- # ps -c -o command 55996 00:13:20.908 19:13:58 -- common/autotest_common.sh@932 -- # tail -1 00:13:20.908 19:13:58 -- common/autotest_common.sh@932 -- # process_name=spdk_tgt 00:13:20.908 19:13:58 -- common/autotest_common.sh@934 -- # '[' spdk_tgt = sudo ']' 00:13:20.908 killing process with pid 55996 00:13:20.908 19:13:58 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 55996' 00:13:20.908 19:13:58 -- common/autotest_common.sh@943 -- # kill 55996 00:13:20.908 19:13:58 -- common/autotest_common.sh@948 -- # wait 55996 00:13:21.166 RPC TIMEOUT SETTING TEST PASSED. 00:13:21.166 19:13:58 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
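The timeout check traced above reduces to one pattern: save the configuration before and after bdev_nvme_set_options, then compare the action_on_timeout, timeout_us and timeout_admin_us fields. A condensed sketch of that pattern, reconstructed from the commands in the trace (repo path, temp-file names and values are taken from this run; the combined script form is an illustration, not part of the console output):

rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_55969               # defaults from a freshly started spdk_tgt
$rpc bdev_nvme_set_options --timeout-us=12000000 \
     --timeout-admin-us=24000000 --action-on-timeout=abort   # modify the three settings under test
$rpc save_config > /tmp/settings_modified_55969              # configuration after the change
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_55969  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$( grep "$setting" /tmp/settings_modified_55969 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
done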
00:13:21.166 00:13:21.166 real 0m2.451s 00:13:21.166 user 0m3.925s 00:13:21.166 sys 0m1.103s 00:13:21.166 ************************************ 00:13:21.166 END TEST nvme_rpc_timeouts 00:13:21.166 ************************************ 00:13:21.166 19:13:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:21.166 19:13:58 -- common/autotest_common.sh@10 -- # set +x 00:13:21.166 19:13:58 -- spdk/autotest.sh@251 -- # '[' 0 -eq 0 ']' 00:13:21.166 19:13:58 -- spdk/autotest.sh@251 -- # uname -s 00:13:21.166 19:13:58 -- spdk/autotest.sh@251 -- # '[' FreeBSD = Linux ']' 00:13:21.166 19:13:58 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:13:21.166 19:13:58 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:13:21.166 19:13:58 -- spdk/autotest.sh@268 -- # timing_exit lib 00:13:21.166 19:13:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:21.166 19:13:58 -- common/autotest_common.sh@10 -- # set +x 00:13:21.166 19:13:58 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:13:21.166 19:13:58 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:13:21.166 19:13:58 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:13:21.166 19:13:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:13:21.166 19:13:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:13:21.166 19:13:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:13:21.166 19:13:58 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:13:21.166 19:13:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:13:21.166 19:13:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:13:21.166 19:13:58 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:13:21.167 19:13:58 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:13:21.167 19:13:58 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:13:21.167 19:13:58 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:13:21.167 19:13:58 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:13:21.167 19:13:58 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:13:21.167 19:13:58 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:13:21.167 19:13:58 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:13:21.167 19:13:58 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:13:21.167 19:13:58 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:13:21.167 19:13:58 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:13:21.167 19:13:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:21.167 19:13:58 -- common/autotest_common.sh@10 -- # set +x 00:13:21.167 19:13:58 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:13:21.167 19:13:58 -- common/autotest_common.sh@1369 -- # local autotest_es=0 00:13:21.167 19:13:58 -- common/autotest_common.sh@1370 -- # xtrace_disable 00:13:21.167 19:13:58 -- common/autotest_common.sh@10 -- # set +x 00:13:21.734 setup.sh cleanup function not yet supported on FreeBSD 00:13:21.734 19:13:59 -- common/autotest_common.sh@1434 -- # return 0 00:13:21.734 19:13:59 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:13:21.734 19:13:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:21.734 19:13:59 -- common/autotest_common.sh@10 -- # set +x 00:13:21.993 19:13:59 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:13:21.993 19:13:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:21.993 19:13:59 -- common/autotest_common.sh@10 -- # set +x 00:13:21.993 19:13:59 -- spdk/autotest.sh@390 -- # chmod a+r /usr/home/vagrant/spdk_repo/spdk/../output/timing.txt 00:13:21.993 19:13:59 -- spdk/autotest.sh@392 -- # [[ -f /usr/home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:13:21.993 19:13:59 -- spdk/autotest.sh@394 -- # hash 
lcov 00:13:21.993 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 394: hash: lcov: not found 00:13:21.993 19:13:59 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:21.993 19:13:59 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:13:21.993 19:13:59 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.993 19:13:59 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.993 19:13:59 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:13:21.993 19:13:59 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:13:21.993 19:13:59 -- paths/export.sh@4 -- $ export PATH 00:13:21.993 19:13:59 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:13:21.993 19:13:59 -- common/autobuild_common.sh@434 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:13:21.993 19:13:59 -- common/autobuild_common.sh@435 -- $ date +%s 00:13:21.993 19:13:59 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1707938039.XXXXXX 00:13:21.993 19:13:59 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1707938039.XXXXXX.5QCZfqdH 00:13:21.993 19:13:59 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:13:21.993 19:13:59 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:13:21.993 19:13:59 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:13:21.993 19:13:59 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:13:21.993 19:13:59 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:13:21.993 19:13:59 -- common/autobuild_common.sh@451 -- $ get_config_params 00:13:21.993 19:13:59 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:13:21.993 19:13:59 -- common/autotest_common.sh@10 -- $ set +x 00:13:22.252 19:13:59 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:13:22.252 19:13:59 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:13:22.252 19:13:59 -- spdk/autopackage.sh@11 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:13:22.252 19:13:59 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:13:22.252 19:13:59 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:13:22.252 19:13:59 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:13:22.253 19:13:59 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:13:22.253 19:13:59 -- common/autotest_common.sh@710 -- $ xtrace_disable 00:13:22.253 19:13:59 -- common/autotest_common.sh@10 -- $ set +x 00:13:22.253 19:13:59 -- spdk/autopackage.sh@25 -- $ get_config_params 00:13:22.253 19:13:59 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:13:22.253 19:13:59 -- spdk/autopackage.sh@25 -- $ sed s/--enable-debug//g 00:13:22.253 19:13:59 -- common/autotest_common.sh@10 -- $ set +x 00:13:22.253 19:13:59 -- 
spdk/autopackage.sh@25 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:13:22.253 19:13:59 -- spdk/autopackage.sh@26 -- $ uname -s 00:13:22.253 19:13:59 -- spdk/autopackage.sh@26 -- $ '[' FreeBSD = Linux ']' 00:13:22.253 19:13:59 -- spdk/autopackage.sh@35 -- $ /usr/home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio 00:13:22.253 Notice: Vhost, rte_vhost library, virtio, and fuse 00:13:22.253 are only supported on Linux. Turning off default feature. 00:13:22.511 Using default SPDK env in /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:22.511 Using default DPDK in /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:22.769 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:13:22.769 Using 'verbs' RDMA provider 00:13:32.740 Configuring ISA-L (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:13:40.852 Configuring ISA-L-crypto (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:13:41.110 Creating mk/config.mk...done. 00:13:41.110 Creating mk/cc.flags.mk...done. 00:13:41.110 Type 'gmake' to build. 00:13:41.110 19:14:18 -- spdk/autopackage.sh@37 -- $ gmake -j10 00:13:41.368 gmake[1]: Nothing to be done for 'all'. 00:13:41.368 ps: stdin: not a terminal 00:13:46.639 The Meson build system 00:13:46.639 Version: 1.3.1 00:13:46.639 Source dir: /usr/home/vagrant/spdk_repo/spdk/dpdk 00:13:46.639 Build dir: /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:13:46.639 Build type: native build 00:13:46.639 Program cat found: YES (/bin/cat) 00:13:46.639 Project name: DPDK 00:13:46.639 Project version: 23.11.0 00:13:46.639 C compiler for the host machine: /usr/bin/clang (clang 14.0.5 "FreeBSD clang version 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386ae247c)") 00:13:46.639 C linker for the host machine: /usr/bin/clang ld.lld 14.0.5 00:13:46.639 Host machine cpu family: x86_64 00:13:46.639 Host machine cpu: x86_64 00:13:46.639 Message: ## Building in Developer Mode ## 00:13:46.639 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:13:46.639 Program check-symbols.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:13:46.639 Program options-ibverbs-static.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:13:46.639 Program python3 found: YES (/usr/local/bin/python3.9) 00:13:46.639 Program cat found: YES (/bin/cat) 00:13:46.639 Compiler for C supports arguments -march=native: YES 00:13:46.639 Checking for size of "void *" : 8 00:13:46.639 Checking for size of "void *" : 8 (cached) 00:13:46.639 Library m found: YES 00:13:46.639 Library numa found: NO 00:13:46.639 Library fdt found: NO 00:13:46.639 Library execinfo found: YES 00:13:46.639 Has header "execinfo.h" : YES 00:13:46.639 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.0.3 00:13:46.639 Run-time dependency libarchive found: NO (tried pkgconfig) 00:13:46.639 Run-time dependency libbsd found: NO (tried pkgconfig) 00:13:46.639 Run-time dependency jansson found: NO (tried pkgconfig) 00:13:46.639 Run-time dependency openssl found: YES 3.0.13 00:13:46.639 Run-time dependency libpcap found: NO (tried pkgconfig) 00:13:46.639 Library pcap found: YES 00:13:46.639 Has header "pcap.h" with dependency -lpcap: YES 00:13:46.639 Compiler for C supports arguments -Wcast-qual: YES 00:13:46.639 Compiler for C supports arguments -Wdeprecated: YES 00:13:46.639 Compiler for C supports arguments -Wformat: YES 
00:13:46.639 Compiler for C supports arguments -Wformat-nonliteral: YES 00:13:46.639 Compiler for C supports arguments -Wformat-security: YES 00:13:46.639 Compiler for C supports arguments -Wmissing-declarations: YES 00:13:46.640 Compiler for C supports arguments -Wmissing-prototypes: YES 00:13:46.640 Compiler for C supports arguments -Wnested-externs: YES 00:13:46.640 Compiler for C supports arguments -Wold-style-definition: YES 00:13:46.640 Compiler for C supports arguments -Wpointer-arith: YES 00:13:46.640 Compiler for C supports arguments -Wsign-compare: YES 00:13:46.640 Compiler for C supports arguments -Wstrict-prototypes: YES 00:13:46.640 Compiler for C supports arguments -Wundef: YES 00:13:46.640 Compiler for C supports arguments -Wwrite-strings: YES 00:13:46.640 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:13:46.640 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:13:46.640 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:13:46.640 Compiler for C supports arguments -mavx512f: YES 00:13:46.640 Checking if "AVX512 checking" compiles: YES 00:13:46.640 Fetching value of define "__SSE4_2__" : 1 00:13:46.640 Fetching value of define "__AES__" : 1 00:13:46.640 Fetching value of define "__AVX__" : 1 00:13:46.640 Fetching value of define "__AVX2__" : 1 00:13:46.640 Fetching value of define "__AVX512BW__" : 1 00:13:46.640 Fetching value of define "__AVX512CD__" : 1 00:13:46.640 Fetching value of define "__AVX512DQ__" : 1 00:13:46.640 Fetching value of define "__AVX512F__" : 1 00:13:46.640 Fetching value of define "__AVX512VL__" : 1 00:13:46.640 Fetching value of define "__PCLMUL__" : 1 00:13:46.640 Fetching value of define "__RDRND__" : 1 00:13:46.640 Fetching value of define "__RDSEED__" : 1 00:13:46.640 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:13:46.640 Fetching value of define "__znver1__" : (undefined) 00:13:46.640 Fetching value of define "__znver2__" : (undefined) 00:13:46.640 Fetching value of define "__znver3__" : (undefined) 00:13:46.640 Fetching value of define "__znver4__" : (undefined) 00:13:46.640 Compiler for C supports arguments -Wno-format-truncation: NO 00:13:46.640 Message: lib/log: Defining dependency "log" 00:13:46.640 Message: lib/kvargs: Defining dependency "kvargs" 00:13:46.640 Message: lib/telemetry: Defining dependency "telemetry" 00:13:46.640 Checking if "Detect argument count for CPU_OR" compiles: YES 00:13:46.640 Checking for function "getentropy" : YES 00:13:46.640 Message: lib/eal: Defining dependency "eal" 00:13:46.640 Message: lib/ring: Defining dependency "ring" 00:13:46.640 Message: lib/rcu: Defining dependency "rcu" 00:13:46.640 Message: lib/mempool: Defining dependency "mempool" 00:13:46.640 Message: lib/mbuf: Defining dependency "mbuf" 00:13:46.640 Fetching value of define "__PCLMUL__" : 1 (cached) 00:13:46.640 Fetching value of define "__AVX512F__" : 1 (cached) 00:13:46.640 Fetching value of define "__AVX512BW__" : 1 (cached) 00:13:46.640 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:13:46.640 Fetching value of define "__AVX512VL__" : 1 (cached) 00:13:46.640 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:13:46.640 Compiler for C supports arguments -mpclmul: YES 00:13:46.640 Compiler for C supports arguments -maes: YES 00:13:46.640 Compiler for C supports arguments -mavx512f: YES (cached) 00:13:46.640 Compiler for C supports arguments -mavx512bw: YES 00:13:46.640 Compiler for C supports arguments -mavx512dq: YES 00:13:46.640 Compiler for C 
supports arguments -mavx512vl: YES 00:13:46.640 Compiler for C supports arguments -mvpclmulqdq: YES 00:13:46.640 Compiler for C supports arguments -mavx2: YES 00:13:46.640 Compiler for C supports arguments -mavx: YES 00:13:46.640 Message: lib/net: Defining dependency "net" 00:13:46.640 Message: lib/meter: Defining dependency "meter" 00:13:46.640 Message: lib/ethdev: Defining dependency "ethdev" 00:13:46.640 Message: lib/pci: Defining dependency "pci" 00:13:46.640 Message: lib/cmdline: Defining dependency "cmdline" 00:13:46.640 Message: lib/hash: Defining dependency "hash" 00:13:46.640 Message: lib/timer: Defining dependency "timer" 00:13:46.640 Message: lib/compressdev: Defining dependency "compressdev" 00:13:46.640 Message: lib/cryptodev: Defining dependency "cryptodev" 00:13:46.640 Message: lib/dmadev: Defining dependency "dmadev" 00:13:46.640 Compiler for C supports arguments -Wno-cast-qual: YES 00:13:46.640 Message: lib/reorder: Defining dependency "reorder" 00:13:46.640 Message: lib/security: Defining dependency "security" 00:13:46.640 Has header "linux/userfaultfd.h" : NO 00:13:46.640 Has header "linux/vduse.h" : NO 00:13:46.640 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:13:46.640 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:13:46.640 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:13:46.640 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:13:46.640 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:13:46.640 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:13:46.640 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:13:46.640 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:13:46.640 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:13:46.640 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:13:46.640 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:13:46.640 Program doxygen found: YES (/usr/local/bin/doxygen) 00:13:46.640 Configuring doxy-api-html.conf using configuration 00:13:46.640 Configuring doxy-api-man.conf using configuration 00:13:46.640 Program mandb found: NO 00:13:46.640 Program sphinx-build found: NO 00:13:46.640 Configuring rte_build_config.h using configuration 00:13:46.640 Message: 00:13:46.640 ================= 00:13:46.640 Applications Enabled 00:13:46.640 ================= 00:13:46.640 00:13:46.640 apps: 00:13:46.640 00:13:46.640 00:13:46.640 Message: 00:13:46.640 ================= 00:13:46.640 Libraries Enabled 00:13:46.640 ================= 00:13:46.640 00:13:46.640 libs: 00:13:46.640 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:13:46.640 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:13:46.640 cryptodev, dmadev, reorder, security, 00:13:46.640 00:13:46.640 Message: 00:13:46.640 =============== 00:13:46.640 Drivers Enabled 00:13:46.640 =============== 00:13:46.640 00:13:46.640 common: 00:13:46.640 00:13:46.640 bus: 00:13:46.640 pci, vdev, 00:13:46.640 mempool: 00:13:46.640 ring, 00:13:46.640 dma: 00:13:46.640 00:13:46.640 net: 00:13:46.640 00:13:46.640 crypto: 00:13:46.640 00:13:46.640 compress: 00:13:46.640 00:13:46.640 00:13:46.640 Message: 00:13:46.640 ================= 00:13:46.640 Content Skipped 00:13:46.640 ================= 00:13:46.640 00:13:46.640 apps: 00:13:46.640 dumpcap: explicitly disabled via build config 00:13:46.640 graph: explicitly disabled via build config 
00:13:46.640 pdump: explicitly disabled via build config 00:13:46.640 proc-info: explicitly disabled via build config 00:13:46.640 test-acl: explicitly disabled via build config 00:13:46.640 test-bbdev: explicitly disabled via build config 00:13:46.640 test-cmdline: explicitly disabled via build config 00:13:46.640 test-compress-perf: explicitly disabled via build config 00:13:46.640 test-crypto-perf: explicitly disabled via build config 00:13:46.640 test-dma-perf: explicitly disabled via build config 00:13:46.640 test-eventdev: explicitly disabled via build config 00:13:46.640 test-fib: explicitly disabled via build config 00:13:46.640 test-flow-perf: explicitly disabled via build config 00:13:46.640 test-gpudev: explicitly disabled via build config 00:13:46.640 test-mldev: explicitly disabled via build config 00:13:46.640 test-pipeline: explicitly disabled via build config 00:13:46.640 test-pmd: explicitly disabled via build config 00:13:46.640 test-regex: explicitly disabled via build config 00:13:46.640 test-sad: explicitly disabled via build config 00:13:46.640 test-security-perf: explicitly disabled via build config 00:13:46.640 00:13:46.640 libs: 00:13:46.640 metrics: explicitly disabled via build config 00:13:46.640 acl: explicitly disabled via build config 00:13:46.640 bbdev: explicitly disabled via build config 00:13:46.640 bitratestats: explicitly disabled via build config 00:13:46.640 bpf: explicitly disabled via build config 00:13:46.640 cfgfile: explicitly disabled via build config 00:13:46.640 distributor: explicitly disabled via build config 00:13:46.640 efd: explicitly disabled via build config 00:13:46.640 eventdev: explicitly disabled via build config 00:13:46.640 dispatcher: explicitly disabled via build config 00:13:46.640 gpudev: explicitly disabled via build config 00:13:46.640 gro: explicitly disabled via build config 00:13:46.640 gso: explicitly disabled via build config 00:13:46.640 ip_frag: explicitly disabled via build config 00:13:46.640 jobstats: explicitly disabled via build config 00:13:46.640 latencystats: explicitly disabled via build config 00:13:46.640 lpm: explicitly disabled via build config 00:13:46.640 member: explicitly disabled via build config 00:13:46.640 pcapng: explicitly disabled via build config 00:13:46.640 power: only supported on Linux 00:13:46.640 rawdev: explicitly disabled via build config 00:13:46.640 regexdev: explicitly disabled via build config 00:13:46.640 mldev: explicitly disabled via build config 00:13:46.640 rib: explicitly disabled via build config 00:13:46.640 sched: explicitly disabled via build config 00:13:46.640 stack: explicitly disabled via build config 00:13:46.640 vhost: only supported on Linux 00:13:46.640 ipsec: explicitly disabled via build config 00:13:46.640 pdcp: explicitly disabled via build config 00:13:46.640 fib: explicitly disabled via build config 00:13:46.640 port: explicitly disabled via build config 00:13:46.640 pdump: explicitly disabled via build config 00:13:46.640 table: explicitly disabled via build config 00:13:46.640 pipeline: explicitly disabled via build config 00:13:46.640 graph: explicitly disabled via build config 00:13:46.640 node: explicitly disabled via build config 00:13:46.641 00:13:46.641 drivers: 00:13:46.641 common/cpt: not in enabled drivers build config 00:13:46.641 common/dpaax: not in enabled drivers build config 00:13:46.641 common/iavf: not in enabled drivers build config 00:13:46.641 common/idpf: not in enabled drivers build config 00:13:46.641 common/mvep: not in enabled 
drivers build config 00:13:46.641 common/octeontx: not in enabled drivers build config 00:13:46.641 bus/auxiliary: not in enabled drivers build config 00:13:46.641 bus/cdx: not in enabled drivers build config 00:13:46.641 bus/dpaa: not in enabled drivers build config 00:13:46.641 bus/fslmc: not in enabled drivers build config 00:13:46.641 bus/ifpga: not in enabled drivers build config 00:13:46.641 bus/platform: not in enabled drivers build config 00:13:46.641 bus/vmbus: not in enabled drivers build config 00:13:46.641 common/cnxk: not in enabled drivers build config 00:13:46.641 common/mlx5: not in enabled drivers build config 00:13:46.641 common/nfp: not in enabled drivers build config 00:13:46.641 common/qat: not in enabled drivers build config 00:13:46.641 common/sfc_efx: not in enabled drivers build config 00:13:46.641 mempool/bucket: not in enabled drivers build config 00:13:46.641 mempool/cnxk: not in enabled drivers build config 00:13:46.641 mempool/dpaa: not in enabled drivers build config 00:13:46.641 mempool/dpaa2: not in enabled drivers build config 00:13:46.641 mempool/octeontx: not in enabled drivers build config 00:13:46.641 mempool/stack: not in enabled drivers build config 00:13:46.641 dma/cnxk: not in enabled drivers build config 00:13:46.641 dma/dpaa: not in enabled drivers build config 00:13:46.641 dma/dpaa2: not in enabled drivers build config 00:13:46.641 dma/hisilicon: not in enabled drivers build config 00:13:46.641 dma/idxd: not in enabled drivers build config 00:13:46.641 dma/ioat: not in enabled drivers build config 00:13:46.641 dma/skeleton: not in enabled drivers build config 00:13:46.641 net/af_packet: not in enabled drivers build config 00:13:46.641 net/af_xdp: not in enabled drivers build config 00:13:46.641 net/ark: not in enabled drivers build config 00:13:46.641 net/atlantic: not in enabled drivers build config 00:13:46.641 net/avp: not in enabled drivers build config 00:13:46.641 net/axgbe: not in enabled drivers build config 00:13:46.641 net/bnx2x: not in enabled drivers build config 00:13:46.641 net/bnxt: not in enabled drivers build config 00:13:46.641 net/bonding: not in enabled drivers build config 00:13:46.641 net/cnxk: not in enabled drivers build config 00:13:46.641 net/cpfl: not in enabled drivers build config 00:13:46.641 net/cxgbe: not in enabled drivers build config 00:13:46.641 net/dpaa: not in enabled drivers build config 00:13:46.641 net/dpaa2: not in enabled drivers build config 00:13:46.641 net/e1000: not in enabled drivers build config 00:13:46.641 net/ena: not in enabled drivers build config 00:13:46.641 net/enetc: not in enabled drivers build config 00:13:46.641 net/enetfec: not in enabled drivers build config 00:13:46.641 net/enic: not in enabled drivers build config 00:13:46.641 net/failsafe: not in enabled drivers build config 00:13:46.641 net/fm10k: not in enabled drivers build config 00:13:46.641 net/gve: not in enabled drivers build config 00:13:46.641 net/hinic: not in enabled drivers build config 00:13:46.641 net/hns3: not in enabled drivers build config 00:13:46.641 net/i40e: not in enabled drivers build config 00:13:46.641 net/iavf: not in enabled drivers build config 00:13:46.641 net/ice: not in enabled drivers build config 00:13:46.641 net/idpf: not in enabled drivers build config 00:13:46.641 net/igc: not in enabled drivers build config 00:13:46.641 net/ionic: not in enabled drivers build config 00:13:46.641 net/ipn3ke: not in enabled drivers build config 00:13:46.641 net/ixgbe: not in enabled drivers build config 
00:13:46.641 net/mana: not in enabled drivers build config 00:13:46.641 net/memif: not in enabled drivers build config 00:13:46.641 net/mlx4: not in enabled drivers build config 00:13:46.641 net/mlx5: not in enabled drivers build config 00:13:46.641 net/mvneta: not in enabled drivers build config 00:13:46.641 net/mvpp2: not in enabled drivers build config 00:13:46.641 net/netvsc: not in enabled drivers build config 00:13:46.641 net/nfb: not in enabled drivers build config 00:13:46.641 net/nfp: not in enabled drivers build config 00:13:46.641 net/ngbe: not in enabled drivers build config 00:13:46.641 net/null: not in enabled drivers build config 00:13:46.641 net/octeontx: not in enabled drivers build config 00:13:46.641 net/octeon_ep: not in enabled drivers build config 00:13:46.641 net/pcap: not in enabled drivers build config 00:13:46.641 net/pfe: not in enabled drivers build config 00:13:46.641 net/qede: not in enabled drivers build config 00:13:46.641 net/ring: not in enabled drivers build config 00:13:46.641 net/sfc: not in enabled drivers build config 00:13:46.641 net/softnic: not in enabled drivers build config 00:13:46.641 net/tap: not in enabled drivers build config 00:13:46.641 net/thunderx: not in enabled drivers build config 00:13:46.641 net/txgbe: not in enabled drivers build config 00:13:46.641 net/vdev_netvsc: not in enabled drivers build config 00:13:46.641 net/vhost: not in enabled drivers build config 00:13:46.641 net/virtio: not in enabled drivers build config 00:13:46.641 net/vmxnet3: not in enabled drivers build config 00:13:46.641 raw/*: missing internal dependency, "rawdev" 00:13:46.641 crypto/armv8: not in enabled drivers build config 00:13:46.641 crypto/bcmfs: not in enabled drivers build config 00:13:46.641 crypto/caam_jr: not in enabled drivers build config 00:13:46.641 crypto/ccp: not in enabled drivers build config 00:13:46.641 crypto/cnxk: not in enabled drivers build config 00:13:46.641 crypto/dpaa_sec: not in enabled drivers build config 00:13:46.641 crypto/dpaa2_sec: not in enabled drivers build config 00:13:46.641 crypto/ipsec_mb: not in enabled drivers build config 00:13:46.641 crypto/mlx5: not in enabled drivers build config 00:13:46.641 crypto/mvsam: not in enabled drivers build config 00:13:46.641 crypto/nitrox: not in enabled drivers build config 00:13:46.641 crypto/null: not in enabled drivers build config 00:13:46.641 crypto/octeontx: not in enabled drivers build config 00:13:46.641 crypto/openssl: not in enabled drivers build config 00:13:46.641 crypto/scheduler: not in enabled drivers build config 00:13:46.641 crypto/uadk: not in enabled drivers build config 00:13:46.641 crypto/virtio: not in enabled drivers build config 00:13:46.641 compress/isal: not in enabled drivers build config 00:13:46.641 compress/mlx5: not in enabled drivers build config 00:13:46.641 compress/octeontx: not in enabled drivers build config 00:13:46.641 compress/zlib: not in enabled drivers build config 00:13:46.641 regex/*: missing internal dependency, "regexdev" 00:13:46.641 ml/*: missing internal dependency, "mldev" 00:13:46.641 vdpa/*: missing internal dependency, "vhost" 00:13:46.641 event/*: missing internal dependency, "eventdev" 00:13:46.641 baseband/*: missing internal dependency, "bbdev" 00:13:46.641 gpu/*: missing internal dependency, "gpudev" 00:13:46.641 00:13:46.641 00:13:47.208 Build targets in project: 81 00:13:47.208 00:13:47.208 DPDK 23.11.0 00:13:47.208 00:13:47.208 User defined options 00:13:47.208 default_library : static 00:13:47.208 libdir : lib 
00:13:47.208 prefix : / 00:13:47.208 c_args : -fPIC -Werror 00:13:47.208 c_link_args : 00:13:47.208 cpu_instruction_set: native 00:13:47.208 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:13:47.208 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:13:47.208 enable_docs : false 00:13:47.208 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:13:47.208 enable_kmods : true 00:13:47.208 tests : false 00:13:47.208 00:13:47.208 Found ninja-1.11.1 at /usr/local/bin/ninja 00:13:47.467 ninja: Entering directory `/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:13:47.725 [1/231] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:13:47.725 [2/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:13:47.725 [3/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:13:47.725 [4/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:13:47.725 [5/231] Compiling C object lib/librte_log.a.p/log_log.c.o 00:13:47.725 [6/231] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:13:47.725 [7/231] Linking static target lib/librte_log.a 00:13:47.725 [8/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:13:47.725 [9/231] Linking static target lib/librte_kvargs.a 00:13:47.725 [10/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:13:47.982 [11/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:13:47.982 [12/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:13:47.982 [13/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:13:47.982 [14/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:13:47.983 [15/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:13:47.983 [16/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:13:47.983 [17/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:13:47.983 [18/231] Linking static target lib/librte_telemetry.a 00:13:48.241 [19/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:13:48.241 [20/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:13:48.241 [21/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:13:48.241 [22/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:13:48.241 [23/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:13:48.241 [24/231] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:13:48.241 [25/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:13:48.241 [26/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:13:48.498 [27/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:13:48.499 [28/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:13:48.499 [29/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 
00:13:48.499 [30/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:13:48.499 [31/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:13:48.499 [32/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:13:48.499 [33/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:13:48.499 [34/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:13:48.499 [35/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:13:48.499 [36/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:13:48.756 [37/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:13:48.756 [38/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:13:48.756 [39/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:13:48.756 [40/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:13:48.757 [41/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:13:48.757 [42/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:13:48.757 [43/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:13:49.014 [44/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:13:49.014 [45/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:13:49.014 [46/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:13:49.014 [47/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:13:49.014 [48/231] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:13:49.014 [49/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:13:49.014 [50/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:13:49.014 [51/231] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:13:49.014 [52/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:13:49.014 [53/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:13:49.014 [54/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:13:49.014 [55/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:13:49.014 [56/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:13:49.271 [57/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:13:49.271 [58/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:13:49.271 [59/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:13:49.271 [60/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:13:49.271 [61/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:13:49.271 [62/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:13:49.271 [63/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:13:49.271 [64/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:13:49.271 [65/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:13:49.271 [66/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:13:49.271 [67/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:13:49.529 [68/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:13:49.529 [69/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 
00:13:49.529 [70/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:13:49.529 [71/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:13:49.529 [72/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:13:49.529 [73/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:13:49.529 [74/231] Linking static target lib/librte_eal.a 00:13:49.529 [75/231] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:13:49.529 [76/231] Linking static target lib/librte_ring.a 00:13:49.787 [77/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:13:49.787 [78/231] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:13:49.787 [79/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:13:49.787 [80/231] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:13:49.787 [81/231] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:13:49.787 [82/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:13:49.787 [83/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:13:49.787 [84/231] Linking target lib/librte_log.so.24.0 00:13:50.045 [85/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:13:50.045 [86/231] Linking static target lib/librte_mempool.a 00:13:50.045 [87/231] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:13:50.045 [88/231] Linking target lib/librte_kvargs.so.24.0 00:13:50.045 [89/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:13:50.045 [90/231] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:13:50.302 [91/231] Linking static target lib/net/libnet_crc_avx512_lib.a 00:13:50.302 [92/231] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:13:50.302 [93/231] Linking target lib/librte_telemetry.so.24.0 00:13:50.302 [94/231] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:13:50.302 [95/231] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:13:50.302 [96/231] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:13:50.302 [97/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:13:50.302 [98/231] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:13:50.302 [99/231] Linking static target lib/librte_mbuf.a 00:13:50.302 [100/231] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:13:50.302 [101/231] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:13:50.302 [102/231] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:13:50.302 [103/231] Linking static target lib/librte_rcu.a 00:13:50.559 [104/231] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:13:50.559 [105/231] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:13:50.559 [106/231] Linking static target lib/librte_net.a 00:13:50.559 [107/231] Linking static target lib/librte_meter.a 00:13:50.559 [108/231] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:13:50.559 [109/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:13:50.559 [110/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:13:50.559 [111/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:13:50.559 [112/231] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:13:50.817 [113/231] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:13:50.817 [114/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:13:51.075 [115/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:13:51.075 [116/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:13:51.075 [117/231] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:13:51.075 [118/231] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:13:51.075 [119/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:13:51.075 [120/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:13:51.075 [121/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:13:51.075 [122/231] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:13:51.332 [123/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:13:51.333 [124/231] Linking static target lib/librte_pci.a 00:13:51.333 [125/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:13:51.333 [126/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:13:51.333 [127/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:13:51.333 [128/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:13:51.333 [129/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:13:51.333 [130/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:13:51.333 [131/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:13:51.590 [132/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:13:51.590 [133/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:13:51.590 [134/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:13:51.590 [135/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:13:51.590 [136/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:13:51.590 [137/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:13:51.590 [138/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:13:51.590 [139/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:13:51.590 [140/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:13:51.590 [141/231] Linking static target lib/librte_cmdline.a 00:13:51.848 [142/231] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:13:51.848 [143/231] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:51.848 [144/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:13:51.848 [145/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:13:51.848 [146/231] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:13:52.107 [147/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:13:52.107 [148/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:13:52.107 [149/231] Linking static target lib/librte_compressdev.a 00:13:52.107 [150/231] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:13:52.107 [151/231] Linking static 
target lib/librte_timer.a 00:13:52.107 [152/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:13:52.107 [153/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:13:52.366 [154/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:13:52.366 [155/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:13:52.366 [156/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:13:52.366 [157/231] Linking static target lib/librte_ethdev.a 00:13:52.366 [158/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:13:52.366 [159/231] Linking static target lib/librte_dmadev.a 00:13:52.366 [160/231] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:13:52.366 [161/231] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:13:52.366 [162/231] Linking static target lib/librte_reorder.a 00:13:52.625 [163/231] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:52.625 [164/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:13:52.625 [165/231] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:13:52.625 [166/231] Linking static target lib/librte_security.a 00:13:52.625 [167/231] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:13:52.625 [168/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:13:52.625 [169/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:13:52.625 [170/231] Linking static target lib/librte_hash.a 00:13:52.884 [171/231] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:13:52.884 [172/231] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:52.884 [173/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:13:52.884 [174/231] Linking static target drivers/libtmp_rte_bus_pci.a 00:13:52.884 [175/231] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:13:52.884 [176/231] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:13:53.143 [177/231] Generating kernel/freebsd/contigmem with a custom command 00:13:53.143 machine -> /usr/src/sys/amd64/include 00:13:53.143 x86 -> /usr/src/sys/x86/include 00:13:53.143 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:13:53.143 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:13:53.143 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:13:53.143 touch opt_global.h 00:13:53.144 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:13:53.144 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:13:53.144 :> export_syms 00:13:53.144 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:13:53.144 objcopy --strip-debug contigmem.ko 00:13:53.144 [178/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:13:53.144 [179/231] Linking static target drivers/libtmp_rte_bus_vdev.a 00:13:53.144 [180/231] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:13:53.144 [181/231] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:53.144 [182/231] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:53.144 [183/231] Linking static target drivers/librte_bus_pci.a 00:13:53.144 [184/231] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:13:53.144 [185/231] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:13:53.144 [186/231] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:53.144 [187/231] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:53.144 [188/231] Linking static target drivers/librte_bus_vdev.a 00:13:53.402 [189/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:13:53.403 [190/231] Generating kernel/freebsd/nic_uio with a custom command 00:13:53.403 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:13:53.403 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:13:53.403 :> export_syms 00:13:53.403 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:13:53.403 objcopy --strip-debug nic_uio.ko 00:13:53.403 [191/231] Linking static target lib/librte_cryptodev.a 00:13:53.403 [192/231] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:53.403 [193/231] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:53.661 [194/231] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:13:53.661 [195/231] Linking static target drivers/libtmp_rte_mempool_ring.a 00:13:53.920 [196/231] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:13:53.920 [197/231] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:53.920 [198/231] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:53.920 [199/231] Linking static target drivers/librte_mempool_ring.a 00:13:54.212 [200/231] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:58.438 [201/231] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:00.340 [202/231] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:14:00.341 [203/231] Linking target lib/librte_eal.so.24.0 00:14:00.600 [204/231] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:14:00.600 [205/231] Linking target lib/librte_meter.so.24.0 00:14:00.600 [206/231] Linking target lib/librte_timer.so.24.0 00:14:00.600 [207/231] Linking target lib/librte_dmadev.so.24.0 00:14:00.600 [208/231] Linking target lib/librte_ring.so.24.0 00:14:00.600 [209/231] Linking target lib/librte_pci.so.24.0 00:14:00.600 [210/231] Linking target drivers/librte_bus_vdev.so.24.0 00:14:00.858 [211/231] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:14:00.858 [212/231] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:14:00.858 [213/231] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:14:00.858 [214/231] Linking target lib/librte_mempool.so.24.0 00:14:00.858 [215/231] Linking target lib/librte_rcu.so.24.0 00:14:00.858 
[216/231] Linking target drivers/librte_bus_pci.so.24.0 00:14:00.858 [217/231] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:14:00.858 [218/231] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:14:00.858 [219/231] Linking target drivers/librte_mempool_ring.so.24.0 00:14:00.858 [220/231] Linking target lib/librte_mbuf.so.24.0 00:14:01.116 [221/231] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:14:01.116 [222/231] Linking target lib/librte_reorder.so.24.0 00:14:01.116 [223/231] Linking target lib/librte_compressdev.so.24.0 00:14:01.116 [224/231] Linking target lib/librte_net.so.24.0 00:14:01.116 [225/231] Linking target lib/librte_cryptodev.so.24.0 00:14:01.375 [226/231] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:14:01.375 [227/231] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:14:01.375 [228/231] Linking target lib/librte_security.so.24.0 00:14:01.375 [229/231] Linking target lib/librte_cmdline.so.24.0 00:14:01.375 [230/231] Linking target lib/librte_hash.so.24.0 00:14:01.375 [231/231] Linking target lib/librte_ethdev.so.24.0 00:14:01.375 INFO: autodetecting backend as ninja 00:14:01.375 INFO: calculating backend command to run: /usr/local/bin/ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:14:02.311 CC lib/ut_mock/mock.o 00:14:02.311 CC lib/ut/ut.o 00:14:02.311 CC lib/log/log.o 00:14:02.311 CC lib/log/log_flags.o 00:14:02.311 CC lib/log/log_deprecated.o 00:14:02.311 LIB libspdk_ut_mock.a 00:14:02.311 LIB libspdk_ut.a 00:14:02.311 LIB libspdk_log.a 00:14:02.569 CC lib/ioat/ioat.o 00:14:02.569 CXX lib/trace_parser/trace.o 00:14:02.569 CC lib/util/base64.o 00:14:02.569 CC lib/util/bit_array.o 00:14:02.569 CC lib/util/cpuset.o 00:14:02.569 CC lib/dma/dma.o 00:14:02.569 CC lib/util/crc16.o 00:14:02.569 CC lib/util/crc32.o 00:14:02.569 CC lib/util/crc32_ieee.o 00:14:02.569 CC lib/util/crc32c.o 00:14:02.569 CC lib/util/crc64.o 00:14:02.569 CC lib/util/dif.o 00:14:02.569 CC lib/util/fd.o 00:14:02.569 CC lib/util/file.o 00:14:02.569 CC lib/util/hexlify.o 00:14:02.569 LIB libspdk_dma.a 00:14:02.569 CC lib/util/iov.o 00:14:02.569 CC lib/util/math.o 00:14:02.569 CC lib/util/pipe.o 00:14:02.569 CC lib/util/strerror_tls.o 00:14:02.827 CC lib/util/string.o 00:14:02.827 CC lib/util/uuid.o 00:14:02.827 CC lib/util/fd_group.o 00:14:02.827 CC lib/util/xor.o 00:14:02.827 LIB libspdk_ioat.a 00:14:02.827 CC lib/util/zipf.o 00:14:03.394 LIB libspdk_trace_parser.a 00:14:03.652 LIB libspdk_util.a 00:14:03.652 CC lib/vmd/led.o 00:14:03.652 CC lib/vmd/vmd.o 00:14:03.652 CC lib/rdma/common.o 00:14:03.652 CC lib/env_dpdk/env.o 00:14:03.652 CC lib/rdma/rdma_verbs.o 00:14:03.652 CC lib/env_dpdk/memory.o 00:14:03.652 CC lib/conf/conf.o 00:14:03.652 CC lib/idxd/idxd.o 00:14:03.652 CC lib/idxd/idxd_user.o 00:14:03.652 CC lib/json/json_parse.o 00:14:03.652 CC lib/env_dpdk/pci.o 00:14:03.652 CC lib/json/json_util.o 00:14:03.909 LIB libspdk_rdma.a 00:14:03.909 CC lib/env_dpdk/init.o 00:14:03.909 CC lib/env_dpdk/threads.o 00:14:03.909 LIB libspdk_conf.a 00:14:03.909 CC lib/env_dpdk/pci_ioat.o 00:14:03.909 CC lib/json/json_write.o 00:14:03.909 CC lib/env_dpdk/pci_virtio.o 00:14:03.909 CC lib/env_dpdk/pci_vmd.o 00:14:03.909 CC lib/env_dpdk/pci_idxd.o 00:14:03.909 CC lib/env_dpdk/pci_event.o 00:14:03.909 CC lib/env_dpdk/sigbus_handler.o 00:14:03.909 CC lib/env_dpdk/pci_dpdk.o 00:14:03.909 CC lib/env_dpdk/pci_dpdk_2207.o 00:14:03.909 CC 
lib/env_dpdk/pci_dpdk_2211.o 00:14:04.166 LIB libspdk_idxd.a 00:14:04.166 LIB libspdk_vmd.a 00:14:04.423 LIB libspdk_json.a 00:14:04.423 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:14:04.424 CC lib/jsonrpc/jsonrpc_server.o 00:14:04.424 CC lib/jsonrpc/jsonrpc_client.o 00:14:04.424 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:14:04.681 LIB libspdk_jsonrpc.a 00:14:04.681 CC lib/rpc/rpc.o 00:14:04.939 LIB libspdk_env_dpdk.a 00:14:04.939 LIB libspdk_rpc.a 00:14:04.939 CC lib/trace/trace.o 00:14:04.939 CC lib/trace/trace_rpc.o 00:14:04.939 CC lib/trace/trace_flags.o 00:14:04.939 CC lib/sock/sock.o 00:14:04.939 CC lib/notify/notify.o 00:14:04.939 CC lib/sock/sock_rpc.o 00:14:04.939 CC lib/notify/notify_rpc.o 00:14:05.196 LIB libspdk_notify.a 00:14:05.196 LIB libspdk_trace.a 00:14:05.196 LIB libspdk_sock.a 00:14:05.454 CC lib/thread/iobuf.o 00:14:05.454 CC lib/thread/thread.o 00:14:05.454 CC lib/nvme/nvme_ctrlr.o 00:14:05.454 CC lib/nvme/nvme_ctrlr_cmd.o 00:14:05.454 CC lib/nvme/nvme_fabric.o 00:14:05.454 CC lib/nvme/nvme_ns.o 00:14:05.454 CC lib/nvme/nvme_ns_cmd.o 00:14:05.454 CC lib/nvme/nvme_pcie_common.o 00:14:05.454 CC lib/nvme/nvme_pcie.o 00:14:05.454 CC lib/nvme/nvme_qpair.o 00:14:05.454 CC lib/nvme/nvme.o 00:14:05.711 CC lib/nvme/nvme_quirks.o 00:14:05.968 CC lib/nvme/nvme_transport.o 00:14:05.968 CC lib/nvme/nvme_discovery.o 00:14:05.968 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:14:05.968 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:14:05.969 CC lib/nvme/nvme_tcp.o 00:14:06.226 CC lib/nvme/nvme_opal.o 00:14:06.226 CC lib/nvme/nvme_io_msg.o 00:14:06.226 CC lib/nvme/nvme_poll_group.o 00:14:06.226 LIB libspdk_thread.a 00:14:06.226 CC lib/nvme/nvme_zns.o 00:14:06.226 CC lib/accel/accel.o 00:14:06.483 CC lib/blob/blobstore.o 00:14:06.483 CC lib/init/json_config.o 00:14:06.483 CC lib/init/subsystem.o 00:14:06.483 CC lib/blob/request.o 00:14:06.483 CC lib/nvme/nvme_cuse.o 00:14:06.483 CC lib/init/subsystem_rpc.o 00:14:06.483 CC lib/accel/accel_rpc.o 00:14:06.483 CC lib/blob/zeroes.o 00:14:06.483 CC lib/accel/accel_sw.o 00:14:06.741 CC lib/init/rpc.o 00:14:06.741 CC lib/nvme/nvme_rdma.o 00:14:06.741 CC lib/blob/blob_bs_dev.o 00:14:06.741 LIB libspdk_init.a 00:14:06.741 CC lib/event/app.o 00:14:06.741 CC lib/event/reactor.o 00:14:06.741 CC lib/event/log_rpc.o 00:14:06.741 CC lib/event/app_rpc.o 00:14:06.741 CC lib/event/scheduler_static.o 00:14:06.999 LIB libspdk_accel.a 00:14:06.999 LIB libspdk_event.a 00:14:07.257 CC lib/bdev/bdev_rpc.o 00:14:07.257 CC lib/bdev/bdev.o 00:14:07.257 CC lib/bdev/bdev_zone.o 00:14:07.257 CC lib/bdev/scsi_nvme.o 00:14:07.257 CC lib/bdev/part.o 00:14:07.516 LIB libspdk_nvme.a 00:14:08.450 LIB libspdk_blob.a 00:14:08.450 CC lib/lvol/lvol.o 00:14:08.450 CC lib/blobfs/blobfs.o 00:14:08.450 CC lib/blobfs/tree.o 00:14:09.016 LIB libspdk_bdev.a 00:14:09.016 LIB libspdk_lvol.a 00:14:09.016 CC lib/nvmf/ctrlr.o 00:14:09.016 CC lib/nvmf/ctrlr_discovery.o 00:14:09.016 CC lib/nvmf/ctrlr_bdev.o 00:14:09.016 CC lib/nvmf/subsystem.o 00:14:09.016 CC lib/nvmf/nvmf.o 00:14:09.016 CC lib/nvmf/nvmf_rpc.o 00:14:09.016 CC lib/scsi/dev.o 00:14:09.016 CC lib/nvmf/transport.o 00:14:09.016 CC lib/scsi/lun.o 00:14:09.016 LIB libspdk_blobfs.a 00:14:09.016 CC lib/scsi/port.o 00:14:09.276 CC lib/nvmf/tcp.o 00:14:09.276 CC lib/scsi/scsi.o 00:14:09.276 CC lib/nvmf/rdma.o 00:14:09.276 CC lib/scsi/scsi_bdev.o 00:14:09.276 CC lib/scsi/scsi_pr.o 00:14:09.276 CC lib/scsi/scsi_rpc.o 00:14:09.536 CC lib/scsi/task.o 00:14:09.536 LIB libspdk_scsi.a 00:14:09.794 CC lib/iscsi/conn.o 00:14:09.794 CC lib/iscsi/iscsi.o 00:14:09.794 CC 
lib/iscsi/init_grp.o 00:14:09.794 CC lib/iscsi/tgt_node.o 00:14:09.794 CC lib/iscsi/md5.o 00:14:09.794 CC lib/iscsi/param.o 00:14:09.794 CC lib/iscsi/portal_grp.o 00:14:09.794 CC lib/iscsi/iscsi_subsystem.o 00:14:09.794 CC lib/iscsi/iscsi_rpc.o 00:14:10.052 CC lib/iscsi/task.o 00:14:10.616 LIB libspdk_nvmf.a 00:14:11.183 LIB libspdk_iscsi.a 00:14:11.183 CC module/env_dpdk/env_dpdk_rpc.o 00:14:11.183 CC module/blob/bdev/blob_bdev.o 00:14:11.183 CC module/accel/ioat/accel_ioat.o 00:14:11.183 CC module/scheduler/dynamic/scheduler_dynamic.o 00:14:11.183 CC module/accel/ioat/accel_ioat_rpc.o 00:14:11.183 CC module/accel/error/accel_error.o 00:14:11.183 CC module/accel/error/accel_error_rpc.o 00:14:11.183 CC module/accel/dsa/accel_dsa.o 00:14:11.183 CC module/accel/iaa/accel_iaa.o 00:14:11.183 CC module/sock/posix/posix.o 00:14:11.183 LIB libspdk_env_dpdk_rpc.a 00:14:11.183 CC module/accel/iaa/accel_iaa_rpc.o 00:14:11.183 CC module/accel/dsa/accel_dsa_rpc.o 00:14:11.441 LIB libspdk_accel_ioat.a 00:14:11.441 LIB libspdk_accel_error.a 00:14:11.441 LIB libspdk_scheduler_dynamic.a 00:14:11.441 LIB libspdk_accel_iaa.a 00:14:11.441 LIB libspdk_accel_dsa.a 00:14:11.441 LIB libspdk_blob_bdev.a 00:14:11.441 CC module/bdev/delay/vbdev_delay.o 00:14:11.441 CC module/bdev/gpt/gpt.o 00:14:11.441 CC module/bdev/null/bdev_null.o 00:14:11.441 CC module/bdev/nvme/bdev_nvme.o 00:14:11.441 CC module/bdev/lvol/vbdev_lvol.o 00:14:11.441 CC module/blobfs/bdev/blobfs_bdev.o 00:14:11.441 CC module/bdev/error/vbdev_error.o 00:14:11.441 CC module/bdev/malloc/bdev_malloc.o 00:14:11.441 CC module/bdev/passthru/vbdev_passthru.o 00:14:11.699 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:14:11.699 CC module/bdev/gpt/vbdev_gpt.o 00:14:11.699 CC module/bdev/null/bdev_null_rpc.o 00:14:11.699 CC module/bdev/error/vbdev_error_rpc.o 00:14:11.699 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:14:11.699 LIB libspdk_blobfs_bdev.a 00:14:11.699 CC module/bdev/delay/vbdev_delay_rpc.o 00:14:11.699 CC module/bdev/malloc/bdev_malloc_rpc.o 00:14:11.699 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:14:11.699 LIB libspdk_bdev_null.a 00:14:11.699 CC module/bdev/nvme/bdev_nvme_rpc.o 00:14:11.699 LIB libspdk_bdev_error.a 00:14:11.699 LIB libspdk_bdev_passthru.a 00:14:11.699 LIB libspdk_sock_posix.a 00:14:11.699 CC module/bdev/nvme/nvme_rpc.o 00:14:11.699 CC module/bdev/nvme/bdev_mdns_client.o 00:14:11.699 LIB libspdk_bdev_delay.a 00:14:11.957 LIB libspdk_bdev_malloc.a 00:14:11.957 LIB libspdk_bdev_gpt.a 00:14:11.957 CC module/bdev/split/vbdev_split.o 00:14:11.957 CC module/bdev/raid/bdev_raid.o 00:14:11.957 CC module/bdev/zone_block/vbdev_zone_block.o 00:14:11.957 CC module/bdev/raid/bdev_raid_rpc.o 00:14:11.957 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:14:11.957 CC module/bdev/aio/bdev_aio.o 00:14:11.957 LIB libspdk_bdev_lvol.a 00:14:11.957 CC module/bdev/raid/bdev_raid_sb.o 00:14:11.957 CC module/bdev/aio/bdev_aio_rpc.o 00:14:11.957 CC module/bdev/split/vbdev_split_rpc.o 00:14:11.957 CC module/bdev/raid/raid0.o 00:14:11.957 CC module/bdev/raid/raid1.o 00:14:11.957 CC module/bdev/raid/concat.o 00:14:12.216 LIB libspdk_bdev_split.a 00:14:12.216 LIB libspdk_bdev_aio.a 00:14:12.216 LIB libspdk_bdev_zone_block.a 00:14:12.474 LIB libspdk_bdev_raid.a 00:14:13.042 LIB libspdk_bdev_nvme.a 00:14:13.300 CC module/event/subsystems/sock/sock.o 00:14:13.300 CC module/event/subsystems/iobuf/iobuf.o 00:14:13.300 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:14:13.300 CC module/event/subsystems/scheduler/scheduler.o 00:14:13.300 CC 
module/event/subsystems/vmd/vmd.o 00:14:13.300 CC module/event/subsystems/vmd/vmd_rpc.o 00:14:13.300 LIB libspdk_event_sock.a 00:14:13.558 LIB libspdk_event_scheduler.a 00:14:13.558 LIB libspdk_event_vmd.a 00:14:13.558 LIB libspdk_event_iobuf.a 00:14:13.558 CC module/event/subsystems/accel/accel.o 00:14:13.817 LIB libspdk_event_accel.a 00:14:13.817 CC module/event/subsystems/bdev/bdev.o 00:14:14.075 LIB libspdk_event_bdev.a 00:14:14.075 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:14:14.075 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:14:14.075 CC module/event/subsystems/scsi/scsi.o 00:14:14.335 LIB libspdk_event_scsi.a 00:14:14.335 LIB libspdk_event_nvmf.a 00:14:14.335 CC module/event/subsystems/iscsi/iscsi.o 00:14:14.593 LIB libspdk_event_iscsi.a 00:14:14.593 CC app/trace_record/trace_record.o 00:14:14.593 CXX app/trace/trace.o 00:14:14.593 TEST_HEADER include/spdk/config.h 00:14:14.593 CXX test/cpp_headers/accel.o 00:14:14.593 CC app/nvmf_tgt/nvmf_main.o 00:14:14.593 CC examples/accel/perf/accel_perf.o 00:14:14.593 CC test/app/bdev_svc/bdev_svc.o 00:14:14.593 CC test/blobfs/mkfs/mkfs.o 00:14:14.593 CC test/dma/test_dma/test_dma.o 00:14:14.593 CC test/bdev/bdevio/bdevio.o 00:14:14.593 CC test/accel/dif/dif.o 00:14:14.852 LINK nvmf_tgt 00:14:14.852 CXX test/cpp_headers/accel_module.o 00:14:14.852 LINK bdev_svc 00:14:14.852 LINK mkfs 00:14:14.852 LINK spdk_trace_record 00:14:14.852 CXX test/cpp_headers/assert.o 00:14:15.110 LINK dif 00:14:15.110 CC test/app/histogram_perf/histogram_perf.o 00:14:15.110 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:14:15.110 LINK test_dma 00:14:15.110 LINK accel_perf 00:14:15.110 CC examples/bdev/hello_world/hello_bdev.o 00:14:15.110 CC test/env/mem_callbacks/mem_callbacks.o 00:14:15.110 CXX test/cpp_headers/barrier.o 00:14:15.110 LINK bdevio 00:14:15.110 LINK histogram_perf 00:14:15.110 CC test/env/vtophys/vtophys.o 00:14:15.110 CC app/iscsi_tgt/iscsi_tgt.o 00:14:15.368 CXX test/cpp_headers/base64.o 00:14:15.368 LINK hello_bdev 00:14:15.368 LINK spdk_trace 00:14:15.368 CC examples/bdev/bdevperf/bdevperf.o 00:14:15.368 CC test/event/event_perf/event_perf.o 00:14:15.368 CC examples/blob/hello_world/hello_blob.o 00:14:15.368 LINK vtophys 00:14:15.368 CXX test/cpp_headers/bdev.o 00:14:15.368 CXX test/cpp_headers/bdev_module.o 00:14:15.368 LINK iscsi_tgt 00:14:15.368 LINK event_perf 00:14:15.368 CC test/event/reactor/reactor.o 00:14:15.368 LINK nvme_fuzz 00:14:15.368 CC examples/ioat/perf/perf.o 00:14:15.368 LINK hello_blob 00:14:15.368 LINK reactor 00:14:15.626 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:14:15.626 LINK mem_callbacks 00:14:15.626 CC test/event/reactor_perf/reactor_perf.o 00:14:15.626 CXX test/cpp_headers/bdev_zone.o 00:14:15.626 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:14:15.626 CC app/spdk_tgt/spdk_tgt.o 00:14:15.626 CC examples/ioat/verify/verify.o 00:14:15.626 LINK ioat_perf 00:14:15.626 LINK env_dpdk_post_init 00:14:15.626 CC examples/blob/cli/blobcli.o 00:14:15.626 LINK reactor_perf 00:14:15.626 gmake[2]: Nothing to be done for 'all'. 
00:14:15.626 LINK spdk_tgt 00:14:15.626 CC app/spdk_lspci/spdk_lspci.o 00:14:15.626 CXX test/cpp_headers/bit_array.o 00:14:15.626 CC test/env/memory/memory_ut.o 00:14:15.885 CC test/env/pci/pci_ut.o 00:14:15.885 LINK spdk_lspci 00:14:15.885 CC examples/nvme/hello_world/hello_world.o 00:14:15.885 LINK verify 00:14:15.885 CXX test/cpp_headers/bit_pool.o 00:14:15.885 CXX test/cpp_headers/blob.o 00:14:15.885 CC app/spdk_nvme_perf/perf.o 00:14:15.885 CC test/nvme/aer/aer.o 00:14:15.885 LINK hello_world 00:14:16.143 CXX test/cpp_headers/blob_bdev.o 00:14:16.143 LINK bdevperf 00:14:16.143 CC app/spdk_nvme_identify/identify.o 00:14:16.143 LINK blobcli 00:14:16.143 CC examples/nvme/reconnect/reconnect.o 00:14:16.143 LINK pci_ut 00:14:16.143 LINK aer 00:14:16.143 CC app/spdk_nvme_discover/discovery_aer.o 00:14:16.143 CXX test/cpp_headers/blobfs.o 00:14:16.143 CC test/app/jsoncat/jsoncat.o 00:14:16.143 CC test/rpc_client/rpc_client_test.o 00:14:16.402 LINK jsoncat 00:14:16.402 CC test/nvme/reset/reset.o 00:14:16.402 CXX test/cpp_headers/blobfs_bdev.o 00:14:16.402 LINK reconnect 00:14:16.402 LINK spdk_nvme_discover 00:14:16.402 LINK rpc_client_test 00:14:16.402 CC test/thread/poller_perf/poller_perf.o 00:14:16.402 CC test/thread/lock/spdk_lock.o 00:14:16.402 CC examples/nvme/nvme_manage/nvme_manage.o 00:14:16.402 CC test/nvme/sgl/sgl.o 00:14:16.402 CXX test/cpp_headers/conf.o 00:14:16.402 LINK reset 00:14:16.402 LINK poller_perf 00:14:16.661 CC examples/nvme/arbitration/arbitration.o 00:14:16.661 CC test/nvme/e2edp/nvme_dp.o 00:14:16.661 CXX test/cpp_headers/config.o 00:14:16.661 LINK memory_ut 00:14:16.661 LINK spdk_nvme_perf 00:14:16.661 CXX test/cpp_headers/cpuset.o 00:14:16.661 LINK sgl 00:14:16.661 CC test/nvme/overhead/overhead.o 00:14:16.661 CC examples/nvme/hotplug/hotplug.o 00:14:16.661 CXX test/cpp_headers/crc16.o 00:14:16.919 CC app/spdk_top/spdk_top.o 00:14:16.919 LINK nvme_dp 00:14:16.919 CXX test/cpp_headers/crc32.o 00:14:16.919 LINK arbitration 00:14:16.919 LINK overhead 00:14:16.919 LINK hotplug 00:14:16.919 CXX test/cpp_headers/crc64.o 00:14:17.189 LINK spdk_nvme_identify 00:14:17.189 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:14:17.189 CC test/nvme/err_injection/err_injection.o 00:14:17.189 LINK nvme_manage 00:14:17.189 CC app/fio/nvme/fio_plugin.o 00:14:17.189 CC examples/sock/hello_world/hello_sock.o 00:14:17.189 CXX test/cpp_headers/dif.o 00:14:17.189 LINK iscsi_fuzz 00:14:17.189 LINK err_injection 00:14:17.189 CC examples/vmd/lsvmd/lsvmd.o 00:14:17.189 LINK histogram_ut 00:14:17.189 CC examples/nvme/cmb_copy/cmb_copy.o 00:14:17.189 fio_plugin.c:1491:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:14:17.189 struct spdk_nvme_fdp_ruhs ruhs; 00:14:17.189 ^ 00:14:17.189 CXX test/cpp_headers/dma.o 00:14:17.189 LINK lsvmd 00:14:17.189 LINK hello_sock 00:14:17.447 CC test/app/stub/stub.o 00:14:17.447 CC test/nvme/startup/startup.o 00:14:17.447 LINK cmb_copy 00:14:17.447 LINK spdk_lock 00:14:17.447 CXX test/cpp_headers/endian.o 00:14:17.447 CC examples/nvme/abort/abort.o 00:14:17.447 CC examples/vmd/led/led.o 00:14:17.447 CC test/unit/lib/accel/accel.c/accel_ut.o 00:14:17.447 LINK startup 00:14:17.447 LINK stub 00:14:17.447 LINK led 00:14:17.447 CC test/nvme/reserve/reserve.o 00:14:17.447 CXX test/cpp_headers/env.o 00:14:17.447 CC examples/nvmf/nvmf/nvmf.o 00:14:17.447 1 warning generated. 
00:14:17.447 LINK spdk_nvme 00:14:17.706 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:14:17.706 CC test/nvme/simple_copy/simple_copy.o 00:14:17.706 CC examples/util/zipf/zipf.o 00:14:17.706 LINK reserve 00:14:17.706 CXX test/cpp_headers/env_dpdk.o 00:14:17.706 LINK abort 00:14:17.706 CC app/fio/bdev/fio_plugin.o 00:14:17.706 LINK pmr_persistence 00:14:17.706 LINK zipf 00:14:17.706 CXX test/cpp_headers/event.o 00:14:17.706 LINK simple_copy 00:14:17.706 LINK nvmf 00:14:17.706 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:14:17.706 CC test/nvme/connect_stress/connect_stress.o 00:14:17.965 CC test/nvme/boot_partition/boot_partition.o 00:14:17.965 LINK spdk_top 00:14:17.965 CXX test/cpp_headers/fd.o 00:14:17.965 CC examples/thread/thread/thread_ex.o 00:14:17.965 CXX test/cpp_headers/fd_group.o 00:14:17.965 LINK boot_partition 00:14:17.965 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:14:17.965 CC test/nvme/compliance/nvme_compliance.o 00:14:17.965 LINK connect_stress 00:14:17.965 CC test/unit/lib/blob/blob.c/blob_ut.o 00:14:17.965 CXX test/cpp_headers/file.o 00:14:17.965 LINK spdk_bdev 00:14:17.965 LINK thread 00:14:17.965 CC examples/idxd/perf/perf.o 00:14:18.224 CC test/nvme/fused_ordering/fused_ordering.o 00:14:18.224 CXX test/cpp_headers/ftl.o 00:14:18.224 CC test/nvme/doorbell_aers/doorbell_aers.o 00:14:18.224 CC test/unit/lib/bdev/part.c/part_ut.o 00:14:18.224 LINK fused_ordering 00:14:18.224 LINK idxd_perf 00:14:18.224 CXX test/cpp_headers/gpt_spec.o 00:14:18.483 LINK doorbell_aers 00:14:18.483 CXX test/cpp_headers/hexlify.o 00:14:18.483 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:14:18.483 LINK blob_bdev_ut 00:14:18.483 LINK nvme_compliance 00:14:18.483 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:14:18.483 CXX test/cpp_headers/histogram_data.o 00:14:18.483 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:14:18.483 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:14:18.483 CC test/nvme/fdp/fdp.o 00:14:18.483 LINK tree_ut 00:14:18.483 CXX test/cpp_headers/idxd.o 00:14:18.742 LINK scsi_nvme_ut 00:14:18.742 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:14:18.742 CXX test/cpp_headers/idxd_spec.o 00:14:18.742 LINK fdp 00:14:18.742 CC test/unit/lib/dma/dma.c/dma_ut.o 00:14:18.742 LINK gpt_ut 00:14:18.742 CXX test/cpp_headers/init.o 00:14:19.002 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:14:19.002 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:14:19.002 CXX test/cpp_headers/ioat.o 00:14:19.002 CXX test/cpp_headers/ioat_spec.o 00:14:19.261 LINK dma_ut 00:14:19.261 CXX test/cpp_headers/iscsi_spec.o 00:14:19.261 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:14:19.261 CXX test/cpp_headers/json.o 00:14:19.520 CXX test/cpp_headers/jsonrpc.o 00:14:19.520 CXX test/cpp_headers/likely.o 00:14:19.779 CXX test/cpp_headers/log.o 00:14:19.779 LINK accel_ut 00:14:19.779 CXX test/cpp_headers/lvol.o 00:14:19.779 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:14:20.038 LINK vbdev_lvol_ut 00:14:20.038 LINK blobfs_async_ut 00:14:20.038 CXX test/cpp_headers/memory.o 00:14:20.038 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:14:20.038 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:14:20.038 LINK blobfs_sync_ut 00:14:20.038 LINK bdev_zone_ut 00:14:20.038 CXX test/cpp_headers/mmio.o 00:14:20.038 CC test/unit/lib/event/app.c/app_ut.o 00:14:20.038 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:14:20.297 LINK blobfs_bdev_ut 00:14:20.297 CXX test/cpp_headers/nbd.o 00:14:20.297 CXX test/cpp_headers/notify.o 00:14:20.297 CC 
test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:14:20.297 LINK bdev_raid_sb_ut 00:14:20.297 CXX test/cpp_headers/nvme.o 00:14:20.557 LINK ioat_ut 00:14:20.557 CXX test/cpp_headers/nvme_intel.o 00:14:20.557 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:14:20.557 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:14:20.557 LINK app_ut 00:14:20.557 CXX test/cpp_headers/nvme_ocssd.o 00:14:20.557 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:14:20.816 CXX test/cpp_headers/nvme_ocssd_spec.o 00:14:20.816 CXX test/cpp_headers/nvme_spec.o 00:14:20.816 LINK part_ut 00:14:21.076 CXX test/cpp_headers/nvme_zns.o 00:14:21.076 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:14:21.076 LINK vbdev_zone_block_ut 00:14:21.076 LINK concat_ut 00:14:21.076 CXX test/cpp_headers/nvmf.o 00:14:21.076 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:14:21.076 LINK bdev_raid_ut 00:14:21.076 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:14:21.335 CXX test/cpp_headers/nvmf_cmd.o 00:14:21.335 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:14:21.335 CXX test/cpp_headers/nvmf_fc_spec.o 00:14:21.335 LINK reactor_ut 00:14:21.593 LINK raid1_ut 00:14:21.593 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:14:21.593 LINK jsonrpc_server_ut 00:14:21.593 CXX test/cpp_headers/nvmf_spec.o 00:14:21.593 CXX test/cpp_headers/nvmf_transport.o 00:14:21.593 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:14:21.593 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:14:21.853 CXX test/cpp_headers/opal.o 00:14:21.853 LINK conn_ut 00:14:21.853 CXX test/cpp_headers/opal_spec.o 00:14:22.111 LINK init_grp_ut 00:14:22.111 CXX test/cpp_headers/pci_ids.o 00:14:22.111 CC test/unit/lib/log/log.c/log_ut.o 00:14:22.111 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:14:22.111 CXX test/cpp_headers/pipe.o 00:14:22.370 CXX test/cpp_headers/queue.o 00:14:22.370 LINK log_ut 00:14:22.370 CXX test/cpp_headers/reduce.o 00:14:22.370 CC test/unit/lib/iscsi/param.c/param_ut.o 00:14:22.370 CXX test/cpp_headers/rpc.o 00:14:22.370 LINK bdev_ut 00:14:22.370 LINK json_util_ut 00:14:22.629 CXX test/cpp_headers/scheduler.o 00:14:22.629 LINK json_parse_ut 00:14:22.629 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:14:22.629 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:14:22.629 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:14:22.629 CXX test/cpp_headers/scsi.o 00:14:22.887 CXX test/cpp_headers/scsi_spec.o 00:14:22.887 CXX test/cpp_headers/sock.o 00:14:22.887 LINK param_ut 00:14:23.145 CXX test/cpp_headers/stdinc.o 00:14:23.145 LINK portal_grp_ut 00:14:23.145 CC test/unit/lib/notify/notify.c/notify_ut.o 00:14:23.145 CXX test/cpp_headers/string.o 00:14:23.145 CXX test/cpp_headers/thread.o 00:14:23.145 LINK bdev_ut 00:14:23.403 CXX test/cpp_headers/trace.o 00:14:23.403 LINK tgt_node_ut 00:14:23.403 LINK notify_ut 00:14:23.403 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:14:23.403 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:14:23.403 CXX test/cpp_headers/trace_parser.o 00:14:23.403 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:14:23.403 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:14:23.403 CXX test/cpp_headers/tree.o 00:14:23.661 CXX test/cpp_headers/ublk.o 00:14:23.661 CXX test/cpp_headers/util.o 00:14:23.661 LINK json_write_ut 00:14:23.661 CXX test/cpp_headers/uuid.o 00:14:23.920 LINK iscsi_ut 00:14:23.920 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:14:23.920 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:14:23.920 CXX test/cpp_headers/version.o 00:14:23.920 CXX 
test/cpp_headers/vfio_user_pci.o 00:14:23.920 CXX test/cpp_headers/vfio_user_spec.o 00:14:24.179 CXX test/cpp_headers/vhost.o 00:14:24.179 CXX test/cpp_headers/vmd.o 00:14:24.438 CXX test/cpp_headers/xor.o 00:14:24.438 LINK nvme_ut 00:14:24.438 LINK nvme_ctrlr_ocssd_cmd_ut 00:14:24.438 CXX test/cpp_headers/zipf.o 00:14:24.438 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:14:24.438 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:14:24.697 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:14:24.697 LINK lvol_ut 00:14:24.697 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:14:24.956 LINK nvme_ctrlr_cmd_ut 00:14:24.956 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:14:24.956 LINK dev_ut 00:14:25.215 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:14:25.215 LINK ctrlr_ut 00:14:25.215 LINK nvme_ns_ut 00:14:25.475 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:14:25.475 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:14:25.475 LINK scsi_ut 00:14:25.475 LINK blob_ut 00:14:25.475 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:14:25.475 LINK ctrlr_bdev_ut 00:14:25.764 LINK bdev_nvme_ut 00:14:25.764 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:14:25.764 LINK tcp_ut 00:14:25.764 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:14:25.764 CC test/unit/lib/sock/sock.c/sock_ut.o 00:14:25.764 LINK lun_ut 00:14:25.764 CC test/unit/lib/thread/thread.c/thread_ut.o 00:14:25.764 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:14:26.058 LINK nvme_ctrlr_ut 00:14:26.058 LINK ctrlr_discovery_ut 00:14:26.058 CC test/unit/lib/util/base64.c/base64_ut.o 00:14:26.316 CC test/unit/lib/sock/posix.c/posix_ut.o 00:14:26.317 LINK subsystem_ut 00:14:26.317 LINK nvmf_ut 00:14:26.317 LINK base64_ut 00:14:26.317 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:14:26.317 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:14:26.575 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:14:26.575 LINK posix_ut 00:14:26.834 LINK scsi_bdev_ut 00:14:26.834 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:14:26.834 LINK sock_ut 00:14:26.835 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:14:27.092 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:14:27.092 LINK nvme_ns_ocssd_cmd_ut 00:14:27.092 LINK iobuf_ut 00:14:27.092 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:14:27.092 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:14:27.092 LINK nvme_pcie_ut 00:14:27.092 LINK bit_array_ut 00:14:27.350 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:14:27.350 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:14:27.350 LINK thread_ut 00:14:27.350 LINK crc16_ut 00:14:27.350 LINK scsi_pr_ut 00:14:27.350 LINK cpuset_ut 00:14:27.350 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:14:27.609 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:14:27.609 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:14:27.609 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:14:27.609 LINK pci_event_ut 00:14:27.609 LINK nvme_ns_cmd_ut 00:14:27.609 LINK crc32_ieee_ut 00:14:27.609 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:14:27.609 LINK nvme_poll_group_ut 00:14:27.609 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:14:27.609 CC test/unit/lib/util/dif.c/dif_ut.o 00:14:27.609 LINK crc64_ut 00:14:27.609 LINK crc32c_ut 00:14:27.869 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:14:27.869 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:14:27.869 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:14:27.869 LINK nvme_quirks_ut 00:14:27.869 LINK subsystem_ut 00:14:27.869 CC 
test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:14:27.869 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:14:28.127 LINK rpc_ut 00:14:28.127 LINK rpc_ut 00:14:28.127 LINK nvme_qpair_ut 00:14:28.127 CC test/unit/lib/util/iov.c/iov_ut.o 00:14:28.127 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:14:28.127 LINK idxd_user_ut 00:14:28.386 CC test/unit/lib/util/math.c/math_ut.o 00:14:28.386 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:14:28.386 LINK math_ut 00:14:28.645 CC test/unit/lib/rdma/common.c/common_ut.o 00:14:28.645 LINK nvme_transport_ut 00:14:28.645 LINK iov_ut 00:14:28.645 LINK nvme_io_msg_ut 00:14:28.645 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:14:28.645 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:14:28.645 CC test/unit/lib/util/string.c/string_ut.o 00:14:28.645 LINK rdma_ut 00:14:28.904 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:14:28.904 LINK transport_ut 00:14:28.904 LINK common_ut 00:14:28.904 LINK idxd_ut 00:14:28.904 CC test/unit/lib/util/xor.c/xor_ut.o 00:14:28.904 LINK string_ut 00:14:28.904 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:14:29.163 LINK pipe_ut 00:14:29.421 LINK nvme_fabric_ut 00:14:29.421 LINK xor_ut 00:14:29.421 LINK nvme_tcp_ut 00:14:29.421 LINK nvme_pcie_common_ut 00:14:29.421 LINK nvme_opal_ut 00:14:30.358 LINK nvme_rdma_ut 00:14:30.617 LINK dif_ut 00:14:30.617 19:15:07 -- spdk/autopackage.sh@38 -- $ gmake -j10 clean 00:14:30.877 gmake[1]: Nothing to be done for 'clean'. 00:14:30.877 ps: stdin: not a terminal 00:14:34.164 gmake[2]: Nothing to be done for 'clean'. 00:14:34.422 19:15:11 -- spdk/autopackage.sh@40 -- $ timing_exit build_release 00:14:34.422 19:15:11 -- common/autotest_common.sh@716 -- $ xtrace_disable 00:14:34.422 19:15:11 -- common/autotest_common.sh@10 -- $ set +x 00:14:34.422 19:15:11 -- spdk/autopackage.sh@42 -- $ timing_finish 00:14:34.422 19:15:11 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:14:34.422 19:15:11 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:14:34.422 + [[ -n 1267 ]] 00:14:34.422 + sudo kill 1267 00:14:34.432 [Pipeline] } 00:14:34.453 [Pipeline] // timeout 00:14:34.458 [Pipeline] } 00:14:34.476 [Pipeline] // stage 00:14:34.481 [Pipeline] } 00:14:34.499 [Pipeline] // catchError 00:14:34.508 [Pipeline] stage 00:14:34.510 [Pipeline] { (Stop VM) 00:14:34.524 [Pipeline] sh 00:14:34.806 + vagrant halt 00:14:38.998 ==> default: Halting domain... 00:15:00.948 [Pipeline] sh 00:15:01.229 + vagrant destroy -f 00:15:04.516 ==> default: Removing domain... 00:15:04.528 [Pipeline] sh 00:15:04.811 + mv output /var/jenkins/workspace/freebsd-vg-autotest/output 00:15:04.821 [Pipeline] } 00:15:04.839 [Pipeline] // stage 00:15:04.845 [Pipeline] } 00:15:04.866 [Pipeline] // dir 00:15:04.871 [Pipeline] } 00:15:04.889 [Pipeline] // wrap 00:15:04.895 [Pipeline] } 00:15:04.911 [Pipeline] // catchError 00:15:04.920 [Pipeline] stage 00:15:04.922 [Pipeline] { (Epilogue) 00:15:04.937 [Pipeline] sh 00:15:05.220 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:15:05.232 [Pipeline] catchError 00:15:05.234 [Pipeline] { 00:15:05.248 [Pipeline] sh 00:15:05.529 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:15:05.788 Artifacts sizes are good 00:15:05.797 [Pipeline] } 00:15:05.815 [Pipeline] // catchError 00:15:05.826 [Pipeline] archiveArtifacts 00:15:05.833 Archiving artifacts 00:15:05.871 [Pipeline] cleanWs 00:15:05.883 [WS-CLEANUP] Deleting project workspace... 
00:15:05.883 [WS-CLEANUP] Deferred wipeout is used... 00:15:05.890 [WS-CLEANUP] done 00:15:05.892 [Pipeline] } 00:15:05.911 [Pipeline] // stage 00:15:05.916 [Pipeline] } 00:15:05.933 [Pipeline] // node 00:15:05.939 [Pipeline] End of Pipeline 00:15:05.996 Finished: SUCCESS