00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1819
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3080
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.133 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.134 The recommended git tool is: git
00:00:00.134 using credential 00000000-0000-0000-0000-000000000002
00:00:00.136 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.168 Fetching changes from the remote Git repository
00:00:00.170 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.206 Using shallow fetch with depth 1
00:00:00.206 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.206 > git --version # timeout=10
00:00:00.230 > git --version # 'git version 2.39.2'
00:00:00.230 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.231 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.231 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.857 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.868 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.878 Checking out Revision 71481c63295b6b9f0ecef6c6e69e033a6109160a (FETCH_HEAD)
00:00:07.878 > git config core.sparsecheckout # timeout=10
00:00:07.888 > git read-tree -mu HEAD # timeout=10
00:00:07.904 > git checkout -f 71481c63295b6b9f0ecef6c6e69e033a6109160a # timeout=5
00:00:07.923 Commit message: "jenkins/jjb-config: Disable bsc job until further notice"
00:00:07.923 > git rev-list --no-walk 71481c63295b6b9f0ecef6c6e69e033a6109160a # timeout=10
00:00:08.002 [Pipeline] Start of Pipeline
00:00:08.013 [Pipeline] library
00:00:08.014 Loading library shm_lib@master
00:00:08.015 Library shm_lib@master is cached. Copying from home.
00:00:08.029 [Pipeline] node
00:00:08.040 Running on VM-host-WFP7 in /var/jenkins/workspace/freebsd-vg-autotest
00:00:08.042 [Pipeline] {
00:00:08.055 [Pipeline] catchError
00:00:08.056 [Pipeline] {
00:00:08.069 [Pipeline] wrap
00:00:08.078 [Pipeline] {
00:00:08.086 [Pipeline] stage
00:00:08.088 [Pipeline] { (Prologue)
00:00:08.109 [Pipeline] echo
00:00:08.111 Node: VM-host-WFP7
00:00:08.117 [Pipeline] cleanWs
00:00:08.126 [WS-CLEANUP] Deleting project workspace...
00:00:08.126 [WS-CLEANUP] Deferred wipeout is used...
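The checkout above boils down to a shallow fetch of the job-config repository (jbp), pinned to one exact revision. A minimal sketch of reproducing it by hand, reusing the URL and SHA from the log (the target directory name is arbitrary):

    # Shallow-fetch master and pin the working tree to the revision Jenkins used.
    git init jbp && cd jbp
    git fetch --tags --force --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f 71481c63295b6b9f0ecef6c6e69e033a6109160a
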
00:00:08.132 [WS-CLEANUP] done
00:00:08.285 [Pipeline] setCustomBuildProperty
00:00:08.343 [Pipeline] nodesByLabel
00:00:08.344 Found a total of 1 nodes with the 'sorcerer' label
00:00:08.352 [Pipeline] httpRequest
00:00:08.356 HttpMethod: GET
00:00:08.357 URL: http://10.211.164.101/packages/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:08.357 Sending request to url: http://10.211.164.101/packages/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:08.381 Response Code: HTTP/1.1 200 OK
00:00:08.382 Success: Status code 200 is in the accepted range: 200,404
00:00:08.382 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:32.257 [Pipeline] sh
00:00:32.542 + tar --no-same-owner -xf jbp_71481c63295b6b9f0ecef6c6e69e033a6109160a.tar.gz
00:00:32.563 [Pipeline] httpRequest
00:00:32.568 HttpMethod: GET
00:00:32.568 URL: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:00:32.569 Sending request to url: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:00:32.582 Response Code: HTTP/1.1 200 OK
00:00:32.583 Success: Status code 200 is in the accepted range: 200,404
00:00:32.583 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:01:25.183 [Pipeline] sh
00:01:25.466 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz
00:01:28.063 [Pipeline] sh
00:01:28.345 + git -C spdk log --oneline -n5
00:01:28.345 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset
00:01:28.345 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function
00:01:28.345 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover
00:01:28.345 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair`
00:01:28.345 3b33f4333 test/nvme/cuse: Fix typo
00:01:28.365 [Pipeline] writeFile
00:01:28.382 [Pipeline] sh
00:01:28.667 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:28.679 [Pipeline] sh
00:01:28.957 + cat autorun-spdk.conf
00:01:28.957 SPDK_TEST_UNITTEST=1
00:01:28.957 SPDK_RUN_VALGRIND=0
00:01:28.957 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.957 SPDK_TEST_NVME=1
00:01:28.957 SPDK_TEST_BLOCKDEV=1
00:01:28.957 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:28.963 RUN_NIGHTLY=1
00:01:28.965 [Pipeline] }
00:01:28.982 [Pipeline] // stage
00:01:28.997 [Pipeline] stage
00:01:28.999 [Pipeline] { (Run VM)
00:01:29.014 [Pipeline] sh
00:01:29.296 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:29.296 + echo 'Start stage prepare_nvme.sh'
00:01:29.296 Start stage prepare_nvme.sh
00:01:29.296 + [[ -n 2 ]]
00:01:29.296 + disk_prefix=ex2
00:01:29.296 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest ]]
00:01:29.296 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf ]]
00:01:29.296 + source /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf
00:01:29.296 ++ SPDK_TEST_UNITTEST=1
00:01:29.296 ++ SPDK_RUN_VALGRIND=0
00:01:29.296 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.296 ++ SPDK_TEST_NVME=1
00:01:29.296 ++ SPDK_TEST_BLOCKDEV=1
00:01:29.296 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:29.296 ++ RUN_NIGHTLY=1
00:01:29.296 + cd /var/jenkins/workspace/freebsd-vg-autotest
00:01:29.296 + nvme_files=()
00:01:29.296 + declare -A nvme_files
00:01:29.296 + backend_dir=/var/lib/libvirt/images/backends
00:01:29.296 + nvme_files['nvme.img']=5G
00:01:29.296 + nvme_files['nvme-cmb.img']=5G
00:01:29.296 + nvme_files['nvme-multi0.img']=4G
00:01:29.296 + nvme_files['nvme-multi1.img']=4G
00:01:29.296 + nvme_files['nvme-multi2.img']=4G
00:01:29.296 + nvme_files['nvme-openstack.img']=8G
00:01:29.296 + nvme_files['nvme-zns.img']=5G
00:01:29.296 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:29.296 + (( SPDK_TEST_FTL == 1 ))
00:01:29.296 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:29.296 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:29.296 + for nvme in "${!nvme_files[@]}"
00:01:29.296 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:29.296 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.296 + for nvme in "${!nvme_files[@]}"
00:01:29.296 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:29.296 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.296 + for nvme in "${!nvme_files[@]}"
00:01:29.296 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:29.296 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:29.296 + for nvme in "${!nvme_files[@]}"
00:01:29.296 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:29.296 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.296 + for nvme in "${!nvme_files[@]}"
00:01:29.296 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:29.296 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.296 + for nvme in "${!nvme_files[@]}"
00:01:29.296 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:29.296 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.296 + for nvme in "${!nvme_files[@]}"
00:01:29.296 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:29.296 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.556 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:29.556 + echo 'End stage prepare_nvme.sh'
00:01:29.556 End stage prepare_nvme.sh
00:01:29.567 [Pipeline] sh
00:01:29.846 + DISTRO=freebsd13 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:29.846 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -H -a -v -f freebsd13
00:01:29.846
00:01:29.846 DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant
00:01:29.846 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk
00:01:29.846 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest
00:01:29.846 HELP=0
00:01:29.846 DRY_RUN=0
00:01:29.846 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,
00:01:29.846 NVME_DISKS_TYPE=nvme,
00:01:29.846 NVME_AUTO_CREATE=0
00:01:29.846 NVME_DISKS_NAMESPACES=,
00:01:29.846 NVME_CMB=,
00:01:29.846 NVME_PMR=,
00:01:29.846 NVME_ZNS=,
00:01:29.846 NVME_MS=,
00:01:29.846 NVME_FDP=,
00:01:29.846 SPDK_VAGRANT_DISTRO=freebsd13
00:01:29.846 SPDK_VAGRANT_VMCPU=10
00:01:29.846 SPDK_VAGRANT_VMRAM=12288
00:01:29.846 SPDK_VAGRANT_PROVIDER=libvirt
00:01:29.846 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:29.846 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:29.846 SPDK_OPENSTACK_NETWORK=0
00:01:29.846 VAGRANT_PACKAGE_BOX=0
00:01:29.846 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:29.846 FORCE_DISTRO=true
00:01:29.846 VAGRANT_BOX_VERSION=
00:01:29.846 EXTRA_VAGRANTFILES=
00:01:29.846 NIC_MODEL=virtio
00:01:29.846
00:01:29.846 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt'
00:01:29.846 /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt /var/jenkins/workspace/freebsd-vg-autotest
00:01:31.777 Bringing machine 'default' up with 'libvirt' provider...
00:01:32.343 ==> default: Creating image (snapshot of base box volume).
00:01:32.603 ==> default: Creating domain with the following settings...
00:01:32.603 ==> default:  -- Name:              freebsd13-13.2-RELEASE-1712646987-2220_default_1715579620_6aa3ea82faeb8ab3e319
00:01:32.603 ==> default:  -- Domain type:       kvm
00:01:32.603 ==> default:  -- Cpus:              10
00:01:32.603 ==> default:  -- Feature:           acpi
00:01:32.603 ==> default:  -- Feature:           apic
00:01:32.603 ==> default:  -- Feature:           pae
00:01:32.603 ==> default:  -- Memory:            12288M
00:01:32.603 ==> default:  -- Memory Backing:    hugepages:
00:01:32.603 ==> default:  -- Management MAC:
00:01:32.603 ==> default:  -- Loader:
00:01:32.603 ==> default:  -- Nvram:
00:01:32.603 ==> default:  -- Base box:          spdk/freebsd13
00:01:32.603 ==> default:  -- Storage pool:      default
00:01:32.603 ==> default:  -- Image:             /var/lib/libvirt/images/freebsd13-13.2-RELEASE-1712646987-2220_default_1715579620_6aa3ea82faeb8ab3e319.img (32G)
00:01:32.603 ==> default:  -- Volume Cache:      default
00:01:32.603 ==> default:  -- Kernel:
00:01:32.603 ==> default:  -- Initrd:
00:01:32.603 ==> default:  -- Graphics Type:     vnc
00:01:32.603 ==> default:  -- Graphics Port:     -1
00:01:32.603 ==> default:  -- Graphics IP:       127.0.0.1
00:01:32.603 ==> default:  -- Graphics Password: Not defined
00:01:32.603 ==> default:  -- Video Type:        cirrus
00:01:32.603 ==> default:  -- Video VRAM:        9216
00:01:32.603 ==> default:  -- Sound Type:
00:01:32.603 ==> default:  -- Keymap:            en-us
00:01:32.603 ==> default:  -- TPM Path:
00:01:32.603 ==> default:  -- INPUT:             type=mouse, bus=ps2
00:01:32.603 ==> default:  -- Command line args:
00:01:32.603 ==> default:    -> value=-device,
00:01:32.603 ==> default:    -> value=nvme,id=nvme-0,serial=12340,
00:01:32.603 ==> default:    -> value=-drive,
00:01:32.603 ==> default:    -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:01:32.603 ==> default:    -> value=-device,
00:01:32.603 ==> default:    -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:32.603 ==> default: Creating shared folders metadata...
00:01:32.603 ==> default: Starting domain.
00:01:34.509 ==> default: Waiting for domain to get an IP address...
00:01:56.462 ==> default: Waiting for SSH to become available...
00:02:08.699 ==> default: Configuring and enabling network interfaces...
00:02:11.992 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:17.269 ==> default: Mounting SSHFS shared folder...
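For reference, the "-> value=" arguments assembled above amount to the following qemu fragment. This is a sketch: the NVMe-specific arguments are copied verbatim from the log, and all unrelated options are elided with "...".

    # Attach the raw backing file as an emulated NVMe controller with one
    # namespace using 4096-byte logical/physical blocks.
    qemu-system-x86_64 ... \
        -device nvme,id=nvme-0,serial=12340 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
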
00:02:18.205 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt/output => /home/vagrant/spdk_repo/output
00:02:18.205 ==> default: Checking Mount..
00:02:18.774 ==> default: Folder Successfully Mounted!
00:02:18.774 ==> default: Running provisioner: file...
00:02:19.343     default: ~/.gitconfig => .gitconfig
00:02:19.602
00:02:19.602 SUCCESS!
00:02:19.602
00:02:19.602 cd to /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt and type "vagrant ssh" to use.
00:02:19.602 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:19.602 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt" to destroy all trace of vm.
00:02:19.602
00:02:19.611 [Pipeline] }
00:02:19.628 [Pipeline] // stage
00:02:19.636 [Pipeline] dir
00:02:19.637 Running in /var/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt
00:02:19.638 [Pipeline] {
00:02:19.653 [Pipeline] catchError
00:02:19.655 [Pipeline] {
00:02:19.669 [Pipeline] sh
00:02:19.950 + vagrant ssh-config --host vagrant
00:02:19.950 + sed -ne /^Host/,$p
00:02:19.950 + tee ssh_conf
00:02:22.486 Host vagrant
00:02:22.486   HostName 192.168.121.20
00:02:22.486   User vagrant
00:02:22.486   Port 22
00:02:22.486   UserKnownHostsFile /dev/null
00:02:22.486   StrictHostKeyChecking no
00:02:22.486   PasswordAuthentication no
00:02:22.486   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd13/13.2-RELEASE-1712646987-2220/libvirt/freebsd13
00:02:22.486   IdentitiesOnly yes
00:02:22.486   LogLevel FATAL
00:02:22.486   ForwardAgent yes
00:02:22.486   ForwardX11 yes
00:02:22.486
00:02:22.499 [Pipeline] withEnv
00:02:22.501 [Pipeline] {
00:02:22.517 [Pipeline] sh
00:02:22.824 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:22.824 source /etc/os-release
00:02:22.824 [[ -e /image.version ]] && img=$(< /image.version)
00:02:22.824 # Minimal, systemd-like check.
00:02:22.824 if [[ -e /.dockerenv ]]; then
00:02:22.824   # Clear garbage from the node's name:
00:02:22.824   #  agt-er_autotest_547-896 -> autotest_547-896
00:02:22.824   #  $HOSTNAME is the actual container id
00:02:22.824   agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:22.824   if mountpoint -q /etc/hostname; then
00:02:22.824     # We can assume this is a mount from a host where container is running,
00:02:22.824     # so fetch its hostname to easily identify the target swarm worker.
00:02:22.824     container="$(< /etc/hostname) ($agent)"
00:02:22.824   else
00:02:22.824     # Fallback
00:02:22.824     container=$agent
00:02:22.824   fi
00:02:22.824 fi
00:02:22.824 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:22.824
00:02:22.850 [Pipeline] }
00:02:22.870 [Pipeline] // withEnv
00:02:22.880 [Pipeline] setCustomBuildProperty
00:02:22.895 [Pipeline] stage
00:02:22.897 [Pipeline] { (Tests)
00:02:22.919 [Pipeline] sh
00:02:23.198 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:23.215 [Pipeline] timeout
00:02:23.215 Timeout set to expire in 1 hr 0 min
00:02:23.217 [Pipeline] {
00:02:23.235 [Pipeline] sh
00:02:23.513 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:24.079 HEAD is now at 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset
00:02:24.093 [Pipeline] sh
00:02:24.371 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:24.386 [Pipeline] sh
00:02:24.664 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:24.681 [Pipeline] sh
00:02:24.960 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang ./autoruner.sh spdk_repo
00:02:24.960 ++ readlink -f spdk_repo
00:02:24.960 + DIR_ROOT=/usr/home/vagrant/spdk_repo
00:02:24.960 + [[ -n /usr/home/vagrant/spdk_repo ]]
00:02:24.960 + DIR_SPDK=/usr/home/vagrant/spdk_repo/spdk
00:02:24.960 + DIR_OUTPUT=/usr/home/vagrant/spdk_repo/output
00:02:24.960 + [[ -d /usr/home/vagrant/spdk_repo/spdk ]]
00:02:24.960 + [[ ! -d /usr/home/vagrant/spdk_repo/output ]]
00:02:24.960 + [[ -d /usr/home/vagrant/spdk_repo/output ]]
00:02:24.960 + cd /usr/home/vagrant/spdk_repo
00:02:24.960 + source /etc/os-release
00:02:24.960 ++ NAME=FreeBSD
00:02:24.960 ++ VERSION=13.2-RELEASE
00:02:24.960 ++ VERSION_ID=13.2
00:02:24.960 ++ ID=freebsd
00:02:24.960 ++ ANSI_COLOR='0;31'
00:02:24.960 ++ PRETTY_NAME='FreeBSD 13.2-RELEASE'
00:02:24.960 ++ CPE_NAME=cpe:/o:freebsd:freebsd:13.2
00:02:24.960 ++ HOME_URL=https://FreeBSD.org/
00:02:24.960 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/
00:02:24.960 + uname -a
00:02:24.960 FreeBSD freebsd-cloud-1712646987-2220.local 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64
00:02:24.960 + sudo /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:25.218 Contigmem (not present)
00:02:25.219 Buffer Size: not set
00:02:25.219 Num Buffers: not set
00:02:25.219
00:02:25.219
00:02:25.219 Type     BDF        Vendor  Device  Driver
00:02:25.219 NVMe     0:0:6:0    0x1b36  0x0010  nvme0
00:02:25.219 + rm -f /tmp/spdk-ld-path
00:02:25.219 + source autorun-spdk.conf
00:02:25.219 ++ SPDK_TEST_UNITTEST=1
00:02:25.219 ++ SPDK_RUN_VALGRIND=0
00:02:25.219 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:25.219 ++ SPDK_TEST_NVME=1
00:02:25.219 ++ SPDK_TEST_BLOCKDEV=1
00:02:25.219 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:25.219 ++ RUN_NIGHTLY=1
00:02:25.219 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:25.219 + [[ -n '' ]]
00:02:25.219 + sudo git config --global --add safe.directory /usr/home/vagrant/spdk_repo/spdk
00:02:25.219 + for M in /var/spdk/build-*-manifest.txt
00:02:25.219 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:25.219 + cp /var/spdk/build-pkg-manifest.txt /usr/home/vagrant/spdk_repo/output/
00:02:25.219 + for M in /var/spdk/build-*-manifest.txt
00:02:25.219 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:25.219 + cp /var/spdk/build-repo-manifest.txt /usr/home/vagrant/spdk_repo/output/
00:02:25.219 ++ uname
00:02:25.219 + [[ FreeBSD == \L\i\n\u\x ]]
00:02:25.219 + dmesg_pid=1257
00:02:25.219 + tail -F /var/log/messages
00:02:25.219 + [[ FreeBSD == FreeBSD ]]
00:02:25.219 + export LC_ALL=C LC_CTYPE=C
00:02:25.219 + LC_ALL=C
00:02:25.219 + LC_CTYPE=C
00:02:25.219 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:25.219 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:25.219 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:25.219 + [[ -x /usr/src/fio-static/fio ]]
00:02:25.219 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:25.219 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:25.219 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:25.219 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:02:25.219 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:25.219 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:25.219 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:25.219 + spdk/autorun.sh /usr/home/vagrant/spdk_repo/autorun-spdk.conf
00:02:25.219 Test configuration:
00:02:25.219 SPDK_TEST_UNITTEST=1
00:02:25.219 SPDK_RUN_VALGRIND=0
00:02:25.219 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:25.219 SPDK_TEST_NVME=1
00:02:25.219 SPDK_TEST_BLOCKDEV=1
00:02:25.219 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:25.478 RUN_NIGHTLY=1
05:54:33 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:25.478 05:54:33 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:25.478 05:54:33 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:25.478 05:54:33 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:25.478 05:54:33 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:02:25.478 05:54:33 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:02:25.478 05:54:33 -- paths/export.sh@4 -- $ export PATH
00:02:25.478 05:54:33 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:02:25.478 05:54:33 -- common/autobuild_common.sh@434 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output
00:02:25.478 05:54:33 -- common/autobuild_common.sh@435 -- $ date +%s
00:02:25.478 05:54:33 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715579673.XXXXXX
00:02:25.478 05:54:33 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715579673.XXXXXX.NMI18Cc6
00:02:25.478 05:54:33 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:02:25.478 05:54:33 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:02:25.478 05:54:33 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/'
00:02:25.478 05:54:33 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:25.478 05:54:33 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:25.478 05:54:33 -- common/autobuild_common.sh@451 -- $ get_config_params
00:02:25.478 05:54:33 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:02:25.478 05:54:33 -- common/autotest_common.sh@10 -- $ set +x
00:02:25.478 05:54:33 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio'
00:02:25.478 05:54:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:25.478 05:54:33 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:25.478 05:54:33 -- spdk/autobuild.sh@13 -- $ cd /usr/home/vagrant/spdk_repo/spdk
00:02:25.478 05:54:33 -- spdk/autobuild.sh@16 -- $ date -u
00:02:25.478 Mon May 13 05:54:33 UTC 2024
00:02:25.478 05:54:33 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:25.478 LTS-24-g36faa8c31
00:02:25.478 05:54:33 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:25.478 05:54:33 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']'
00:02:25.478 05:54:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:25.478 05:54:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:25.478 05:54:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:25.478 05:54:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:25.478 05:54:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:25.478 05:54:33 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:02:25.478 05:54:33 -- spdk/autobuild.sh@58 -- $ unittest_build
00:02:25.478 05:54:33 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build
00:02:25.478 05:54:33 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
00:02:25.478 05:54:33 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:02:25.478 05:54:33 -- common/autotest_common.sh@10 -- $ set +x
00:02:25.478 ************************************
00:02:25.478 START TEST unittest_build
00:02:25.478 ************************************
00:02:25.478 05:54:33 -- common/autotest_common.sh@1104 -- $ _unittest_build
00:02:25.478 05:54:33 -- common/autobuild_common.sh@402 -- $ /usr/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared
00:02:26.411 Notice: Vhost, rte_vhost library, virtio, and fuse
00:02:26.411 are only supported on Linux. Turning off default feature.
00:02:26.411 Using default SPDK env in /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:26.412 Using default DPDK in /usr/home/vagrant/spdk_repo/spdk/dpdk/build
00:02:26.978 RDMA_OPTION_ID_ACK_TIMEOUT is not supported
00:02:26.978 Using 'verbs' RDMA provider
00:02:39.452 Configuring ISA-L (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:51.662 Configuring ISA-L-crypto (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:51.662 Creating mk/config.mk...done.
00:02:51.662 Creating mk/cc.flags.mk...done.
00:02:51.662 Type 'gmake' to build.
00:02:51.662 05:54:59 -- common/autobuild_common.sh@403 -- $ gmake -j10
00:02:51.921 gmake[1]: Nothing to be done for 'all'.
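The unittest_build step above is just a configure-and-build of SPDK on the FreeBSD guest. A minimal sketch of the equivalent manual invocation, with the path and flags copied from the configure line in the log:

    # Configure and build SPDK the way unittest_build does. FreeBSD ships BSD
    # make as 'make', so SPDK's build is driven by 'gmake' (GNU make) instead.
    cd /usr/home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --without-shared
    gmake -j10
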
00:02:55.212 ps: stdin: not a terminal 00:02:59.410 The Meson build system 00:02:59.410 Version: 1.3.1 00:02:59.410 Source dir: /usr/home/vagrant/spdk_repo/spdk/dpdk 00:02:59.410 Build dir: /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:59.410 Build type: native build 00:02:59.410 Program cat found: YES (/bin/cat) 00:02:59.410 Project name: DPDK 00:02:59.410 Project version: 23.11.0 00:02:59.410 C compiler for the host machine: /usr/bin/clang (clang 14.0.5 "FreeBSD clang version 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386ae247c)") 00:02:59.410 C linker for the host machine: /usr/bin/clang ld.lld 14.0.5 00:02:59.410 Host machine cpu family: x86_64 00:02:59.410 Host machine cpu: x86_64 00:02:59.410 Message: ## Building in Developer Mode ## 00:02:59.410 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:02:59.410 Program check-symbols.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:59.410 Program options-ibverbs-static.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:59.410 Program python3 found: YES (/usr/local/bin/python3.9) 00:02:59.410 Program cat found: YES (/bin/cat) 00:02:59.410 Compiler for C supports arguments -march=native: YES 00:02:59.410 Checking for size of "void *" : 8 00:02:59.410 Checking for size of "void *" : 8 (cached) 00:02:59.410 Library m found: YES 00:02:59.410 Library numa found: NO 00:02:59.410 Library fdt found: NO 00:02:59.410 Library execinfo found: YES 00:02:59.410 Has header "execinfo.h" : YES 00:02:59.410 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.0.3 00:02:59.410 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:59.410 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:59.410 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:59.410 Run-time dependency openssl found: YES 3.0.13 00:02:59.410 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:59.410 Library pcap found: YES 00:02:59.410 Has header "pcap.h" with dependency -lpcap: YES 00:02:59.410 Compiler for C supports arguments -Wcast-qual: YES 00:02:59.410 Compiler for C supports arguments -Wdeprecated: YES 00:02:59.410 Compiler for C supports arguments -Wformat: YES 00:02:59.410 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:59.410 Compiler for C supports arguments -Wformat-security: YES 00:02:59.410 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:59.410 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:59.410 Compiler for C supports arguments -Wnested-externs: YES 00:02:59.410 Compiler for C supports arguments -Wold-style-definition: YES 00:02:59.410 Compiler for C supports arguments -Wpointer-arith: YES 00:02:59.410 Compiler for C supports arguments -Wsign-compare: YES 00:02:59.410 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:59.410 Compiler for C supports arguments -Wundef: YES 00:02:59.410 Compiler for C supports arguments -Wwrite-strings: YES 00:02:59.410 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:59.410 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:59.410 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:59.410 Compiler for C supports arguments -mavx512f: YES 00:02:59.410 Checking if "AVX512 checking" compiles: YES 00:02:59.410 Fetching value of define "__SSE4_2__" : 1 00:02:59.410 Fetching value of define "__AES__" : 1 00:02:59.410 Fetching value of define 
"__AVX__" : 1 00:02:59.410 Fetching value of define "__AVX2__" : 1 00:02:59.410 Fetching value of define "__AVX512BW__" : 1 00:02:59.410 Fetching value of define "__AVX512CD__" : 1 00:02:59.410 Fetching value of define "__AVX512DQ__" : 1 00:02:59.410 Fetching value of define "__AVX512F__" : 1 00:02:59.410 Fetching value of define "__AVX512VL__" : 1 00:02:59.410 Fetching value of define "__PCLMUL__" : 1 00:02:59.410 Fetching value of define "__RDRND__" : 1 00:02:59.411 Fetching value of define "__RDSEED__" : 1 00:02:59.411 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:59.411 Fetching value of define "__znver1__" : (undefined) 00:02:59.411 Fetching value of define "__znver2__" : (undefined) 00:02:59.411 Fetching value of define "__znver3__" : (undefined) 00:02:59.411 Fetching value of define "__znver4__" : (undefined) 00:02:59.411 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:59.411 Message: lib/log: Defining dependency "log" 00:02:59.411 Message: lib/kvargs: Defining dependency "kvargs" 00:02:59.411 Message: lib/telemetry: Defining dependency "telemetry" 00:02:59.411 Checking if "Detect argument count for CPU_OR" compiles: YES 00:02:59.411 Checking for function "getentropy" : YES 00:02:59.411 Message: lib/eal: Defining dependency "eal" 00:02:59.411 Message: lib/ring: Defining dependency "ring" 00:02:59.411 Message: lib/rcu: Defining dependency "rcu" 00:02:59.411 Message: lib/mempool: Defining dependency "mempool" 00:02:59.411 Message: lib/mbuf: Defining dependency "mbuf" 00:02:59.411 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:59.411 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:59.411 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:59.411 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:59.411 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:59.411 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:59.411 Compiler for C supports arguments -mpclmul: YES 00:02:59.411 Compiler for C supports arguments -maes: YES 00:02:59.411 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:59.411 Compiler for C supports arguments -mavx512bw: YES 00:02:59.411 Compiler for C supports arguments -mavx512dq: YES 00:02:59.411 Compiler for C supports arguments -mavx512vl: YES 00:02:59.411 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:59.411 Compiler for C supports arguments -mavx2: YES 00:02:59.411 Compiler for C supports arguments -mavx: YES 00:02:59.411 Message: lib/net: Defining dependency "net" 00:02:59.411 Message: lib/meter: Defining dependency "meter" 00:02:59.411 Message: lib/ethdev: Defining dependency "ethdev" 00:02:59.411 Message: lib/pci: Defining dependency "pci" 00:02:59.411 Message: lib/cmdline: Defining dependency "cmdline" 00:02:59.411 Message: lib/hash: Defining dependency "hash" 00:02:59.411 Message: lib/timer: Defining dependency "timer" 00:02:59.411 Message: lib/compressdev: Defining dependency "compressdev" 00:02:59.411 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:59.411 Message: lib/dmadev: Defining dependency "dmadev" 00:02:59.411 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:59.411 Message: lib/reorder: Defining dependency "reorder" 00:02:59.411 Message: lib/security: Defining dependency "security" 00:02:59.411 Has header "linux/userfaultfd.h" : NO 00:02:59.411 Has header "linux/vduse.h" : NO 00:02:59.411 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:59.411 Message: drivers/bus/pci: Defining 
dependency "bus_pci" 00:02:59.411 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:59.411 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:59.411 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:59.411 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:59.411 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:59.411 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:02:59.411 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:59.411 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:59.411 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:59.411 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:59.411 Configuring doxy-api-html.conf using configuration 00:02:59.411 Configuring doxy-api-man.conf using configuration 00:02:59.411 Program mandb found: NO 00:02:59.411 Program sphinx-build found: NO 00:02:59.411 Configuring rte_build_config.h using configuration 00:02:59.411 Message: 00:02:59.411 ================= 00:02:59.411 Applications Enabled 00:02:59.411 ================= 00:02:59.411 00:02:59.411 apps: 00:02:59.411 00:02:59.411 00:02:59.411 Message: 00:02:59.411 ================= 00:02:59.411 Libraries Enabled 00:02:59.411 ================= 00:02:59.411 00:02:59.411 libs: 00:02:59.411 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:59.411 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:59.411 cryptodev, dmadev, reorder, security, 00:02:59.411 00:02:59.411 Message: 00:02:59.411 =============== 00:02:59.411 Drivers Enabled 00:02:59.411 =============== 00:02:59.411 00:02:59.411 common: 00:02:59.411 00:02:59.411 bus: 00:02:59.411 pci, vdev, 00:02:59.411 mempool: 00:02:59.411 ring, 00:02:59.411 dma: 00:02:59.411 00:02:59.411 net: 00:02:59.411 00:02:59.411 crypto: 00:02:59.411 00:02:59.411 compress: 00:02:59.411 00:02:59.411 00:02:59.411 Message: 00:02:59.411 ================= 00:02:59.411 Content Skipped 00:02:59.411 ================= 00:02:59.411 00:02:59.411 apps: 00:02:59.411 dumpcap: explicitly disabled via build config 00:02:59.411 graph: explicitly disabled via build config 00:02:59.411 pdump: explicitly disabled via build config 00:02:59.411 proc-info: explicitly disabled via build config 00:02:59.411 test-acl: explicitly disabled via build config 00:02:59.411 test-bbdev: explicitly disabled via build config 00:02:59.411 test-cmdline: explicitly disabled via build config 00:02:59.411 test-compress-perf: explicitly disabled via build config 00:02:59.411 test-crypto-perf: explicitly disabled via build config 00:02:59.411 test-dma-perf: explicitly disabled via build config 00:02:59.411 test-eventdev: explicitly disabled via build config 00:02:59.411 test-fib: explicitly disabled via build config 00:02:59.411 test-flow-perf: explicitly disabled via build config 00:02:59.411 test-gpudev: explicitly disabled via build config 00:02:59.411 test-mldev: explicitly disabled via build config 00:02:59.411 test-pipeline: explicitly disabled via build config 00:02:59.411 test-pmd: explicitly disabled via build config 00:02:59.411 test-regex: explicitly disabled via build config 00:02:59.411 test-sad: explicitly disabled via build config 00:02:59.411 test-security-perf: explicitly disabled via build config 00:02:59.411 00:02:59.411 libs: 00:02:59.411 metrics: explicitly disabled via build config 00:02:59.411 acl: explicitly disabled via 
build config 00:02:59.411 bbdev: explicitly disabled via build config 00:02:59.411 bitratestats: explicitly disabled via build config 00:02:59.411 bpf: explicitly disabled via build config 00:02:59.411 cfgfile: explicitly disabled via build config 00:02:59.411 distributor: explicitly disabled via build config 00:02:59.411 efd: explicitly disabled via build config 00:02:59.411 eventdev: explicitly disabled via build config 00:02:59.411 dispatcher: explicitly disabled via build config 00:02:59.411 gpudev: explicitly disabled via build config 00:02:59.411 gro: explicitly disabled via build config 00:02:59.411 gso: explicitly disabled via build config 00:02:59.411 ip_frag: explicitly disabled via build config 00:02:59.411 jobstats: explicitly disabled via build config 00:02:59.411 latencystats: explicitly disabled via build config 00:02:59.411 lpm: explicitly disabled via build config 00:02:59.411 member: explicitly disabled via build config 00:02:59.411 pcapng: explicitly disabled via build config 00:02:59.411 power: only supported on Linux 00:02:59.411 rawdev: explicitly disabled via build config 00:02:59.411 regexdev: explicitly disabled via build config 00:02:59.411 mldev: explicitly disabled via build config 00:02:59.411 rib: explicitly disabled via build config 00:02:59.411 sched: explicitly disabled via build config 00:02:59.411 stack: explicitly disabled via build config 00:02:59.411 vhost: only supported on Linux 00:02:59.411 ipsec: explicitly disabled via build config 00:02:59.411 pdcp: explicitly disabled via build config 00:02:59.411 fib: explicitly disabled via build config 00:02:59.411 port: explicitly disabled via build config 00:02:59.411 pdump: explicitly disabled via build config 00:02:59.411 table: explicitly disabled via build config 00:02:59.411 pipeline: explicitly disabled via build config 00:02:59.411 graph: explicitly disabled via build config 00:02:59.411 node: explicitly disabled via build config 00:02:59.411 00:02:59.411 drivers: 00:02:59.411 common/cpt: not in enabled drivers build config 00:02:59.411 common/dpaax: not in enabled drivers build config 00:02:59.411 common/iavf: not in enabled drivers build config 00:02:59.411 common/idpf: not in enabled drivers build config 00:02:59.411 common/mvep: not in enabled drivers build config 00:02:59.411 common/octeontx: not in enabled drivers build config 00:02:59.411 bus/auxiliary: not in enabled drivers build config 00:02:59.411 bus/cdx: not in enabled drivers build config 00:02:59.411 bus/dpaa: not in enabled drivers build config 00:02:59.411 bus/fslmc: not in enabled drivers build config 00:02:59.411 bus/ifpga: not in enabled drivers build config 00:02:59.411 bus/platform: not in enabled drivers build config 00:02:59.411 bus/vmbus: not in enabled drivers build config 00:02:59.411 common/cnxk: not in enabled drivers build config 00:02:59.411 common/mlx5: not in enabled drivers build config 00:02:59.411 common/nfp: not in enabled drivers build config 00:02:59.411 common/qat: not in enabled drivers build config 00:02:59.411 common/sfc_efx: not in enabled drivers build config 00:02:59.411 mempool/bucket: not in enabled drivers build config 00:02:59.411 mempool/cnxk: not in enabled drivers build config 00:02:59.411 mempool/dpaa: not in enabled drivers build config 00:02:59.411 mempool/dpaa2: not in enabled drivers build config 00:02:59.411 mempool/octeontx: not in enabled drivers build config 00:02:59.411 mempool/stack: not in enabled drivers build config 00:02:59.411 dma/cnxk: not in enabled drivers build config 
00:02:59.411 dma/dpaa: not in enabled drivers build config 00:02:59.411 dma/dpaa2: not in enabled drivers build config 00:02:59.411 dma/hisilicon: not in enabled drivers build config 00:02:59.411 dma/idxd: not in enabled drivers build config 00:02:59.411 dma/ioat: not in enabled drivers build config 00:02:59.411 dma/skeleton: not in enabled drivers build config 00:02:59.411 net/af_packet: not in enabled drivers build config 00:02:59.411 net/af_xdp: not in enabled drivers build config 00:02:59.411 net/ark: not in enabled drivers build config 00:02:59.411 net/atlantic: not in enabled drivers build config 00:02:59.411 net/avp: not in enabled drivers build config 00:02:59.411 net/axgbe: not in enabled drivers build config 00:02:59.411 net/bnx2x: not in enabled drivers build config 00:02:59.411 net/bnxt: not in enabled drivers build config 00:02:59.411 net/bonding: not in enabled drivers build config 00:02:59.411 net/cnxk: not in enabled drivers build config 00:02:59.412 net/cpfl: not in enabled drivers build config 00:02:59.412 net/cxgbe: not in enabled drivers build config 00:02:59.412 net/dpaa: not in enabled drivers build config 00:02:59.412 net/dpaa2: not in enabled drivers build config 00:02:59.412 net/e1000: not in enabled drivers build config 00:02:59.412 net/ena: not in enabled drivers build config 00:02:59.412 net/enetc: not in enabled drivers build config 00:02:59.412 net/enetfec: not in enabled drivers build config 00:02:59.412 net/enic: not in enabled drivers build config 00:02:59.412 net/failsafe: not in enabled drivers build config 00:02:59.412 net/fm10k: not in enabled drivers build config 00:02:59.412 net/gve: not in enabled drivers build config 00:02:59.412 net/hinic: not in enabled drivers build config 00:02:59.412 net/hns3: not in enabled drivers build config 00:02:59.412 net/i40e: not in enabled drivers build config 00:02:59.412 net/iavf: not in enabled drivers build config 00:02:59.412 net/ice: not in enabled drivers build config 00:02:59.412 net/idpf: not in enabled drivers build config 00:02:59.412 net/igc: not in enabled drivers build config 00:02:59.412 net/ionic: not in enabled drivers build config 00:02:59.412 net/ipn3ke: not in enabled drivers build config 00:02:59.412 net/ixgbe: not in enabled drivers build config 00:02:59.412 net/mana: not in enabled drivers build config 00:02:59.412 net/memif: not in enabled drivers build config 00:02:59.412 net/mlx4: not in enabled drivers build config 00:02:59.412 net/mlx5: not in enabled drivers build config 00:02:59.412 net/mvneta: not in enabled drivers build config 00:02:59.412 net/mvpp2: not in enabled drivers build config 00:02:59.412 net/netvsc: not in enabled drivers build config 00:02:59.412 net/nfb: not in enabled drivers build config 00:02:59.412 net/nfp: not in enabled drivers build config 00:02:59.412 net/ngbe: not in enabled drivers build config 00:02:59.412 net/null: not in enabled drivers build config 00:02:59.412 net/octeontx: not in enabled drivers build config 00:02:59.412 net/octeon_ep: not in enabled drivers build config 00:02:59.412 net/pcap: not in enabled drivers build config 00:02:59.412 net/pfe: not in enabled drivers build config 00:02:59.412 net/qede: not in enabled drivers build config 00:02:59.412 net/ring: not in enabled drivers build config 00:02:59.412 net/sfc: not in enabled drivers build config 00:02:59.412 net/softnic: not in enabled drivers build config 00:02:59.412 net/tap: not in enabled drivers build config 00:02:59.412 net/thunderx: not in enabled drivers build config 00:02:59.412 
net/txgbe: not in enabled drivers build config 00:02:59.412 net/vdev_netvsc: not in enabled drivers build config 00:02:59.412 net/vhost: not in enabled drivers build config 00:02:59.412 net/virtio: not in enabled drivers build config 00:02:59.412 net/vmxnet3: not in enabled drivers build config 00:02:59.412 raw/*: missing internal dependency, "rawdev" 00:02:59.412 crypto/armv8: not in enabled drivers build config 00:02:59.412 crypto/bcmfs: not in enabled drivers build config 00:02:59.412 crypto/caam_jr: not in enabled drivers build config 00:02:59.412 crypto/ccp: not in enabled drivers build config 00:02:59.412 crypto/cnxk: not in enabled drivers build config 00:02:59.412 crypto/dpaa_sec: not in enabled drivers build config 00:02:59.412 crypto/dpaa2_sec: not in enabled drivers build config 00:02:59.412 crypto/ipsec_mb: not in enabled drivers build config 00:02:59.412 crypto/mlx5: not in enabled drivers build config 00:02:59.412 crypto/mvsam: not in enabled drivers build config 00:02:59.412 crypto/nitrox: not in enabled drivers build config 00:02:59.412 crypto/null: not in enabled drivers build config 00:02:59.412 crypto/octeontx: not in enabled drivers build config 00:02:59.412 crypto/openssl: not in enabled drivers build config 00:02:59.412 crypto/scheduler: not in enabled drivers build config 00:02:59.412 crypto/uadk: not in enabled drivers build config 00:02:59.412 crypto/virtio: not in enabled drivers build config 00:02:59.412 compress/isal: not in enabled drivers build config 00:02:59.412 compress/mlx5: not in enabled drivers build config 00:02:59.412 compress/octeontx: not in enabled drivers build config 00:02:59.412 compress/zlib: not in enabled drivers build config 00:02:59.412 regex/*: missing internal dependency, "regexdev" 00:02:59.412 ml/*: missing internal dependency, "mldev" 00:02:59.412 vdpa/*: missing internal dependency, "vhost" 00:02:59.412 event/*: missing internal dependency, "eventdev" 00:02:59.412 baseband/*: missing internal dependency, "bbdev" 00:02:59.412 gpu/*: missing internal dependency, "gpudev" 00:02:59.412 00:02:59.412 00:02:59.672 Build targets in project: 81 00:02:59.672 00:02:59.672 DPDK 23.11.0 00:02:59.672 00:02:59.672 User defined options 00:02:59.672 buildtype : debug 00:02:59.672 default_library : static 00:02:59.672 libdir : lib 00:02:59.672 prefix : / 00:02:59.672 c_args : -fPIC -Werror 00:02:59.672 c_link_args : 00:02:59.672 cpu_instruction_set: native 00:02:59.672 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:59.672 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:59.672 enable_docs : false 00:02:59.672 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:59.672 enable_kmods : true 00:02:59.672 tests : false 00:02:59.672 00:02:59.672 Found ninja-1.11.1 at /usr/local/bin/ninja 00:02:59.932 ninja: Entering directory `/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:59.932 [1/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:59.932 [2/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:59.932 [3/231] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:02:59.932 
[4/231] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:59.932 [5/231] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:59.932 [6/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:59.932 [7/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:00.192 [8/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:00.192 [9/231] Linking static target lib/librte_log.a 00:03:00.192 [10/231] Linking static target lib/librte_kvargs.a 00:03:00.192 [11/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:00.192 [12/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:00.452 [13/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:00.452 [14/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:00.452 [15/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:00.452 [16/231] Linking static target lib/librte_telemetry.a 00:03:00.452 [17/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:00.452 [18/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:00.452 [19/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:00.452 [20/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:00.452 [21/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:00.452 [22/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:00.712 [23/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:00.712 [24/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:00.712 [25/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:00.712 [26/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:00.712 [27/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:00.712 [28/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:00.712 [29/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:00.712 [30/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:00.712 [31/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:00.712 [32/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:00.712 [33/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:00.712 [34/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:00.712 [35/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:00.712 [36/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:00.972 [37/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:00.972 [38/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:00.972 [39/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:00.972 [40/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:00.972 [41/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:00.972 [42/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:00.972 [43/231] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:00.972 [44/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:00.972 [45/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:00.972 [46/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:00.972 [47/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:01.232 [48/231] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.232 [49/231] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:01.232 [50/231] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:01.232 [51/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:03:01.232 [52/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:01.232 [53/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:01.232 [54/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:03:01.232 [55/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:01.232 [56/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:01.232 [57/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:01.232 [58/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:01.232 [59/231] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.232 [60/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:01.491 [61/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:01.491 [62/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:03:01.491 [63/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:03:01.491 [64/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:01.491 [65/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:01.491 [66/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:03:01.491 [67/231] Linking target lib/librte_log.so.24.0 00:03:01.491 [68/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:03:01.491 [69/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:03:01.491 [70/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:03:01.491 [71/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:03:01.491 [72/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:03:01.753 [73/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:03:01.753 [74/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:01.753 [75/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:01.753 [76/231] Linking static target lib/librte_eal.a 00:03:01.753 [77/231] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:01.753 [78/231] Linking static target lib/librte_ring.a 00:03:01.753 [79/231] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:02.018 [80/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:02.018 [81/231] Linking target lib/librte_kvargs.so.24.0 00:03:02.018 [82/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:02.018 [83/231] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:02.018 [84/231] Linking static target 
lib/librte_rcu.a 00:03:02.018 [85/231] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:02.018 [86/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:02.018 [87/231] Linking static target lib/librte_mempool.a 00:03:02.018 [88/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:02.018 [89/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:02.018 [90/231] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:02.018 [91/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:02.018 [92/231] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.018 [93/231] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.277 [94/231] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.277 [95/231] Linking target lib/librte_telemetry.so.24.0 00:03:02.277 [96/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:02.277 [97/231] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:02.277 [98/231] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:02.277 [99/231] Linking static target lib/librte_mbuf.a 00:03:02.278 [100/231] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:02.278 [101/231] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:02.278 [102/231] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:02.278 [103/231] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:02.278 [104/231] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:02.278 [105/231] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:02.278 [106/231] Linking static target lib/librte_net.a 00:03:02.278 [107/231] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:02.278 [108/231] Linking static target lib/librte_meter.a 00:03:02.537 [109/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:02.537 [110/231] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.537 [111/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:02.537 [112/231] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.537 [113/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:02.537 [114/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:02.797 [115/231] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.797 [116/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:02.797 [117/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:02.797 [118/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:02.797 [119/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:02.797 [120/231] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:03.055 [121/231] Linking static target lib/librte_pci.a 00:03:03.055 [122/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:03.055 [123/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:03.055 [124/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:03.055 [125/231] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:03.055 [126/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:03.055 [127/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:03.055 [128/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:03.055 [129/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:03.055 [130/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:03.055 [131/231] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.055 [132/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:03.055 [133/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:03.055 [134/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:03.055 [135/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:03.055 [136/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:03.055 [137/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:03.313 [138/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:03.313 [139/231] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.313 [140/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:03.313 [141/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:03.314 [142/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:03.314 [143/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:03.314 [144/231] Linking static target lib/librte_cmdline.a 00:03:03.314 [145/231] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:03.314 [146/231] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:03.314 [147/231] Linking static target lib/librte_ethdev.a 00:03:03.314 [148/231] Linking static target lib/librte_timer.a 00:03:03.573 [149/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:03.573 [150/231] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:03.573 [151/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:03.573 [152/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:03.573 [153/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:03.573 [154/231] Linking static target lib/librte_compressdev.a 00:03:03.573 [155/231] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:03.573 [156/231] Linking static target lib/librte_hash.a 00:03:03.573 [157/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:03.573 [158/231] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.832 [159/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:03.832 [160/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:03.832 [161/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:03.832 [162/231] Linking static target lib/librte_dmadev.a 00:03:03.832 [163/231] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:03.832 [164/231] Linking static target lib/librte_reorder.a 00:03:03.832 [165/231] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:03:03.832 [166/231] Linking static target lib/librte_security.a 00:03:03.832 [167/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:03.832 [168/231] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.092 [169/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.092 [170/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.092 [171/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:04.092 [172/231] Linking static target lib/librte_cryptodev.a 00:03:04.092 [173/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:03:04.092 [174/231] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:04.092 [175/231] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.092 [176/231] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.092 [177/231] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.092 [178/231] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.092 [179/231] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.351 [180/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.351 [181/231] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.351 [182/231] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:04.351 [183/231] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.351 [184/231] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:04.351 [185/231] Linking static target drivers/librte_bus_pci.a 00:03:04.351 [186/231] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:04.351 [187/231] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:04.351 [188/231] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:04.351 [189/231] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.351 [190/231] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:04.351 [191/231] Linking static target drivers/librte_bus_vdev.a 00:03:04.622 [192/231] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.622 [193/231] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.622 [194/231] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:04.622 [195/231] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.622 [196/231] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:04.622 [197/231] Linking static target drivers/librte_mempool_ring.a 00:03:04.622 [198/231] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.562 [199/231] Generating kernel/freebsd/contigmem with a custom command 00:03:05.562 machine -> /usr/src/sys/amd64/include 00:03:05.562 x86 -> /usr/src/sys/x86/include 00:03:05.562 awk -f /usr/src/sys/tools/makeobjops.awk 
/usr/src/sys/kern/device_if.m -h 00:03:05.562 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:03:05.562 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:03:05.562 touch opt_global.h 00:03:05.562 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:03:05.562 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:03:05.562 :> export_syms 00:03:05.562 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:03:05.562 objcopy --strip-debug contigmem.ko 00:03:06.132 [200/231] Generating kernel/freebsd/nic_uio with a custom command 00:03:06.132 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:03:06.132 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:03:06.132 :> export_syms 00:03:06.132 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:03:06.132 objcopy --strip-debug nic_uio.ko 00:03:10.327 [201/231] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.562 [202/231] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.821 [203/231] Linking target lib/librte_eal.so.24.0 00:03:14.821 [204/231] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:14.821 [205/231] Linking target lib/librte_ring.so.24.0 00:03:14.821 [206/231] Linking target lib/librte_pci.so.24.0 00:03:14.821 [207/231] Linking target lib/librte_dmadev.so.24.0 00:03:14.821 [208/231] Linking target drivers/librte_bus_vdev.so.24.0 00:03:14.821 [209/231] Linking target lib/librte_timer.so.24.0 00:03:14.821 [210/231] Linking target lib/librte_meter.so.24.0 00:03:15.081 [211/231] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:15.081 [212/231] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:15.081 [213/231] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:15.081 [214/231] Linking target lib/librte_rcu.so.24.0 00:03:15.081 [215/231] Linking target lib/librte_mempool.so.24.0 00:03:15.081 [216/231] Linking target drivers/librte_bus_pci.so.24.0 00:03:15.081 [217/231] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:15.081 [218/231] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:15.081 [219/231] Linking target drivers/librte_mempool_ring.so.24.0 00:03:15.081 [220/231] Linking target lib/librte_mbuf.so.24.0 00:03:15.342 [221/231] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:15.342 [222/231] Linking target lib/librte_compressdev.so.24.0 00:03:15.342 [223/231] Linking target lib/librte_reorder.so.24.0 00:03:15.342 [224/231] Linking target lib/librte_net.so.24.0 00:03:15.342 [225/231] Linking target lib/librte_cryptodev.so.24.0 00:03:15.342 [226/231] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:15.342 [227/231] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:15.610 [228/231] Linking target 
lib/librte_security.so.24.0 00:03:15.610 [229/231] Linking target lib/librte_cmdline.so.24.0 00:03:15.610 [230/231] Linking target lib/librte_hash.so.24.0 00:03:15.610 [231/231] Linking target lib/librte_ethdev.so.24.0 00:03:15.610 INFO: autodetecting backend as ninja 00:03:15.610 INFO: calculating backend command to run: /usr/local/bin/ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:16.184 CC lib/log/log.o 00:03:16.184 CC lib/log/log_deprecated.o 00:03:16.184 CC lib/log/log_flags.o 00:03:16.184 CC lib/ut_mock/mock.o 00:03:16.184 CC lib/ut/ut.o 00:03:16.184 LIB libspdk_ut_mock.a 00:03:16.184 LIB libspdk_log.a 00:03:16.184 LIB libspdk_ut.a 00:03:16.459 CC lib/util/base64.o 00:03:16.459 CC lib/util/crc16.o 00:03:16.459 CC lib/util/bit_array.o 00:03:16.459 CC lib/util/cpuset.o 00:03:16.459 CC lib/util/crc32_ieee.o 00:03:16.459 CC lib/util/crc32.o 00:03:16.459 CC lib/util/crc32c.o 00:03:16.459 CC lib/ioat/ioat.o 00:03:16.459 CXX lib/trace_parser/trace.o 00:03:16.459 CC lib/dma/dma.o 00:03:16.459 CC lib/util/crc64.o 00:03:16.459 CC lib/util/dif.o 00:03:16.459 CC lib/util/fd.o 00:03:16.459 LIB libspdk_dma.a 00:03:16.459 CC lib/util/file.o 00:03:16.459 CC lib/util/hexlify.o 00:03:16.459 CC lib/util/iov.o 00:03:16.459 CC lib/util/math.o 00:03:16.459 CC lib/util/pipe.o 00:03:16.459 LIB libspdk_ioat.a 00:03:16.459 CC lib/util/strerror_tls.o 00:03:16.459 CC lib/util/string.o 00:03:16.459 CC lib/util/uuid.o 00:03:16.459 CC lib/util/fd_group.o 00:03:16.459 CC lib/util/xor.o 00:03:16.459 CC lib/util/zipf.o 00:03:16.718 LIB libspdk_util.a 00:03:16.718 CC lib/json/json_parse.o 00:03:16.718 CC lib/json/json_write.o 00:03:16.718 CC lib/json/json_util.o 00:03:16.718 CC lib/conf/conf.o 00:03:16.718 CC lib/vmd/vmd.o 00:03:16.718 CC lib/vmd/led.o 00:03:16.718 CC lib/rdma/common.o 00:03:16.718 CC lib/idxd/idxd.o 00:03:16.718 CC lib/env_dpdk/env.o 00:03:16.718 CC lib/rdma/rdma_verbs.o 00:03:16.978 CC lib/env_dpdk/memory.o 00:03:16.978 CC lib/env_dpdk/pci.o 00:03:16.978 LIB libspdk_conf.a 00:03:16.978 CC lib/idxd/idxd_user.o 00:03:16.978 LIB libspdk_json.a 00:03:16.978 CC lib/env_dpdk/init.o 00:03:16.978 CC lib/env_dpdk/threads.o 00:03:16.978 LIB libspdk_vmd.a 00:03:16.978 CC lib/env_dpdk/pci_ioat.o 00:03:16.978 LIB libspdk_rdma.a 00:03:16.978 CC lib/env_dpdk/pci_virtio.o 00:03:16.978 CC lib/env_dpdk/pci_vmd.o 00:03:16.978 LIB libspdk_idxd.a 00:03:16.978 CC lib/env_dpdk/pci_idxd.o 00:03:16.978 CC lib/env_dpdk/pci_event.o 00:03:16.978 CC lib/jsonrpc/jsonrpc_server.o 00:03:16.978 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:16.978 CC lib/jsonrpc/jsonrpc_client.o 00:03:16.978 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:16.978 CC lib/env_dpdk/sigbus_handler.o 00:03:16.978 CC lib/env_dpdk/pci_dpdk.o 00:03:16.978 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:16.978 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:16.978 LIB libspdk_jsonrpc.a 00:03:17.238 LIB libspdk_trace_parser.a 00:03:17.238 CC lib/rpc/rpc.o 00:03:17.238 LIB libspdk_env_dpdk.a 00:03:17.238 LIB libspdk_rpc.a 00:03:17.497 CC lib/trace/trace.o 00:03:17.497 CC lib/trace/trace_flags.o 00:03:17.497 CC lib/trace/trace_rpc.o 00:03:17.497 CC lib/notify/notify.o 00:03:17.497 CC lib/notify/notify_rpc.o 00:03:17.497 CC lib/sock/sock_rpc.o 00:03:17.497 CC lib/sock/sock.o 00:03:17.497 LIB libspdk_notify.a 00:03:17.497 LIB libspdk_trace.a 00:03:17.497 LIB libspdk_sock.a 00:03:17.757 CC lib/thread/iobuf.o 00:03:17.757 CC lib/thread/thread.o 00:03:17.757 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:17.757 CC lib/nvme/nvme_ctrlr.o 00:03:17.757 CC lib/nvme/nvme_fabric.o 00:03:17.757 CC 
lib/nvme/nvme_ns.o 00:03:17.757 CC lib/nvme/nvme_ns_cmd.o 00:03:17.757 CC lib/nvme/nvme_pcie_common.o 00:03:17.757 CC lib/nvme/nvme_pcie.o 00:03:17.757 CC lib/nvme/nvme_qpair.o 00:03:17.757 CC lib/nvme/nvme.o 00:03:17.757 LIB libspdk_thread.a 00:03:18.016 CC lib/nvme/nvme_quirks.o 00:03:18.016 CC lib/nvme/nvme_transport.o 00:03:18.016 CC lib/nvme/nvme_discovery.o 00:03:18.016 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:18.016 CC lib/accel/accel.o 00:03:18.016 CC lib/blob/blobstore.o 00:03:18.016 CC lib/accel/accel_rpc.o 00:03:18.016 CC lib/init/json_config.o 00:03:18.016 CC lib/accel/accel_sw.o 00:03:18.016 CC lib/init/subsystem.o 00:03:18.275 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:18.275 CC lib/init/subsystem_rpc.o 00:03:18.275 CC lib/blob/request.o 00:03:18.275 CC lib/nvme/nvme_tcp.o 00:03:18.275 CC lib/blob/zeroes.o 00:03:18.275 CC lib/init/rpc.o 00:03:18.275 LIB libspdk_accel.a 00:03:18.275 CC lib/nvme/nvme_opal.o 00:03:18.275 CC lib/nvme/nvme_io_msg.o 00:03:18.275 LIB libspdk_init.a 00:03:18.275 CC lib/blob/blob_bs_dev.o 00:03:18.275 CC lib/bdev/bdev.o 00:03:18.275 CC lib/bdev/bdev_rpc.o 00:03:18.275 CC lib/bdev/bdev_zone.o 00:03:18.275 CC lib/bdev/part.o 00:03:18.275 CC lib/event/app.o 00:03:18.275 CC lib/event/reactor.o 00:03:18.533 CC lib/nvme/nvme_poll_group.o 00:03:18.533 CC lib/event/log_rpc.o 00:03:18.533 CC lib/event/app_rpc.o 00:03:18.533 CC lib/nvme/nvme_zns.o 00:03:18.533 LIB libspdk_blob.a 00:03:18.533 CC lib/event/scheduler_static.o 00:03:18.533 CC lib/bdev/scsi_nvme.o 00:03:18.533 CC lib/nvme/nvme_cuse.o 00:03:18.533 LIB libspdk_event.a 00:03:18.533 CC lib/nvme/nvme_rdma.o 00:03:18.533 CC lib/blobfs/blobfs.o 00:03:18.533 CC lib/blobfs/tree.o 00:03:18.533 CC lib/lvol/lvol.o 00:03:18.792 LIB libspdk_blobfs.a 00:03:18.792 LIB libspdk_lvol.a 00:03:18.792 LIB libspdk_bdev.a 00:03:18.792 LIB libspdk_nvme.a 00:03:19.051 CC lib/scsi/dev.o 00:03:19.051 CC lib/scsi/lun.o 00:03:19.051 CC lib/scsi/port.o 00:03:19.051 CC lib/scsi/scsi.o 00:03:19.051 CC lib/scsi/scsi_bdev.o 00:03:19.051 CC lib/scsi/scsi_pr.o 00:03:19.051 CC lib/scsi/task.o 00:03:19.051 CC lib/scsi/scsi_rpc.o 00:03:19.051 CC lib/nvmf/ctrlr.o 00:03:19.051 CC lib/nvmf/subsystem.o 00:03:19.051 CC lib/nvmf/ctrlr_bdev.o 00:03:19.051 CC lib/nvmf/nvmf_rpc.o 00:03:19.051 CC lib/nvmf/transport.o 00:03:19.051 CC lib/nvmf/nvmf.o 00:03:19.051 CC lib/nvmf/ctrlr_discovery.o 00:03:19.051 CC lib/nvmf/tcp.o 00:03:19.051 CC lib/nvmf/rdma.o 00:03:19.051 LIB libspdk_scsi.a 00:03:19.051 CC lib/iscsi/conn.o 00:03:19.051 CC lib/iscsi/init_grp.o 00:03:19.051 CC lib/iscsi/iscsi.o 00:03:19.051 CC lib/iscsi/md5.o 00:03:19.310 CC lib/iscsi/param.o 00:03:19.310 CC lib/iscsi/portal_grp.o 00:03:19.310 CC lib/iscsi/tgt_node.o 00:03:19.310 CC lib/iscsi/iscsi_subsystem.o 00:03:19.310 CC lib/iscsi/iscsi_rpc.o 00:03:19.310 CC lib/iscsi/task.o 00:03:19.310 LIB libspdk_nvmf.a 00:03:19.570 LIB libspdk_iscsi.a 00:03:19.570 CC module/env_dpdk/env_dpdk_rpc.o 00:03:19.570 CC module/accel/error/accel_error.o 00:03:19.570 CC module/accel/error/accel_error_rpc.o 00:03:19.570 CC module/blob/bdev/blob_bdev.o 00:03:19.570 CC module/accel/iaa/accel_iaa.o 00:03:19.570 CC module/accel/iaa/accel_iaa_rpc.o 00:03:19.570 CC module/accel/ioat/accel_ioat.o 00:03:19.570 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:19.570 CC module/accel/dsa/accel_dsa.o 00:03:19.570 CC module/sock/posix/posix.o 00:03:19.829 LIB libspdk_env_dpdk_rpc.a 00:03:19.829 CC module/accel/ioat/accel_ioat_rpc.o 00:03:19.829 CC module/accel/dsa/accel_dsa_rpc.o 00:03:19.829 LIB libspdk_accel_error.a 
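Each "CC <path>.o" entry in this stretch is one compile step in SPDK's gmake build, and each "LIB libspdk_*.a" entry archives the finished objects into a static library. A minimal sketch of what one CC/LIB pair amounts to, with the source path inferred from the object name and the real build's many include and warning flags omitted:

    # sketch only: the actual build passes far more -I/-W/-D flags
    cc -Iinclude -c lib/log/log.c -o log.o    # "CC lib/log/log.o"
    ar rcs libspdk_log.a log.o                # "LIB libspdk_log.a"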
00:03:19.829 LIB libspdk_scheduler_dynamic.a 00:03:19.829 LIB libspdk_accel_iaa.a 00:03:19.829 LIB libspdk_accel_ioat.a 00:03:19.829 LIB libspdk_blob_bdev.a 00:03:19.829 LIB libspdk_accel_dsa.a 00:03:19.829 LIB libspdk_sock_posix.a 00:03:19.829 CC module/bdev/null/bdev_null.o 00:03:19.829 CC module/bdev/malloc/bdev_malloc.o 00:03:19.829 CC module/bdev/passthru/vbdev_passthru.o 00:03:19.829 CC module/bdev/error/vbdev_error.o 00:03:19.829 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:19.829 CC module/bdev/gpt/gpt.o 00:03:19.829 CC module/bdev/delay/vbdev_delay.o 00:03:19.829 CC module/bdev/nvme/bdev_nvme.o 00:03:19.829 CC module/bdev/lvol/vbdev_lvol.o 00:03:19.829 CC module/blobfs/bdev/blobfs_bdev.o 00:03:20.089 CC module/bdev/gpt/vbdev_gpt.o 00:03:20.089 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:20.089 CC module/bdev/null/bdev_null_rpc.o 00:03:20.089 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:20.089 CC module/bdev/error/vbdev_error_rpc.o 00:03:20.089 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:20.089 LIB libspdk_bdev_passthru.a 00:03:20.089 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:20.089 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:20.089 LIB libspdk_bdev_malloc.a 00:03:20.089 LIB libspdk_bdev_null.a 00:03:20.089 LIB libspdk_bdev_error.a 00:03:20.089 CC module/bdev/nvme/nvme_rpc.o 00:03:20.089 LIB libspdk_bdev_gpt.a 00:03:20.089 LIB libspdk_blobfs_bdev.a 00:03:20.089 CC module/bdev/nvme/bdev_mdns_client.o 00:03:20.089 LIB libspdk_bdev_delay.a 00:03:20.089 CC module/bdev/raid/bdev_raid.o 00:03:20.089 CC module/bdev/raid/bdev_raid_rpc.o 00:03:20.089 LIB libspdk_bdev_lvol.a 00:03:20.089 CC module/bdev/raid/bdev_raid_sb.o 00:03:20.089 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:20.089 CC module/bdev/split/vbdev_split.o 00:03:20.089 CC module/bdev/aio/bdev_aio.o 00:03:20.089 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:20.089 CC module/bdev/split/vbdev_split_rpc.o 00:03:20.089 CC module/bdev/aio/bdev_aio_rpc.o 00:03:20.089 CC module/bdev/raid/raid0.o 00:03:20.089 CC module/bdev/raid/raid1.o 00:03:20.089 CC module/bdev/raid/concat.o 00:03:20.089 LIB libspdk_bdev_split.a 00:03:20.089 LIB libspdk_bdev_zone_block.a 00:03:20.348 LIB libspdk_bdev_aio.a 00:03:20.348 LIB libspdk_bdev_nvme.a 00:03:20.348 LIB libspdk_bdev_raid.a 00:03:20.608 CC module/event/subsystems/vmd/vmd.o 00:03:20.608 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:20.608 CC module/event/subsystems/sock/sock.o 00:03:20.608 CC module/event/subsystems/iobuf/iobuf.o 00:03:20.608 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:20.608 CC module/event/subsystems/scheduler/scheduler.o 00:03:20.608 LIB libspdk_event_vmd.a 00:03:20.608 LIB libspdk_event_sock.a 00:03:20.608 LIB libspdk_event_iobuf.a 00:03:20.608 LIB libspdk_event_scheduler.a 00:03:20.867 CC module/event/subsystems/accel/accel.o 00:03:20.867 LIB libspdk_event_accel.a 00:03:21.126 CC module/event/subsystems/bdev/bdev.o 00:03:21.126 LIB libspdk_event_bdev.a 00:03:21.385 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:21.385 CC module/event/subsystems/scsi/scsi.o 00:03:21.385 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:21.385 LIB libspdk_event_scsi.a 00:03:21.385 LIB libspdk_event_nvmf.a 00:03:21.645 CC module/event/subsystems/iscsi/iscsi.o 00:03:21.905 LIB libspdk_event_iscsi.a 00:03:21.905 CXX app/trace/trace.o 00:03:21.905 CC examples/vmd/lsvmd/lsvmd.o 00:03:21.905 CC examples/ioat/perf/perf.o 00:03:21.905 CC examples/nvme/hello_world/hello_world.o 00:03:21.905 CC examples/accel/perf/accel_perf.o 00:03:21.905 CC 
examples/sock/hello_world/hello_sock.o 00:03:21.905 CC examples/blob/hello_world/hello_blob.o 00:03:21.905 CC examples/bdev/hello_world/hello_bdev.o 00:03:21.905 CC test/accel/dif/dif.o 00:03:21.905 CC examples/nvmf/nvmf/nvmf.o 00:03:21.905 LINK lsvmd 00:03:21.905 LINK ioat_perf 00:03:22.164 LINK hello_world 00:03:22.164 LINK hello_sock 00:03:22.164 LINK accel_perf 00:03:22.164 LINK hello_blob 00:03:22.164 LINK hello_bdev 00:03:22.164 LINK dif 00:03:22.164 LINK nvmf 00:03:22.164 CC examples/ioat/verify/verify.o 00:03:22.164 CC examples/vmd/led/led.o 00:03:22.164 CC examples/nvme/reconnect/reconnect.o 00:03:22.164 CC examples/blob/cli/blobcli.o 00:03:22.164 CC app/trace_record/trace_record.o 00:03:22.164 LINK led 00:03:22.164 LINK verify 00:03:22.164 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:22.164 LINK spdk_trace_record 00:03:22.164 CC examples/nvme/arbitration/arbitration.o 00:03:22.164 CC examples/bdev/bdevperf/bdevperf.o 00:03:22.164 LINK reconnect 00:03:22.424 CC examples/nvme/hotplug/hotplug.o 00:03:22.424 CC test/app/bdev_svc/bdev_svc.o 00:03:22.424 LINK blobcli 00:03:22.424 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:22.424 LINK spdk_trace 00:03:22.424 LINK nvme_manage 00:03:22.424 CC test/app/histogram_perf/histogram_perf.o 00:03:22.424 CC examples/util/zipf/zipf.o 00:03:22.424 LINK arbitration 00:03:22.424 LINK bdev_svc 00:03:22.424 LINK hotplug 00:03:22.424 CC app/nvmf_tgt/nvmf_main.o 00:03:22.424 LINK zipf 00:03:22.424 LINK histogram_perf 00:03:22.424 LINK nvme_fuzz 00:03:22.424 LINK bdevperf 00:03:22.424 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:22.424 CC app/iscsi_tgt/iscsi_tgt.o 00:03:22.424 LINK nvmf_tgt 00:03:22.424 CC test/bdev/bdevio/bdevio.o 00:03:22.424 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:22.424 CC app/spdk_tgt/spdk_tgt.o 00:03:22.684 CC test/blobfs/mkfs/mkfs.o 00:03:22.684 CC test/app/jsoncat/jsoncat.o 00:03:22.684 CC examples/thread/thread/thread_ex.o 00:03:22.684 CC examples/idxd/perf/perf.o 00:03:22.684 LINK cmb_copy 00:03:22.684 LINK iscsi_tgt 00:03:22.684 CC test/app/stub/stub.o 00:03:22.684 LINK jsoncat 00:03:22.684 LINK spdk_tgt 00:03:22.684 LINK bdevio 00:03:22.684 LINK mkfs 00:03:22.684 LINK idxd_perf 00:03:22.684 LINK thread 00:03:22.684 CC examples/nvme/abort/abort.o 00:03:22.684 LINK stub 00:03:22.684 CC app/spdk_lspci/spdk_lspci.o 00:03:22.684 TEST_HEADER include/spdk/accel.h 00:03:22.684 TEST_HEADER include/spdk/accel_module.h 00:03:22.684 TEST_HEADER include/spdk/assert.h 00:03:22.684 TEST_HEADER include/spdk/barrier.h 00:03:22.684 TEST_HEADER include/spdk/base64.h 00:03:22.684 TEST_HEADER include/spdk/bdev.h 00:03:22.684 TEST_HEADER include/spdk/bdev_module.h 00:03:22.684 TEST_HEADER include/spdk/bdev_zone.h 00:03:22.684 TEST_HEADER include/spdk/bit_array.h 00:03:22.684 TEST_HEADER include/spdk/bit_pool.h 00:03:22.684 TEST_HEADER include/spdk/blob.h 00:03:22.684 TEST_HEADER include/spdk/blob_bdev.h 00:03:22.685 TEST_HEADER include/spdk/blobfs.h 00:03:22.685 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:22.685 TEST_HEADER include/spdk/conf.h 00:03:22.685 TEST_HEADER include/spdk/config.h 00:03:22.685 CC app/spdk_nvme_perf/perf.o 00:03:22.685 LINK spdk_lspci 00:03:22.685 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:22.685 TEST_HEADER include/spdk/cpuset.h 00:03:22.685 TEST_HEADER include/spdk/crc16.h 00:03:22.685 TEST_HEADER include/spdk/crc32.h 00:03:22.685 TEST_HEADER include/spdk/crc64.h 00:03:22.685 TEST_HEADER include/spdk/dif.h 00:03:22.685 CC app/spdk_nvme_identify/identify.o 00:03:22.685 TEST_HEADER 
include/spdk/dma.h 00:03:22.685 TEST_HEADER include/spdk/endian.h 00:03:22.685 TEST_HEADER include/spdk/env.h 00:03:22.685 TEST_HEADER include/spdk/env_dpdk.h 00:03:22.685 TEST_HEADER include/spdk/event.h 00:03:22.685 TEST_HEADER include/spdk/fd.h 00:03:22.685 LINK iscsi_fuzz 00:03:22.685 TEST_HEADER include/spdk/fd_group.h 00:03:22.685 TEST_HEADER include/spdk/file.h 00:03:22.685 TEST_HEADER include/spdk/ftl.h 00:03:22.685 TEST_HEADER include/spdk/gpt_spec.h 00:03:22.685 TEST_HEADER include/spdk/hexlify.h 00:03:22.685 TEST_HEADER include/spdk/histogram_data.h 00:03:22.685 LINK abort 00:03:22.685 TEST_HEADER include/spdk/idxd.h 00:03:22.685 TEST_HEADER include/spdk/idxd_spec.h 00:03:22.685 TEST_HEADER include/spdk/init.h 00:03:22.685 TEST_HEADER include/spdk/ioat.h 00:03:22.945 TEST_HEADER include/spdk/ioat_spec.h 00:03:22.945 TEST_HEADER include/spdk/iscsi_spec.h 00:03:22.945 TEST_HEADER include/spdk/json.h 00:03:22.945 TEST_HEADER include/spdk/jsonrpc.h 00:03:22.945 TEST_HEADER include/spdk/likely.h 00:03:22.945 TEST_HEADER include/spdk/log.h 00:03:22.945 TEST_HEADER include/spdk/lvol.h 00:03:22.945 TEST_HEADER include/spdk/memory.h 00:03:22.945 TEST_HEADER include/spdk/mmio.h 00:03:22.945 TEST_HEADER include/spdk/nbd.h 00:03:22.945 TEST_HEADER include/spdk/notify.h 00:03:22.945 TEST_HEADER include/spdk/nvme.h 00:03:22.945 TEST_HEADER include/spdk/nvme_intel.h 00:03:22.945 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:22.945 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:22.945 TEST_HEADER include/spdk/nvme_spec.h 00:03:22.945 TEST_HEADER include/spdk/nvme_zns.h 00:03:22.945 TEST_HEADER include/spdk/nvmf.h 00:03:22.945 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:22.945 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:22.945 TEST_HEADER include/spdk/nvmf_spec.h 00:03:22.945 LINK pmr_persistence 00:03:22.945 TEST_HEADER include/spdk/nvmf_transport.h 00:03:22.945 TEST_HEADER include/spdk/opal.h 00:03:22.945 TEST_HEADER include/spdk/opal_spec.h 00:03:22.945 TEST_HEADER include/spdk/pci_ids.h 00:03:22.945 TEST_HEADER include/spdk/pipe.h 00:03:22.945 TEST_HEADER include/spdk/queue.h 00:03:22.945 TEST_HEADER include/spdk/reduce.h 00:03:22.945 TEST_HEADER include/spdk/rpc.h 00:03:22.945 TEST_HEADER include/spdk/scheduler.h 00:03:22.945 TEST_HEADER include/spdk/scsi.h 00:03:22.945 TEST_HEADER include/spdk/scsi_spec.h 00:03:22.945 TEST_HEADER include/spdk/sock.h 00:03:22.945 CC test/dma/test_dma/test_dma.o 00:03:22.945 TEST_HEADER include/spdk/stdinc.h 00:03:22.945 TEST_HEADER include/spdk/string.h 00:03:22.945 TEST_HEADER include/spdk/thread.h 00:03:22.945 TEST_HEADER include/spdk/trace.h 00:03:22.945 TEST_HEADER include/spdk/trace_parser.h 00:03:22.945 CC test/event/event_perf/event_perf.o 00:03:22.945 TEST_HEADER include/spdk/tree.h 00:03:22.945 TEST_HEADER include/spdk/ublk.h 00:03:22.945 gmake[2]: Nothing to be done for 'all'. 
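The TEST_HEADER entries interleaved through this stretch enumerate the public headers under include/spdk/, and the CXX test/cpp_headers/*.o steps that follow compile each header in an otherwise empty C++ translation unit, so a header that is not self-contained fails on its own. A sketch of what one such check amounts to; the stub contents are assumed, only the header name comes from the log:

    # hypothetical stand-in for one generated test/cpp_headers stub
    cat > /tmp/check_accel.cpp <<'EOF'
    #include "spdk/accel.h"   /* header under test, included alone */
    EOF
    c++ -I/usr/home/vagrant/spdk_repo/spdk/include -c /tmp/check_accel.cpp -o /dev/null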
00:03:22.945 TEST_HEADER include/spdk/util.h 00:03:22.945 TEST_HEADER include/spdk/uuid.h 00:03:22.945 TEST_HEADER include/spdk/version.h 00:03:22.946 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:22.946 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:22.946 TEST_HEADER include/spdk/vhost.h 00:03:22.946 TEST_HEADER include/spdk/vmd.h 00:03:22.946 TEST_HEADER include/spdk/xor.h 00:03:22.946 TEST_HEADER include/spdk/zipf.h 00:03:22.946 CXX test/cpp_headers/accel.o 00:03:22.946 CC test/env/mem_callbacks/mem_callbacks.o 00:03:22.946 CC app/spdk_nvme_discover/discovery_aer.o 00:03:22.946 CC test/event/reactor/reactor.o 00:03:22.946 CC app/spdk_top/spdk_top.o 00:03:22.946 LINK spdk_nvme_perf 00:03:22.946 LINK event_perf 00:03:22.946 LINK spdk_nvme_identify 00:03:22.946 LINK reactor 00:03:22.946 CC test/nvme/aer/aer.o 00:03:22.946 CXX test/cpp_headers/accel_module.o 00:03:22.946 LINK spdk_nvme_discover 00:03:22.946 LINK test_dma 00:03:22.946 CC test/event/reactor_perf/reactor_perf.o 00:03:22.946 CXX test/cpp_headers/assert.o 00:03:22.946 CC app/fio/nvme/fio_plugin.o 00:03:22.946 CC test/nvme/reset/reset.o 00:03:22.946 LINK reactor_perf 00:03:22.946 LINK aer 00:03:22.946 CC test/rpc_client/rpc_client_test.o 00:03:23.205 CXX test/cpp_headers/barrier.o 00:03:23.205 LINK spdk_top 00:03:23.205 LINK rpc_client_test 00:03:23.205 CC test/thread/poller_perf/poller_perf.o 00:03:23.205 LINK reset 00:03:23.205 CC test/env/vtophys/vtophys.o 00:03:23.205 CC app/fio/bdev/fio_plugin.o 00:03:23.205 CC test/thread/lock/spdk_lock.o 00:03:23.205 LINK mem_callbacks 00:03:23.205 CXX test/cpp_headers/base64.o 00:03:23.205 LINK poller_perf 00:03:23.205 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:23.205 fio_plugin.c:1491:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:23.205 struct spdk_nvme_fdp_ruhs ruhs; 00:03:23.205 ^ 00:03:23.205 LINK vtophys 00:03:23.205 CC test/nvme/sgl/sgl.o 00:03:23.205 CXX test/cpp_headers/bdev.o 00:03:23.205 1 warning generated. 
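The lone clang diagnostic above fires because struct spdk_nvme_fdp_ruhs ends in a flexible array member and fio_plugin.c places it where another field follows, which C permits only as a GNU extension. A self-contained reproduction using made-up struct names (only the warning's mechanics are taken from the log):

    # illustrative repro of the warning seen in fio_plugin.c
    cat > /tmp/ruhs_demo.c <<'EOF'
    struct ruhs_like { unsigned count; unsigned desc[]; }; /* flexible array member */
    struct wrapper {
        struct ruhs_like ruhs;  /* variable sized type not at the end... */
        unsigned tail;          /* ...because this field comes after it */
    };
    int main(void) { return 0; }
    EOF
    clang -Wgnu-variable-sized-type-not-at-end -c /tmp/ruhs_demo.c -o /tmp/ruhs_demo.o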
00:03:23.205 LINK spdk_nvme 00:03:23.205 CXX test/cpp_headers/bdev_module.o 00:03:23.205 LINK env_dpdk_post_init 00:03:23.205 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:23.205 LINK sgl 00:03:23.205 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:23.205 LINK spdk_bdev 00:03:23.465 LINK histogram_ut 00:03:23.465 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:23.465 CC test/env/memory/memory_ut.o 00:03:23.465 CXX test/cpp_headers/bdev_zone.o 00:03:23.465 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:23.465 LINK spdk_lock 00:03:23.465 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:23.465 CC test/nvme/e2edp/nvme_dp.o 00:03:23.465 CC test/nvme/overhead/overhead.o 00:03:23.465 CXX test/cpp_headers/bit_array.o 00:03:23.465 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:23.465 LINK nvme_dp 00:03:23.465 LINK overhead 00:03:23.465 CC test/env/pci/pci_ut.o 00:03:23.465 CXX test/cpp_headers/bit_pool.o 00:03:23.465 LINK blob_bdev_ut 00:03:23.465 LINK tree_ut 00:03:23.725 CXX test/cpp_headers/blob.o 00:03:23.725 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:23.725 CC test/nvme/err_injection/err_injection.o 00:03:23.725 LINK pci_ut 00:03:23.725 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:23.725 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:23.725 CXX test/cpp_headers/blob_bdev.o 00:03:23.725 LINK err_injection 00:03:23.725 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:23.725 LINK memory_ut 00:03:23.725 LINK accel_ut 00:03:23.725 LINK scsi_nvme_ut 00:03:23.725 CC test/nvme/startup/startup.o 00:03:23.725 CXX test/cpp_headers/blobfs.o 00:03:23.725 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:23.725 CC test/nvme/reserve/reserve.o 00:03:23.725 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:23.725 LINK startup 00:03:23.985 LINK blobfs_async_ut 00:03:23.985 LINK blobfs_bdev_ut 00:03:23.985 CXX test/cpp_headers/blobfs_bdev.o 00:03:23.985 LINK reserve 00:03:23.985 LINK gpt_ut 00:03:23.985 CXX test/cpp_headers/conf.o 00:03:23.985 LINK part_ut 00:03:23.985 LINK blobfs_sync_ut 00:03:23.985 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:23.985 CC test/unit/lib/event/app.c/app_ut.o 00:03:23.985 CC test/nvme/connect_stress/connect_stress.o 00:03:23.985 CC test/nvme/simple_copy/simple_copy.o 00:03:23.985 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:23.985 CXX test/cpp_headers/config.o 00:03:23.985 CC test/nvme/boot_partition/boot_partition.o 00:03:23.985 CXX test/cpp_headers/cpuset.o 00:03:23.985 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:23.985 LINK connect_stress 00:03:23.985 LINK simple_copy 00:03:23.985 LINK dma_ut 00:03:23.985 LINK boot_partition 00:03:24.246 LINK app_ut 00:03:24.246 LINK ioat_ut 00:03:24.246 CXX test/cpp_headers/crc16.o 00:03:24.246 LINK bdev_ut 00:03:24.246 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:24.246 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:24.246 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:24.246 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:24.246 CC test/nvme/compliance/nvme_compliance.o 00:03:24.246 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:24.246 CC test/nvme/fused_ordering/fused_ordering.o 00:03:24.246 CXX test/cpp_headers/crc32.o 00:03:24.246 LINK fused_ordering 00:03:24.246 LINK init_grp_ut 00:03:24.246 LINK vbdev_lvol_ut 00:03:24.246 CXX test/cpp_headers/crc64.o 00:03:24.246 LINK nvme_compliance 00:03:24.246 CXX test/cpp_headers/dif.o 00:03:24.505 LINK reactor_ut 00:03:24.505 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:24.505 CC 
test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:24.505 LINK conn_ut 00:03:24.506 CXX test/cpp_headers/dma.o 00:03:24.506 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:24.506 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:24.506 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:24.506 LINK bdev_zone_ut 00:03:24.506 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:24.506 LINK doorbell_aers 00:03:24.506 CXX test/cpp_headers/endian.o 00:03:24.506 LINK bdev_raid_ut 00:03:24.506 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:24.506 CC test/nvme/fdp/fdp.o 00:03:24.506 CXX test/cpp_headers/env.o 00:03:24.506 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:24.766 LINK vbdev_zone_block_ut 00:03:24.766 LINK jsonrpc_server_ut 00:03:24.766 LINK blob_ut 00:03:24.766 LINK fdp 00:03:24.766 CXX test/cpp_headers/env_dpdk.o 00:03:24.766 CC test/unit/lib/log/log.c/log_ut.o 00:03:24.766 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:24.766 LINK bdev_raid_sb_ut 00:03:24.766 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:24.766 CXX test/cpp_headers/event.o 00:03:24.766 LINK bdev_ut 00:03:24.766 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:24.766 LINK iscsi_ut 00:03:24.766 LINK log_ut 00:03:24.766 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:24.766 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:24.766 CXX test/cpp_headers/fd.o 00:03:24.766 LINK json_parse_ut 00:03:24.766 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:24.766 LINK concat_ut 00:03:24.766 LINK json_util_ut 00:03:25.026 CXX test/cpp_headers/fd_group.o 00:03:25.026 CXX test/cpp_headers/file.o 00:03:25.026 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:25.026 LINK param_ut 00:03:25.026 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:25.026 LINK raid1_ut 00:03:25.026 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:25.026 CXX test/cpp_headers/ftl.o 00:03:25.026 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:25.026 LINK portal_grp_ut 00:03:25.026 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:25.026 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:25.026 LINK notify_ut 00:03:25.026 LINK json_write_ut 00:03:25.026 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:25.026 CXX test/cpp_headers/gpt_spec.o 00:03:25.026 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:25.285 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:25.285 LINK tgt_node_ut 00:03:25.285 LINK dev_ut 00:03:25.285 CXX test/cpp_headers/hexlify.o 00:03:25.285 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:25.285 LINK bdev_nvme_ut 00:03:25.285 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:25.285 LINK lvol_ut 00:03:25.285 CXX test/cpp_headers/histogram_data.o 00:03:25.285 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:25.285 LINK lun_ut 00:03:25.285 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:25.285 LINK scsi_ut 00:03:25.285 CXX test/cpp_headers/idxd.o 00:03:25.285 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:25.544 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:25.544 LINK nvme_ut 00:03:25.544 CXX test/cpp_headers/idxd_spec.o 00:03:25.544 LINK ctrlr_ut 00:03:25.544 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:25.544 CXX test/cpp_headers/init.o 00:03:25.544 CXX test/cpp_headers/ioat.o 00:03:25.544 LINK tcp_ut 00:03:25.544 LINK nvme_ctrlr_cmd_ut 00:03:26.125 LINK scsi_pr_ut 00:03:26.125 LINK subsystem_ut 00:03:26.125 CXX test/cpp_headers/ioat_spec.o 00:03:26.125 LINK nvme_ns_ut 00:03:26.125 LINK 
scsi_bdev_ut 00:03:26.125 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:26.125 LINK nvme_ctrlr_ut 00:03:26.125 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:26.125 CXX test/cpp_headers/iscsi_spec.o 00:03:26.125 CXX test/cpp_headers/json.o 00:03:26.125 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:26.125 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:26.125 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:26.125 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:26.125 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:26.125 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:26.384 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:26.384 CXX test/cpp_headers/jsonrpc.o 00:03:26.384 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:26.384 LINK ctrlr_bdev_ut 00:03:26.384 CXX test/cpp_headers/likely.o 00:03:26.384 LINK nvmf_ut 00:03:26.384 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:26.384 LINK iobuf_ut 00:03:26.384 CXX test/cpp_headers/log.o 00:03:26.384 LINK ctrlr_discovery_ut 00:03:26.644 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:26.644 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:26.644 LINK thread_ut 00:03:26.644 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:26.644 LINK sock_ut 00:03:26.644 CXX test/cpp_headers/lvol.o 00:03:26.644 LINK base64_ut 00:03:26.644 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:26.644 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:26.644 LINK rdma_ut 00:03:26.644 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:26.644 CXX test/cpp_headers/memory.o 00:03:26.644 LINK bit_array_ut 00:03:26.644 LINK nvme_ns_cmd_ut 00:03:26.644 LINK nvme_ns_ocssd_cmd_ut 00:03:26.644 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:26.903 CXX test/cpp_headers/mmio.o 00:03:26.903 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:26.903 LINK posix_ut 00:03:26.903 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:26.903 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:26.903 LINK cpuset_ut 00:03:26.903 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:26.903 LINK nvme_poll_group_ut 00:03:26.903 CXX test/cpp_headers/nbd.o 00:03:26.903 CXX test/cpp_headers/notify.o 00:03:26.903 LINK pci_event_ut 00:03:26.903 CXX test/cpp_headers/nvme.o 00:03:26.903 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:26.903 LINK crc16_ut 00:03:26.903 LINK subsystem_ut 00:03:26.903 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:26.903 LINK nvme_pcie_ut 00:03:26.903 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:26.903 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:26.903 CXX test/cpp_headers/nvme_intel.o 00:03:26.903 LINK crc32_ieee_ut 00:03:26.903 CXX test/cpp_headers/nvme_ocssd.o 00:03:27.163 LINK nvme_quirks_ut 00:03:27.163 LINK nvme_qpair_ut 00:03:27.163 LINK crc32c_ut 00:03:27.163 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:27.163 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:27.163 LINK transport_ut 00:03:27.163 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:27.163 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:27.163 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:27.163 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:27.163 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:27.163 LINK crc64_ut 00:03:27.163 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:27.163 LINK rpc_ut 00:03:27.163 LINK idxd_user_ut 00:03:27.163 CXX test/cpp_headers/nvme_spec.o 00:03:27.163 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:27.163 CC 
test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:27.163 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:27.424 LINK common_ut 00:03:27.424 CXX test/cpp_headers/nvme_zns.o 00:03:27.424 LINK nvme_transport_ut 00:03:27.424 LINK iov_ut 00:03:27.424 CXX test/cpp_headers/nvmf.o 00:03:27.424 LINK idxd_ut 00:03:27.424 LINK dif_ut 00:03:27.424 CC test/unit/lib/util/math.c/math_ut.o 00:03:27.424 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:27.424 CXX test/cpp_headers/nvmf_cmd.o 00:03:27.424 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:27.424 LINK math_ut 00:03:27.424 LINK nvme_io_msg_ut 00:03:27.424 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:27.424 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:27.424 CC test/unit/lib/util/string.c/string_ut.o 00:03:27.424 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:27.424 LINK nvme_tcp_ut 00:03:27.424 LINK pipe_ut 00:03:27.684 CXX test/cpp_headers/nvmf_spec.o 00:03:27.684 CXX test/cpp_headers/nvmf_transport.o 00:03:27.684 LINK nvme_pcie_common_ut 00:03:27.684 CXX test/cpp_headers/opal.o 00:03:27.684 CXX test/cpp_headers/opal_spec.o 00:03:27.684 LINK nvme_opal_ut 00:03:27.684 LINK nvme_fabric_ut 00:03:27.684 CXX test/cpp_headers/pci_ids.o 00:03:27.684 LINK xor_ut 00:03:27.684 LINK string_ut 00:03:27.684 CXX test/cpp_headers/pipe.o 00:03:27.684 CXX test/cpp_headers/queue.o 00:03:27.684 CXX test/cpp_headers/reduce.o 00:03:27.684 CXX test/cpp_headers/rpc.o 00:03:27.684 CXX test/cpp_headers/scheduler.o 00:03:27.684 CXX test/cpp_headers/scsi.o 00:03:27.685 CXX test/cpp_headers/scsi_spec.o 00:03:27.685 CXX test/cpp_headers/sock.o 00:03:27.685 CXX test/cpp_headers/stdinc.o 00:03:27.685 CXX test/cpp_headers/string.o 00:03:27.685 CXX test/cpp_headers/thread.o 00:03:27.685 CXX test/cpp_headers/trace.o 00:03:27.685 CXX test/cpp_headers/trace_parser.o 00:03:27.685 CXX test/cpp_headers/tree.o 00:03:27.685 CXX test/cpp_headers/ublk.o 00:03:27.685 CXX test/cpp_headers/util.o 00:03:27.685 CXX test/cpp_headers/uuid.o 00:03:27.685 CXX test/cpp_headers/version.o 00:03:27.685 CXX test/cpp_headers/vfio_user_pci.o 00:03:27.685 CXX test/cpp_headers/vfio_user_spec.o 00:03:27.685 CXX test/cpp_headers/vhost.o 00:03:27.685 CXX test/cpp_headers/vmd.o 00:03:27.685 CXX test/cpp_headers/xor.o 00:03:27.945 CXX test/cpp_headers/zipf.o 00:03:27.945 LINK nvme_rdma_ut 00:03:27.945 00:03:27.945 real 1m2.492s 00:03:27.945 user 3m9.315s 00:03:27.945 sys 0m47.247s 00:03:27.945 05:55:36 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:27.945 05:55:36 -- common/autotest_common.sh@10 -- $ set +x 00:03:27.945 ************************************ 00:03:27.945 END TEST unittest_build 00:03:27.945 ************************************ 00:03:28.205 05:55:36 -- spdk/autotest.sh@25 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:28.205 05:55:36 -- nvmf/common.sh@7 -- # uname -s 00:03:28.205 05:55:36 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:28.205 05:55:36 -- nvmf/common.sh@7 -- # return 0 00:03:28.205 05:55:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:28.205 05:55:36 -- spdk/autotest.sh@32 -- # uname -s 00:03:28.205 05:55:36 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:03:28.205 05:55:36 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:28.205 05:55:36 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:28.205 05:55:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:28.205 05:55:36 -- common/autotest_common.sh@10 -- # set +x 00:03:28.205 05:55:36 -- spdk/autotest.sh@70 -- # 
create_test_list 00:03:28.205 05:55:36 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:28.205 05:55:36 -- common/autotest_common.sh@10 -- # set +x 00:03:28.205 05:55:36 -- spdk/autotest.sh@72 -- # dirname /usr/home/vagrant/spdk_repo/spdk/autotest.sh 00:03:28.205 05:55:36 -- spdk/autotest.sh@72 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk 00:03:28.205 05:55:36 -- spdk/autotest.sh@72 -- # src=/usr/home/vagrant/spdk_repo/spdk 00:03:28.205 05:55:36 -- spdk/autotest.sh@73 -- # out=/usr/home/vagrant/spdk_repo/spdk/../output 00:03:28.205 05:55:36 -- spdk/autotest.sh@74 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:03:28.205 05:55:36 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:28.205 05:55:36 -- common/autotest_common.sh@1440 -- # uname 00:03:28.205 05:55:36 -- common/autotest_common.sh@1440 -- # '[' FreeBSD = FreeBSD ']' 00:03:28.205 05:55:36 -- common/autotest_common.sh@1441 -- # kldunload contigmem.ko 00:03:28.465 kldunload: can't find file contigmem.ko 00:03:28.465 05:55:36 -- common/autotest_common.sh@1441 -- # true 00:03:28.465 05:55:36 -- common/autotest_common.sh@1442 -- # '[' -n '' ']' 00:03:28.465 05:55:36 -- common/autotest_common.sh@1448 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:03:28.465 05:55:36 -- common/autotest_common.sh@1449 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:03:28.465 05:55:36 -- common/autotest_common.sh@1450 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:03:28.465 05:55:36 -- common/autotest_common.sh@1451 -- # cp -f /usr/home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:03:28.465 05:55:36 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:28.465 05:55:36 -- common/autotest_common.sh@1460 -- # uname 00:03:28.465 05:55:36 -- common/autotest_common.sh@1460 -- # [[ FreeBSD = FreeBSD ]] 00:03:28.465 05:55:36 -- common/autotest_common.sh@1460 -- # sysctl -n kern.ipc.maxsockbuf 00:03:28.465 05:55:36 -- common/autotest_common.sh@1460 -- # (( 2097152 < 4194304 )) 00:03:28.465 05:55:36 -- common/autotest_common.sh@1461 -- # sysctl kern.ipc.maxsockbuf=4194304 00:03:28.465 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:03:28.465 05:55:36 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:28.465 05:55:36 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=clang 00:03:28.465 05:55:36 -- spdk/autotest.sh@83 -- # hash lcov 00:03:28.465 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 83: hash: lcov: not found 00:03:28.465 05:55:36 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:28.465 05:55:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:28.465 05:55:36 -- common/autotest_common.sh@10 -- # set +x 00:03:28.465 05:55:36 -- spdk/autotest.sh@102 -- # rm -f 00:03:28.465 05:55:36 -- spdk/autotest.sh@105 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:28.465 kldunload: can't find file contigmem.ko 00:03:28.465 kldunload: can't find file nic_uio.ko 00:03:28.465 05:55:36 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:28.465 05:55:36 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:28.465 05:55:36 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:28.465 05:55:36 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:28.465 05:55:36 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:28.465 05:55:36 -- spdk/autotest.sh@121 -- # ls /dev/nvme0ns1 00:03:28.465 05:55:36 -- spdk/autotest.sh@121 -- # grep -v p 00:03:28.465 05:55:36 -- 
spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:28.465 05:55:36 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:28.465 05:55:36 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0ns1 00:03:28.465 05:55:36 -- scripts/common.sh@380 -- # local block=/dev/nvme0ns1 pt 00:03:28.465 05:55:36 -- scripts/common.sh@389 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:03:28.465 nvme0ns1 is not a block device 00:03:28.465 05:55:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:03:28.465 /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh: line 393: blkid: command not found 00:03:28.465 05:55:36 -- scripts/common.sh@393 -- # pt= 00:03:28.465 05:55:36 -- scripts/common.sh@394 -- # return 1 00:03:28.465 05:55:36 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:03:28.465 1+0 records in 00:03:28.465 1+0 records out 00:03:28.465 1048576 bytes transferred in 0.008447 secs (124138992 bytes/sec) 00:03:28.465 05:55:36 -- spdk/autotest.sh@129 -- # sync 00:03:29.036 05:55:37 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:29.036 05:55:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:29.036 05:55:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:29.605 05:55:37 -- spdk/autotest.sh@135 -- # uname -s 00:03:29.605 05:55:37 -- spdk/autotest.sh@135 -- # '[' FreeBSD = Linux ']' 00:03:29.605 05:55:37 -- spdk/autotest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:29.605 Contigmem (not present) 00:03:29.605 Buffer Size: not set 00:03:29.605 Num Buffers: not set 00:03:29.605 00:03:29.605 00:03:29.605 Type BDF Vendor Device Driver 00:03:29.605 NVMe 0:0:6:0 0x1b36 0x0010 nvme0 00:03:29.605 05:55:37 -- spdk/autotest.sh@141 -- # uname -s 00:03:29.605 05:55:37 -- spdk/autotest.sh@141 -- # [[ FreeBSD == Linux ]] 00:03:29.605 05:55:37 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:03:29.605 05:55:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:29.605 05:55:37 -- common/autotest_common.sh@10 -- # set +x 00:03:29.605 05:55:37 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:03:29.605 05:55:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:29.605 05:55:37 -- common/autotest_common.sh@10 -- # set +x 00:03:29.605 05:55:37 -- spdk/autotest.sh@150 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:29.865 kldunload: can't find file nic_uio.ko 00:03:29.865 hw.nic_uio.bdfs="0:6:0" 00:03:29.865 hw.contigmem.num_buffers="8" 00:03:29.865 hw.contigmem.buffer_size="268435456" 00:03:30.436 05:55:38 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:03:30.436 05:55:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:30.436 05:55:38 -- common/autotest_common.sh@10 -- # set +x 00:03:30.436 05:55:38 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:03:30.436 05:55:38 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:30.436 05:55:38 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:30.436 05:55:38 -- common/autotest_common.sh@1562 -- # bdfs=() 00:03:30.436 05:55:38 -- common/autotest_common.sh@1562 -- # local bdfs 00:03:30.436 05:55:38 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:30.436 05:55:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:30.436 05:55:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:30.436 05:55:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:30.436 05:55:38 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:30.436 05:55:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:30.436 05:55:38 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:30.436 05:55:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:03:30.436 05:55:38 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:30.436 05:55:38 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:03:30.436 cat: /sys/bus/pci/devices/0000:00:06.0/device: No such file or directory 00:03:30.436 05:55:38 -- common/autotest_common.sh@1565 -- # device= 00:03:30.436 05:55:38 -- common/autotest_common.sh@1565 -- # true 00:03:30.436 05:55:38 -- common/autotest_common.sh@1566 -- # [[ '' == \0\x\0\a\5\4 ]] 00:03:30.436 05:55:38 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:03:30.436 05:55:38 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:30.436 05:55:38 -- common/autotest_common.sh@1578 -- # return 0 00:03:30.436 05:55:38 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:03:30.436 05:55:38 -- spdk/autotest.sh@162 -- # run_test unittest /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:30.436 05:55:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.436 05:55:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.436 05:55:38 -- common/autotest_common.sh@10 -- # set +x 00:03:30.436 ************************************ 00:03:30.436 START TEST unittest 00:03:30.436 ************************************ 00:03:30.436 05:55:38 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:30.436 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:30.436 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:30.436 + testdir=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:30.436 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:30.436 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/unit/../.. 
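The get_nvme_bdfs step above fills bdfs by piping gen_nvme.sh's JSON through the jq filter '.config[].params.traddr'. A standalone sketch of that extraction; the JSON shape is inferred from the filter and from the single address this run prints, not read from gen_nvme.sh itself:

    # hypothetical input mirroring what gen_nvme.sh appears to emit
    echo '{"config":[{"params":{"traddr":"0000:00:06.0"}}]}' \
      | jq -r '.config[].params.traddr'
    # prints: 0000:00:06.0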
00:03:30.436 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:03:30.436 + source /usr/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:03:30.436 ++ rpc_py=rpc_cmd 00:03:30.436 ++ set -e 00:03:30.436 ++ shopt -s nullglob 00:03:30.436 ++ shopt -s extglob 00:03:30.436 ++ [[ -e /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:03:30.436 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:03:30.436 +++ CONFIG_WPDK_DIR= 00:03:30.436 +++ CONFIG_ASAN=n 00:03:30.436 +++ CONFIG_VBDEV_COMPRESS=n 00:03:30.436 +++ CONFIG_HAVE_EXECINFO_H=y 00:03:30.436 +++ CONFIG_USDT=n 00:03:30.436 +++ CONFIG_CUSTOMOCF=n 00:03:30.436 +++ CONFIG_PREFIX=/usr/local 00:03:30.436 +++ CONFIG_RBD=n 00:03:30.436 +++ CONFIG_LIBDIR= 00:03:30.436 +++ CONFIG_IDXD=y 00:03:30.436 +++ CONFIG_NVME_CUSE=n 00:03:30.436 +++ CONFIG_SMA=n 00:03:30.436 +++ CONFIG_VTUNE=n 00:03:30.436 +++ CONFIG_TSAN=n 00:03:30.436 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:03:30.436 +++ CONFIG_VFIO_USER_DIR= 00:03:30.436 +++ CONFIG_PGO_CAPTURE=n 00:03:30.436 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:03:30.436 +++ CONFIG_ENV=/usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:30.436 +++ CONFIG_LTO=n 00:03:30.436 +++ CONFIG_ISCSI_INITIATOR=n 00:03:30.436 +++ CONFIG_CET=n 00:03:30.436 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:03:30.436 +++ CONFIG_OCF_PATH= 00:03:30.436 +++ CONFIG_RDMA_SET_TOS=y 00:03:30.436 +++ CONFIG_HAVE_ARC4RANDOM=y 00:03:30.436 +++ CONFIG_HAVE_LIBARCHIVE=n 00:03:30.436 +++ CONFIG_UBLK=n 00:03:30.436 +++ CONFIG_ISAL_CRYPTO=y 00:03:30.436 +++ CONFIG_OPENSSL_PATH= 00:03:30.436 +++ CONFIG_OCF=n 00:03:30.436 +++ CONFIG_FUSE=n 00:03:30.436 +++ CONFIG_VTUNE_DIR= 00:03:30.436 +++ CONFIG_FUZZER_LIB= 00:03:30.436 +++ CONFIG_FUZZER=n 00:03:30.436 +++ CONFIG_DPDK_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:30.436 +++ CONFIG_CRYPTO=n 00:03:30.436 +++ CONFIG_PGO_USE=n 00:03:30.436 +++ CONFIG_VHOST=n 00:03:30.436 +++ CONFIG_DAOS=n 00:03:30.436 +++ CONFIG_DPDK_INC_DIR= 00:03:30.436 +++ CONFIG_DAOS_DIR= 00:03:30.436 +++ CONFIG_UNIT_TESTS=y 00:03:30.436 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:03:30.436 +++ CONFIG_VIRTIO=n 00:03:30.436 +++ CONFIG_COVERAGE=n 00:03:30.436 +++ CONFIG_RDMA=y 00:03:30.436 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:03:30.436 +++ CONFIG_URING_PATH= 00:03:30.436 +++ CONFIG_XNVME=n 00:03:30.436 +++ CONFIG_VFIO_USER=n 00:03:30.436 +++ CONFIG_ARCH=native 00:03:30.436 +++ CONFIG_URING_ZNS=n 00:03:30.436 +++ CONFIG_WERROR=y 00:03:30.436 +++ CONFIG_HAVE_LIBBSD=n 00:03:30.436 +++ CONFIG_UBSAN=n 00:03:30.436 +++ CONFIG_IPSEC_MB_DIR= 00:03:30.436 +++ CONFIG_GOLANG=n 00:03:30.436 +++ CONFIG_ISAL=y 00:03:30.436 +++ CONFIG_IDXD_KERNEL=n 00:03:30.436 +++ CONFIG_DPDK_LIB_DIR= 00:03:30.436 +++ CONFIG_RDMA_PROV=verbs 00:03:30.436 +++ CONFIG_APPS=y 00:03:30.436 +++ CONFIG_SHARED=n 00:03:30.436 +++ CONFIG_FC_PATH= 00:03:30.436 +++ CONFIG_DPDK_PKG_CONFIG=n 00:03:30.436 +++ CONFIG_FC=n 00:03:30.436 +++ CONFIG_AVAHI=n 00:03:30.436 +++ CONFIG_FIO_PLUGIN=y 00:03:30.436 +++ CONFIG_RAID5F=n 00:03:30.436 +++ CONFIG_EXAMPLES=y 00:03:30.436 +++ CONFIG_TESTS=y 00:03:30.436 +++ CONFIG_CRYPTO_MLX5=n 00:03:30.436 +++ CONFIG_MAX_LCORES= 00:03:30.436 +++ CONFIG_IPSEC_MB=n 00:03:30.436 +++ CONFIG_DEBUG=y 00:03:30.436 +++ CONFIG_DPDK_COMPRESSDEV=n 00:03:30.436 +++ CONFIG_CROSS_PREFIX= 00:03:30.436 +++ CONFIG_URING=n 00:03:30.436 ++ source /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:30.436 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:30.436 ++++ readlink -f 
/usr/home/vagrant/spdk_repo/spdk/test/common 00:03:30.436 +++ _root=/usr/home/vagrant/spdk_repo/spdk/test/common 00:03:30.436 +++ _root=/usr/home/vagrant/spdk_repo/spdk 00:03:30.436 +++ _app_dir=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:03:30.436 +++ _test_app_dir=/usr/home/vagrant/spdk_repo/spdk/test/app 00:03:30.436 +++ _examples_dir=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:03:30.436 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:03:30.436 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:03:30.436 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:03:30.436 +++ VHOST_APP=("$_app_dir/vhost") 00:03:30.437 +++ DD_APP=("$_app_dir/spdk_dd") 00:03:30.437 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:03:30.437 +++ [[ -e /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:03:30.437 +++ [[ #ifndef SPDK_CONFIG_H 00:03:30.437 #define SPDK_CONFIG_H 00:03:30.437 #define SPDK_CONFIG_APPS 1 00:03:30.437 #define SPDK_CONFIG_ARCH native 00:03:30.437 #undef SPDK_CONFIG_ASAN 00:03:30.437 #undef SPDK_CONFIG_AVAHI 00:03:30.437 #undef SPDK_CONFIG_CET 00:03:30.437 #undef SPDK_CONFIG_COVERAGE 00:03:30.437 #define SPDK_CONFIG_CROSS_PREFIX 00:03:30.437 #undef SPDK_CONFIG_CRYPTO 00:03:30.437 #undef SPDK_CONFIG_CRYPTO_MLX5 00:03:30.437 #undef SPDK_CONFIG_CUSTOMOCF 00:03:30.437 #undef SPDK_CONFIG_DAOS 00:03:30.437 #define SPDK_CONFIG_DAOS_DIR 00:03:30.437 #define SPDK_CONFIG_DEBUG 1 00:03:30.437 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:03:30.437 #define SPDK_CONFIG_DPDK_DIR /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:30.437 #define SPDK_CONFIG_DPDK_INC_DIR 00:03:30.437 #define SPDK_CONFIG_DPDK_LIB_DIR 00:03:30.437 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:03:30.437 #define SPDK_CONFIG_ENV /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:30.437 #define SPDK_CONFIG_EXAMPLES 1 00:03:30.437 #undef SPDK_CONFIG_FC 00:03:30.437 #define SPDK_CONFIG_FC_PATH 00:03:30.437 #define SPDK_CONFIG_FIO_PLUGIN 1 00:03:30.437 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:03:30.437 #undef SPDK_CONFIG_FUSE 00:03:30.437 #undef SPDK_CONFIG_FUZZER 00:03:30.437 #define SPDK_CONFIG_FUZZER_LIB 00:03:30.437 #undef SPDK_CONFIG_GOLANG 00:03:30.437 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:03:30.437 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:03:30.437 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:03:30.437 #undef SPDK_CONFIG_HAVE_LIBBSD 00:03:30.437 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:03:30.437 #define SPDK_CONFIG_IDXD 1 00:03:30.437 #undef SPDK_CONFIG_IDXD_KERNEL 00:03:30.437 #undef SPDK_CONFIG_IPSEC_MB 00:03:30.437 #define SPDK_CONFIG_IPSEC_MB_DIR 00:03:30.437 #define SPDK_CONFIG_ISAL 1 00:03:30.437 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:03:30.437 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:03:30.437 #define SPDK_CONFIG_LIBDIR 00:03:30.437 #undef SPDK_CONFIG_LTO 00:03:30.437 #define SPDK_CONFIG_MAX_LCORES 00:03:30.437 #undef SPDK_CONFIG_NVME_CUSE 00:03:30.437 #undef SPDK_CONFIG_OCF 00:03:30.437 #define SPDK_CONFIG_OCF_PATH 00:03:30.437 #define SPDK_CONFIG_OPENSSL_PATH 00:03:30.437 #undef SPDK_CONFIG_PGO_CAPTURE 00:03:30.437 #undef SPDK_CONFIG_PGO_USE 00:03:30.437 #define SPDK_CONFIG_PREFIX /usr/local 00:03:30.437 #undef SPDK_CONFIG_RAID5F 00:03:30.437 #undef SPDK_CONFIG_RBD 00:03:30.437 #define SPDK_CONFIG_RDMA 1 00:03:30.437 #define SPDK_CONFIG_RDMA_PROV verbs 00:03:30.437 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:03:30.437 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:03:30.437 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:03:30.437 #undef SPDK_CONFIG_SHARED 00:03:30.437 #undef SPDK_CONFIG_SMA 00:03:30.437 #define SPDK_CONFIG_TESTS 1 
00:03:30.437 #undef SPDK_CONFIG_TSAN 00:03:30.437 #undef SPDK_CONFIG_UBLK 00:03:30.437 #undef SPDK_CONFIG_UBSAN 00:03:30.437 #define SPDK_CONFIG_UNIT_TESTS 1 00:03:30.437 #undef SPDK_CONFIG_URING 00:03:30.437 #define SPDK_CONFIG_URING_PATH 00:03:30.437 #undef SPDK_CONFIG_URING_ZNS 00:03:30.437 #undef SPDK_CONFIG_USDT 00:03:30.437 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:03:30.437 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:03:30.437 #undef SPDK_CONFIG_VFIO_USER 00:03:30.437 #define SPDK_CONFIG_VFIO_USER_DIR 00:03:30.437 #undef SPDK_CONFIG_VHOST 00:03:30.437 #undef SPDK_CONFIG_VIRTIO 00:03:30.437 #undef SPDK_CONFIG_VTUNE 00:03:30.437 #define SPDK_CONFIG_VTUNE_DIR 00:03:30.437 #define SPDK_CONFIG_WERROR 1 00:03:30.437 #define SPDK_CONFIG_WPDK_DIR 00:03:30.437 #undef SPDK_CONFIG_XNVME 00:03:30.437 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:03:30.437 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:03:30.437 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:30.437 +++ [[ -e /bin/wpdk_common.sh ]] 00:03:30.437 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:30.437 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:30.437 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:30.437 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:30.437 ++++ export PATH 00:03:30.437 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:30.437 ++ source /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:30.437 +++++ dirname /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:30.437 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:30.437 +++ _pmdir=/usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:30.437 ++++ readlink -f /usr/home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:03:30.437 +++ _pmrootdir=/usr/home/vagrant/spdk_repo/spdk 00:03:30.437 +++ TEST_TAG=N/A 00:03:30.437 +++ TEST_TAG_FILE=/usr/home/vagrant/spdk_repo/spdk/.run_test_name 00:03:30.437 ++ : 1 00:03:30.437 ++ export RUN_NIGHTLY 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_RUN_VALGRIND 00:03:30.437 ++ : 1 00:03:30.437 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:03:30.437 ++ : 1 00:03:30.437 ++ export SPDK_TEST_UNITTEST 00:03:30.437 ++ : 00:03:30.437 ++ export SPDK_TEST_AUTOBUILD 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_RELEASE_BUILD 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_ISAL 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_ISCSI 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_ISCSI_INITIATOR 00:03:30.437 ++ : 1 00:03:30.437 ++ export SPDK_TEST_NVME 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_NVME_PMR 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_NVME_BP 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_NVME_CLI 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_NVME_CUSE 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_NVME_FDP 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_NVMF 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_VFIOUSER 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_VFIOUSER_QEMU 00:03:30.437 ++ : 0 00:03:30.437 ++ export 
SPDK_TEST_FUZZER 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_FUZZER_SHORT 00:03:30.437 ++ : rdma 00:03:30.437 ++ export SPDK_TEST_NVMF_TRANSPORT 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_RBD 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_VHOST 00:03:30.437 ++ : 1 00:03:30.437 ++ export SPDK_TEST_BLOCKDEV 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_IOAT 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_BLOBFS 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_VHOST_INIT 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_LVOL 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_VBDEV_COMPRESS 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_RUN_ASAN 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_RUN_UBSAN 00:03:30.437 ++ : 00:03:30.437 ++ export SPDK_RUN_EXTERNAL_DPDK 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_RUN_NON_ROOT 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_CRYPTO 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_FTL 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_OCF 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_VMD 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_OPAL 00:03:30.437 ++ : 00:03:30.437 ++ export SPDK_TEST_NATIVE_DPDK 00:03:30.437 ++ : true 00:03:30.437 ++ export SPDK_AUTOTEST_X 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_RAID5 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_URING 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_USDT 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_USE_IGB_UIO 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_SCHEDULER 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_SCANBUILD 00:03:30.437 ++ : 00:03:30.437 ++ export SPDK_TEST_NVMF_NICS 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_SMA 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_DAOS 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_XNVME 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_ACCEL_DSA 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_ACCEL_IAA 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_ACCEL_IOAT 00:03:30.437 ++ : 00:03:30.437 ++ export SPDK_TEST_FUZZER_TARGET 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_TEST_NVMF_MDNS 00:03:30.437 ++ : 0 00:03:30.437 ++ export SPDK_JSONRPC_GO_CLIENT 00:03:30.437 ++ export SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:03:30.437 ++ SPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/lib 00:03:30.437 ++ export DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:30.437 ++ DPDK_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:30.437 ++ export VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:30.437 ++ VFIO_LIB_DIR=/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:30.437 ++ export LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:30.437 ++ LD_LIBRARY_PATH=:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/usr/home/vagrant/spdk_repo/spdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/usr/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:30.437 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:03:30.437 
++ PCI_BLOCK_SYNC_ON_RESET=yes 00:03:30.437 ++ export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:03:30.437 ++ PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:03:30.437 ++ export PYTHONDONTWRITEBYTECODE=1 00:03:30.438 ++ PYTHONDONTWRITEBYTECODE=1 00:03:30.438 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:30.438 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:30.438 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:30.438 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:30.438 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:03:30.438 ++ rm -rf /var/tmp/asan_suppression_file 00:03:30.438 ++ cat 00:03:30.438 ++ echo leak:libfuse3.so 00:03:30.438 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:30.438 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:30.438 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:30.438 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:30.438 ++ '[' -z /var/spdk/dependencies ']' 00:03:30.438 ++ export DEPENDENCY_DIR 00:03:30.438 ++ export SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:03:30.438 ++ SPDK_BIN_DIR=/usr/home/vagrant/spdk_repo/spdk/build/bin 00:03:30.438 ++ export SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:03:30.438 ++ SPDK_EXAMPLE_DIR=/usr/home/vagrant/spdk_repo/spdk/build/examples 00:03:30.438 ++ export QEMU_BIN= 00:03:30.438 ++ QEMU_BIN= 00:03:30.438 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:30.438 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:30.438 ++ export AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:30.438 ++ AR_TOOL=/usr/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:30.438 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:30.438 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:30.438 ++ '[' 0 -eq 0 ']' 00:03:30.438 ++ export valgrind= 00:03:30.438 ++ valgrind= 00:03:30.438 +++ uname -s 00:03:30.438 ++ '[' FreeBSD = Linux ']' 00:03:30.438 +++ uname -s 00:03:30.438 ++ '[' FreeBSD = FreeBSD ']' 00:03:30.438 ++ MAKE=gmake 00:03:30.438 +++ sysctl -a 00:03:30.438 +++ grep -E -i hw.ncpu 00:03:30.438 +++ awk '{print $2}' 00:03:30.698 ++ MAKEFLAGS=-j10 00:03:30.698 ++ HUGEMEM=2048 00:03:30.698 ++ export HUGEMEM=2048 00:03:30.698 ++ HUGEMEM=2048 00:03:30.698 ++ '[' -z /usr/home/vagrant/spdk_repo/spdk/../output ']' 00:03:30.698 ++ NO_HUGE=() 00:03:30.698 ++ TEST_MODE= 00:03:30.698 ++ [[ -z '' ]] 00:03:30.698 ++ PYTHONPATH+=:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:30.698 ++ exec 00:03:30.698 ++ PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:30.698 ++ /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:03:30.698 ++ set_test_storage 2147483648 00:03:30.698 ++ [[ -v testdir ]] 00:03:30.698 ++ local requested_size=2147483648 00:03:30.698 ++ local mount target_dir 00:03:30.698 ++ local -A mounts fss sizes avails uses 00:03:30.698 ++ local source fs size avail mount use 00:03:30.698 ++ 
local storage_fallback storage_candidates 00:03:30.698 +++ mktemp -udt spdk.XXXXXX 00:03:30.698 ++ storage_fallback=/tmp/spdk.XXXXXX.7nPUr6SH 00:03:30.698 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:03:30.698 ++ [[ -n '' ]] 00:03:30.698 ++ [[ -n '' ]] 00:03:30.698 ++ mkdir -p /usr/home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.7nPUr6SH/tests/unit /tmp/spdk.XXXXXX.7nPUr6SH 00:03:30.698 ++ requested_size=2214592512 00:03:30.698 ++ read -r source fs size use avail _ mount 00:03:30.698 +++ df -T 00:03:30.698 +++ grep -v Filesystem 00:03:30.698 ++ mounts["$mount"]=/dev/gptid/bd0c1ea5-f644-11ee-93e1-001e672be6d6 00:03:30.698 ++ fss["$mount"]=ufs 00:03:30.698 ++ avails["$mount"]=17246883840 00:03:30.698 ++ sizes["$mount"]=31182712832 00:03:30.698 ++ uses["$mount"]=11441213440 00:03:30.698 ++ read -r source fs size use avail _ mount 00:03:30.698 ++ mounts["$mount"]=devfs 00:03:30.698 ++ fss["$mount"]=devfs 00:03:30.698 ++ avails["$mount"]=0 00:03:30.698 ++ sizes["$mount"]=1024 00:03:30.698 ++ uses["$mount"]=1024 00:03:30.698 ++ read -r source fs size use avail _ mount 00:03:30.698 ++ mounts["$mount"]=tmpfs 00:03:30.698 ++ fss["$mount"]=tmpfs 00:03:30.698 ++ avails["$mount"]=2147463168 00:03:30.698 ++ sizes["$mount"]=2147483648 00:03:30.698 ++ uses["$mount"]=20480 00:03:30.698 ++ read -r source fs size use avail _ mount 00:03:30.698 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd13-libvirt/output 00:03:30.698 ++ fss["$mount"]=fusefs.sshfs 00:03:30.698 ++ avails["$mount"]=97509642240 00:03:30.699 ++ sizes["$mount"]=105088212992 00:03:30.699 ++ uses["$mount"]=2193137664 00:03:30.699 ++ read -r source fs size use avail _ mount 00:03:30.699 ++ printf '* Looking for test storage...\n' 00:03:30.699 * Looking for test storage... 00:03:30.699 ++ local target_space new_size 00:03:30.699 ++ for target_dir in "${storage_candidates[@]}" 00:03:30.699 +++ df /usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:30.699 +++ awk '$1 !~ /Filesystem/{print $6}' 00:03:30.699 ++ mount=/ 00:03:30.699 ++ target_space=17246883840 00:03:30.699 ++ (( target_space == 0 || target_space < requested_size )) 00:03:30.699 ++ (( target_space >= requested_size )) 00:03:30.699 ++ [[ ufs == tmpfs ]] 00:03:30.699 ++ [[ ufs == ramfs ]] 00:03:30.699 ++ [[ / == / ]] 00:03:30.699 ++ new_size=13655805952 00:03:30.699 ++ (( new_size * 100 / sizes[/] > 95 )) 00:03:30.699 ++ export SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:30.699 ++ SPDK_TEST_STORAGE=/usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:30.699 ++ printf '* Found test storage at %s\n' /usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:30.699 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/unit 00:03:30.699 ++ return 0 00:03:30.699 ++ set -o errtrace 00:03:30.699 ++ shopt -s extdebug 00:03:30.699 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:03:30.699 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:03:30.699 05:55:38 -- common/autotest_common.sh@1672 -- # true 00:03:30.699 05:55:38 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:03:30.699 05:55:38 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:03:30.699 05:55:38 -- common/autotest_common.sh@29 -- # exec 00:03:30.699 05:55:38 -- common/autotest_common.sh@31 -- # xtrace_restore 00:03:30.699 05:55:38 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:03:30.699 05:55:38 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:03:30.699 05:55:38 -- common/autotest_common.sh@18 -- # set -x 00:03:30.699 05:55:38 -- unit/unittest.sh@17 -- # cd /usr/home/vagrant/spdk_repo/spdk 00:03:30.699 05:55:38 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:03:30.699 05:55:38 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:03:30.699 05:55:38 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:03:30.699 05:55:38 -- unit/unittest.sh@178 -- # grep CC_TYPE /usr/home/vagrant/spdk_repo/spdk/mk/cc.mk 00:03:30.699 05:55:38 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=clang 00:03:30.699 05:55:38 -- unit/unittest.sh@179 -- # hash lcov 00:03:30.699 /usr/home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 179: hash: lcov: not found 00:03:30.699 05:55:38 -- unit/unittest.sh@182 -- # cov_avail=no 00:03:30.699 05:55:38 -- unit/unittest.sh@184 -- # '[' no = yes ']' 00:03:30.699 05:55:38 -- unit/unittest.sh@206 -- # uname -m 00:03:30.699 05:55:38 -- unit/unittest.sh@206 -- # '[' amd64 = aarch64 ']' 00:03:30.699 05:55:38 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:30.699 05:55:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.699 05:55:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.699 05:55:38 -- common/autotest_common.sh@10 -- # set +x 00:03:30.699 ************************************ 00:03:30.699 START TEST unittest_pci_event 00:03:30.699 ************************************ 00:03:30.699 05:55:38 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:30.699 00:03:30.699 00:03:30.699 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.699 http://cunit.sourceforge.net/ 00:03:30.699 00:03:30.699 00:03:30.699 Suite: pci_event 00:03:30.699 Test: test_pci_parse_event ...passed 00:03:30.699 00:03:30.699 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.699 suites 1 1 n/a 0 0 00:03:30.699 tests 1 1 1 0 0 00:03:30.699 asserts 1 1 1 0 n/a 00:03:30.699 00:03:30.699 Elapsed time = 0.000 seconds 00:03:30.699 00:03:30.699 real 0m0.031s 00:03:30.699 user 0m0.009s 00:03:30.699 sys 0m0.008s 00:03:30.699 05:55:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.699 05:55:38 -- common/autotest_common.sh@10 -- # set +x 00:03:30.699 ************************************ 00:03:30.699 END TEST unittest_pci_event 00:03:30.699 ************************************ 00:03:30.699 05:55:38 -- unit/unittest.sh@211 -- # run_test unittest_include /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:30.699 05:55:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.699 05:55:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.699 05:55:38 -- common/autotest_common.sh@10 -- # set +x 00:03:30.699 ************************************ 00:03:30.699 START TEST unittest_include 00:03:30.699 ************************************ 00:03:30.699 05:55:38 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:30.699 00:03:30.699 00:03:30.699 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.699 http://cunit.sourceforge.net/ 00:03:30.699 00:03:30.699 00:03:30.699 Suite: histogram 00:03:30.699 Test: histogram_test ...passed 00:03:30.699 Test: histogram_merge ...passed 00:03:30.699 00:03:30.699 Run Summary: Type Total 
Ran Passed Failed Inactive 00:03:30.699 suites 1 1 n/a 0 0 00:03:30.699 tests 2 2 2 0 0 00:03:30.699 asserts 50 50 50 0 n/a 00:03:30.699 00:03:30.699 Elapsed time = 0.008 seconds 00:03:30.699 00:03:30.699 real 0m0.011s 00:03:30.699 user 0m0.002s 00:03:30.699 sys 0m0.009s 00:03:30.699 05:55:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.699 05:55:38 -- common/autotest_common.sh@10 -- # set +x 00:03:30.699 ************************************ 00:03:30.699 END TEST unittest_include 00:03:30.699 ************************************ 00:03:30.699 05:55:38 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:03:30.699 05:55:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.699 05:55:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.699 05:55:38 -- common/autotest_common.sh@10 -- # set +x 00:03:30.699 ************************************ 00:03:30.699 START TEST unittest_bdev 00:03:30.699 ************************************ 00:03:30.699 05:55:38 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:03:30.699 05:55:38 -- unit/unittest.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:03:30.699 00:03:30.699 00:03:30.699 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.699 http://cunit.sourceforge.net/ 00:03:30.699 00:03:30.699 00:03:30.699 Suite: bdev 00:03:30.699 Test: bytes_to_blocks_test ...passed 00:03:30.699 Test: num_blocks_test ...passed 00:03:30.699 Test: io_valid_test ...passed 00:03:30.699 Test: open_write_test ...[2024-05-13 05:55:38.979296] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:03:30.699 [2024-05-13 05:55:38.979679] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:03:30.699 [2024-05-13 05:55:38.979706] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:03:30.699 passed 00:03:30.699 Test: claim_test ...passed 00:03:30.699 Test: alias_add_del_test ...[2024-05-13 05:55:38.983849] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:03:30.699 [2024-05-13 05:55:38.983885] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4578:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:03:30.699 [2024-05-13 05:55:38.983902] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:03:30.699 passed 00:03:30.699 Test: get_device_stat_test ...passed 00:03:30.699 Test: bdev_io_types_test ...passed 00:03:30.699 Test: bdev_io_wait_test ...passed 00:03:30.699 Test: bdev_io_spans_split_test ...passed 00:03:30.699 Test: bdev_io_boundary_split_test ...passed 00:03:30.699 Test: bdev_io_max_size_and_segment_split_test ...[2024-05-13 05:55:38.991249] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:03:30.699 passed 00:03:30.699 Test: bdev_io_mix_split_test ...passed 00:03:30.699 Test: bdev_io_split_with_io_wait ...passed 00:03:30.699 Test: bdev_io_write_unit_split_test ...[2024-05-13 05:55:38.995856] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:30.699 [2024-05-13 05:55:38.995894] 
/usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:30.699 [2024-05-13 05:55:38.995917] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:03:30.699 [2024-05-13 05:55:38.995936] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2743:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:03:30.699 passed 00:03:30.699 Test: bdev_io_alignment_with_boundary ...passed 00:03:30.699 Test: bdev_io_alignment ...passed 00:03:30.699 Test: bdev_histograms ...passed 00:03:30.699 Test: bdev_write_zeroes ...passed 00:03:30.699 Test: bdev_compare_and_write ...passed 00:03:30.699 Test: bdev_compare ...passed 00:03:30.699 Test: bdev_compare_emulated ...passed 00:03:30.699 Test: bdev_zcopy_write ...passed 00:03:30.699 Test: bdev_zcopy_read ...passed 00:03:30.699 Test: bdev_open_while_hotremove ...passed 00:03:30.699 Test: bdev_close_while_hotremove ...passed 00:03:30.699 Test: bdev_open_ext_test ...passed 00:03:30.699 Test: bdev_open_ext_unregister ...[2024-05-13 05:55:39.007745] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:30.699 [2024-05-13 05:55:39.007777] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:30.699 passed 00:03:30.699 Test: bdev_set_io_timeout ...passed 00:03:30.699 Test: bdev_set_qd_sampling ...passed 00:03:30.699 Test: lba_range_overlap ...passed 00:03:30.699 Test: lock_lba_range_check_ranges ...passed 00:03:30.961 Test: lock_lba_range_with_io_outstanding ...passed 00:03:30.961 Test: lock_lba_range_overlapped ...passed 00:03:30.961 Test: bdev_quiesce ...[2024-05-13 05:55:39.012415] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9964:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:03:30.961 passed 00:03:30.961 Test: bdev_io_abort ...passed 00:03:30.961 Test: bdev_unmap ...passed 00:03:30.961 Test: bdev_write_zeroes_split_test ...passed 00:03:30.961 Test: bdev_set_options_test ...passed 00:03:30.961 Test: bdev_get_memory_domains ...passed 00:03:30.961 Test: bdev_io_ext ...[2024-05-13 05:55:39.015009] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:03:30.961 passed 00:03:30.961 Test: bdev_io_ext_no_opts ...passed 00:03:30.961 Test: bdev_io_ext_invalid_opts ...passed 00:03:30.961 Test: bdev_io_ext_split ...passed 00:03:30.961 Test: bdev_io_ext_bounce_buffer ...passed 00:03:30.961 Test: bdev_register_uuid_alias ...[2024-05-13 05:55:39.019840] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 6cc94656-10ed-11ef-ba60-3508ead7bdda already exists 00:03:30.961 [2024-05-13 05:55:39.019859] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:6cc94656-10ed-11ef-ba60-3508ead7bdda alias for bdev bdev0 00:03:30.961 passed 00:03:30.961 Test: bdev_unregister_by_name ...[2024-05-13 05:55:39.020065] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7831:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:03:30.961 [2024-05-13 05:55:39.020072] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7840:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:03:30.961 passed 00:03:30.961 Test: for_each_bdev_test ...passed 00:03:30.961 Test: bdev_seek_test ...passed 00:03:30.961 Test: bdev_copy ...passed 00:03:30.961 Test: bdev_copy_split_test ...passed 00:03:30.961 Test: examine_locks ...passed 00:03:30.961 Test: claim_v2_rwo ...[2024-05-13 05:55:39.022590] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022601] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022606] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:30.961 passed 00:03:30.961 Test: claim_v2_rom ...passed 00:03:30.961 Test: claim_v2_rwm ...[2024-05-13 05:55:39.022612] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022617] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022627] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8561:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:03:30.961 [2024-05-13 05:55:39.022643] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022648] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022654] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022659] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022667] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:03:30.961 [2024-05-13 05:55:39.022673] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8599:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:30.961 [2024-05-13 05:55:39.022685] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8634:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:30.961 [2024-05-13 05:55:39.022691] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022696] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022701] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:30.961 [2024-05-13 
05:55:39.022707] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:30.961 passed 00:03:30.961 Test: claim_v2_existing_writer ...passed 00:03:30.961 Test: claim_v2_existing_v1 ...passed 00:03:30.961 Test: claim_v1_existing_v2 ...[2024-05-13 05:55:39.022712] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8653:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022755] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8634:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:30.961 [2024-05-13 05:55:39.022769] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8599:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:30.961 [2024-05-13 05:55:39.022774] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8599:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:30.961 [2024-05-13 05:55:39.022787] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022792] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022797] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022809] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022815] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:30.961 [2024-05-13 05:55:39.022821] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:30.961 passed 00:03:30.961 Test: examine_claimed ...passed 00:03:30.961 00:03:30.961 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.961 suites 1 1 n/a 0 0 00:03:30.961 tests 59 59 59 0 0 00:03:30.961 asserts 4599 4599 4599 0 n/a 00:03:30.961 00:03:30.961 Elapsed time = 0.047 seconds 00:03:30.961 [2024-05-13 05:55:39.022844] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:03:30.961 05:55:39 -- unit/unittest.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:03:30.961 00:03:30.961 00:03:30.961 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.961 http://cunit.sourceforge.net/ 00:03:30.961 00:03:30.961 00:03:30.961 Suite: nvme 00:03:30.961 Test: test_create_ctrlr ...passed 00:03:30.961 Test: test_reset_ctrlr ...[2024-05-13 05:55:39.029703] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:30.961 passed 00:03:30.961 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:03:30.961 Test: test_failover_ctrlr ...passed 00:03:30.961 Test: test_race_between_failover_and_add_secondary_trid ...[2024-05-13 05:55:39.030096] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.961 [2024-05-13 05:55:39.030122] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.961 passed 00:03:30.961 Test: test_pending_reset ...[2024-05-13 05:55:39.030140] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.961 [2024-05-13 05:55:39.030295] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.961 [2024-05-13 05:55:39.030341] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.961 passed 00:03:30.961 Test: test_attach_ctrlr ...passed 00:03:30.961 Test: test_aer_cb ...[2024-05-13 05:55:39.030410] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:03:30.961 passed 00:03:30.961 Test: test_submit_nvme_cmd ...passed 00:03:30.961 Test: test_add_remove_trid ...passed 00:03:30.961 Test: test_abort ...[2024-05-13 05:55:39.030650] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7221:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:03:30.961 passed 00:03:30.961 Test: test_get_io_qpair ...passed 00:03:30.961 Test: test_bdev_unregister ...passed 00:03:30.961 Test: test_compare_ns ...passed 00:03:30.961 Test: test_init_ana_log_page ...passed 00:03:30.961 Test: test_get_memory_domains ...passed 00:03:30.962 Test: test_reconnect_qpair ...[2024-05-13 05:55:39.030933] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.962 passed 00:03:30.962 Test: test_create_bdev_ctrlr ...[2024-05-13 05:55:39.030986] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5273:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:03:30.962 passed 00:03:30.962 Test: test_add_multi_ns_to_bdev ...[2024-05-13 05:55:39.031117] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4486:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:03:30.962 passed 00:03:30.962 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:03:30.962 Test: test_admin_path ...passed 00:03:30.962 Test: test_reset_bdev_ctrlr ...passed 00:03:30.962 Test: test_find_io_path ...passed 00:03:30.962 Test: test_retry_io_if_ana_state_is_updating ...passed 00:03:30.962 Test: test_retry_io_for_io_path_error ...passed 00:03:30.962 Test: test_retry_io_count ...passed 00:03:30.962 Test: test_concurrent_read_ana_log_page ...passed 00:03:30.962 Test: test_retry_io_for_ana_error ...passed 00:03:30.962 Test: test_check_io_error_resiliency_params ...[2024-05-13 05:55:39.031655] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5926:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:03:30.962 [2024-05-13 05:55:39.031671] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5930:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:30.962 [2024-05-13 05:55:39.031681] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5939:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:30.962 [2024-05-13 05:55:39.031691] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5942:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:03:30.962 [2024-05-13 05:55:39.031702] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:30.962 [2024-05-13 05:55:39.031717] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:30.962 [2024-05-13 05:55:39.031727] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5934:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:03:30.962 [2024-05-13 05:55:39.031737] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5949:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:03:30.962 passed 00:03:30.962 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:03:30.962 Test: test_reconnect_ctrlr ...[2024-05-13 05:55:39.031746] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5946:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:03:30.962 [2024-05-13 05:55:39.031818] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.962 [2024-05-13 05:55:39.031841] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.962 [2024-05-13 05:55:39.031883] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.962 [2024-05-13 05:55:39.031903] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.962 passed 00:03:30.962 Test: test_retry_failover_ctrlr ...[2024-05-13 05:55:39.031922] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.962 [2024-05-13 05:55:39.031964] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.962 passed 00:03:30.962 Test: test_fail_path ...[2024-05-13 05:55:39.032022] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.962 [2024-05-13 05:55:39.032060] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:30.962 [2024-05-13 05:55:39.032083] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.962 [2024-05-13 05:55:39.032098] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.962 passed 00:03:30.962 Test: test_nvme_ns_cmp ...passed 00:03:30.962 Test: test_ana_transition ...[2024-05-13 05:55:39.032115] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.962 passed 00:03:30.962 Test: test_set_preferred_path ...passed 00:03:30.962 Test: test_find_next_io_path ...passed 00:03:30.962 Test: test_find_io_path_min_qd ...passed 00:03:30.962 Test: test_disable_auto_failback ...[2024-05-13 05:55:39.032274] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.962 passed 00:03:30.962 Test: test_set_multipath_policy ...passed 00:03:30.962 Test: test_uuid_generation ...passed 00:03:30.962 Test: test_retry_io_to_same_path ...passed 00:03:30.962 Test: test_race_between_reset_and_disconnected ...passed 00:03:30.962 Test: test_ctrlr_op_rpc ...passed 00:03:30.962 Test: test_bdev_ctrlr_op_rpc ...passed 00:03:30.962 Test: test_disable_enable_ctrlr ...[2024-05-13 05:55:39.065909] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:30.962 passed 00:03:30.962 Test: test_delete_ctrlr_done ...[2024-05-13 05:55:39.065949] /usr/home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:30.962 passed 00:03:30.962 Test: test_ns_remove_during_reset ...passed 00:03:30.962 00:03:30.962 Run Summary: Type Total Ran Passed Failed Inactive 00:03:30.962 suites 1 1 n/a 0 0 00:03:30.962 tests 48 48 48 0 0 00:03:30.962 asserts 3553 3553 3553 0 n/a 00:03:30.962 00:03:30.962 Elapsed time = 0.016 seconds 00:03:30.962 05:55:39 -- unit/unittest.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:03:30.962 Test Options 00:03:30.962 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:03:30.962 00:03:30.962 00:03:30.962 CUnit - A unit testing framework for C - Version 2.1-3 00:03:30.962 http://cunit.sourceforge.net/ 00:03:30.962 00:03:30.962 00:03:30.962 Suite: raid 00:03:30.962 Test: test_create_raid ...passed 00:03:30.962 Test: test_create_raid_superblock ...passed 00:03:30.962 Test: test_delete_raid ...passed 00:03:30.962 Test: test_create_raid_invalid_args ...[2024-05-13 05:55:39.078753] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:03:30.962 [2024-05-13 05:55:39.079139] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:03:30.962 [2024-05-13 05:55:39.079290] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:03:30.962 [2024-05-13 05:55:39.079377] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:30.962 [2024-05-13 05:55:39.079571] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:30.962 passed 00:03:30.962 Test: test_delete_raid_invalid_args ...passed 00:03:30.962 Test: test_io_channel ...passed 00:03:30.962 Test: test_reset_io ...passed 00:03:30.962 Test: test_write_io ...passed 00:03:30.962 Test: test_read_io ...passed 00:03:31.533 Test: test_unmap_io ...passed 00:03:31.533 Test: test_io_failure ...passed 00:03:31.533 Test: test_multi_raid_no_io ...passed 00:03:31.533 Test: test_multi_raid_with_io ...[2024-05-13 05:55:39.789848] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:03:31.533 passed 00:03:31.533 Test: test_io_type_supported ...passed 00:03:31.533 Test: test_raid_json_dump_info ...passed 00:03:31.533 Test: test_context_size ...passed 00:03:31.533 Test: test_raid_level_conversions ...passed 00:03:31.533 Test: test_raid_process ...passed 00:03:31.533 Test: test_raid_io_split ...passed 00:03:31.533 00:03:31.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.533 suites 1 1 n/a 0 0 00:03:31.533 tests 19 19 19 0 0 00:03:31.533 asserts 177879 177879 177879 0 n/a 00:03:31.533 00:03:31.533 Elapsed time = 0.711 seconds 00:03:31.533 05:55:39 -- unit/unittest.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:03:31.533 00:03:31.533 00:03:31.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.533 http://cunit.sourceforge.net/ 00:03:31.533 00:03:31.533 00:03:31.533 Suite: raid_sb 00:03:31.533 Test: test_raid_bdev_write_superblock ...passed 00:03:31.533 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:31.533 Test: test_raid_bdev_parse_superblock ...[2024-05-13 
05:55:39.803477] /usr/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 121:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:31.533 passed 00:03:31.533 00:03:31.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.534 suites 1 1 n/a 0 0 00:03:31.534 tests 3 3 3 0 0 00:03:31.534 asserts 32 32 32 0 n/a 00:03:31.534 00:03:31.534 Elapsed time = 0.000 seconds 00:03:31.534 05:55:39 -- unit/unittest.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:03:31.534 00:03:31.534 00:03:31.534 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.534 http://cunit.sourceforge.net/ 00:03:31.534 00:03:31.534 00:03:31.534 Suite: concat 00:03:31.534 Test: test_concat_start ...passed 00:03:31.534 Test: test_concat_rw ...passed 00:03:31.534 Test: test_concat_null_payload ...passed 00:03:31.534 00:03:31.534 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.534 suites 1 1 n/a 0 0 00:03:31.534 tests 3 3 3 0 0 00:03:31.534 asserts 8097 8097 8097 0 n/a 00:03:31.534 00:03:31.534 Elapsed time = 0.000 seconds 00:03:31.534 05:55:39 -- unit/unittest.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:03:31.534 00:03:31.534 00:03:31.534 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.534 http://cunit.sourceforge.net/ 00:03:31.534 00:03:31.534 00:03:31.534 Suite: raid1 00:03:31.534 Test: test_raid1_start ...passed 00:03:31.534 Test: test_raid1_read_balancing ...passed 00:03:31.534 00:03:31.534 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.534 suites 1 1 n/a 0 0 00:03:31.534 tests 2 2 2 0 0 00:03:31.534 asserts 2856 2856 2856 0 n/a 00:03:31.534 00:03:31.534 Elapsed time = 0.000 seconds 00:03:31.534 05:55:39 -- unit/unittest.sh@26 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:03:31.534 00:03:31.534 00:03:31.534 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.534 http://cunit.sourceforge.net/ 00:03:31.534 00:03:31.534 00:03:31.534 Suite: zone 00:03:31.534 Test: test_zone_get_operation ...passed 00:03:31.534 Test: test_bdev_zone_get_info ...passed 00:03:31.534 Test: test_bdev_zone_management ...passed 00:03:31.534 Test: test_bdev_zone_append ...passed 00:03:31.534 Test: test_bdev_zone_append_with_md ...passed 00:03:31.534 Test: test_bdev_zone_appendv ...passed 00:03:31.534 Test: test_bdev_zone_appendv_with_md ...passed 00:03:31.534 Test: test_bdev_io_get_append_location ...passed 00:03:31.534 00:03:31.534 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.534 suites 1 1 n/a 0 0 00:03:31.534 tests 8 8 8 0 0 00:03:31.534 asserts 94 94 94 0 n/a 00:03:31.534 00:03:31.534 Elapsed time = 0.000 seconds 00:03:31.534 05:55:39 -- unit/unittest.sh@27 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:03:31.534 00:03:31.534 00:03:31.534 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.534 http://cunit.sourceforge.net/ 00:03:31.534 00:03:31.534 00:03:31.534 Suite: gpt_parse 00:03:31.534 Test: test_parse_mbr_and_primary ...[2024-05-13 05:55:39.839180] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:31.534 [2024-05-13 05:55:39.839525] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:31.534 [2024-05-13 05:55:39.839596] 
/usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:31.534 [2024-05-13 05:55:39.839627] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:31.534 [2024-05-13 05:55:39.839661] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:31.534 [2024-05-13 05:55:39.839692] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:31.534 passed 00:03:31.534 Test: test_parse_secondary ...[2024-05-13 05:55:39.840013] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:31.534 [2024-05-13 05:55:39.840045] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:31.534 [2024-05-13 05:55:39.840077] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:31.534 [2024-05-13 05:55:39.840106] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:31.534 passed 00:03:31.534 Test: test_check_mbr ...[2024-05-13 05:55:39.840421] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:31.534 passed 00:03:31.534 Test: test_read_header ...[2024-05-13 05:55:39.840454] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:31.534 [2024-05-13 05:55:39.840495] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:03:31.534 [2024-05-13 05:55:39.840527] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:03:31.534 [2024-05-13 05:55:39.840559] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:03:31.534 [2024-05-13 05:55:39.840592] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:03:31.534 [2024-05-13 05:55:39.840626] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:03:31.534 [2024-05-13 05:55:39.840655] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:03:31.534 passed 00:03:31.534 Test: test_read_partitions ...[2024-05-13 05:55:39.840698] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:03:31.534 [2024-05-13 05:55:39.840731] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:03:31.534 [2024-05-13 05:55:39.840784] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:03:31.534 [2024-05-13 05:55:39.840817] /usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:03:31.534 [2024-05-13 05:55:39.840986] 
/usr/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:03:31.534 passed 00:03:31.534 00:03:31.534 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.534 suites 1 1 n/a 0 0 00:03:31.534 tests 5 5 5 0 0 00:03:31.534 asserts 33 33 33 0 n/a 00:03:31.534 00:03:31.534 Elapsed time = 0.000 seconds 00:03:31.534 05:55:39 -- unit/unittest.sh@28 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:03:31.795 00:03:31.795 00:03:31.795 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.795 http://cunit.sourceforge.net/ 00:03:31.795 00:03:31.795 00:03:31.795 Suite: bdev_part 00:03:31.795 Test: part_test ...[2024-05-13 05:55:39.856124] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:03:31.795 passed 00:03:31.795 Test: part_free_test ...passed 00:03:31.795 Test: part_get_io_channel_test ...passed 00:03:31.795 Test: part_construct_ext ...passed 00:03:31.795 00:03:31.795 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.795 suites 1 1 n/a 0 0 00:03:31.795 tests 4 4 4 0 0 00:03:31.795 asserts 48 48 48 0 n/a 00:03:31.795 00:03:31.795 Elapsed time = 0.016 seconds 00:03:31.795 05:55:39 -- unit/unittest.sh@29 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:03:31.795 00:03:31.795 00:03:31.795 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.795 http://cunit.sourceforge.net/ 00:03:31.795 00:03:31.795 00:03:31.795 Suite: scsi_nvme_suite 00:03:31.795 Test: scsi_nvme_translate_test ...passed 00:03:31.795 00:03:31.795 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.795 suites 1 1 n/a 0 0 00:03:31.795 tests 1 1 1 0 0 00:03:31.795 asserts 104 104 104 0 n/a 00:03:31.795 00:03:31.795 Elapsed time = 0.000 seconds 00:03:31.795 05:55:39 -- unit/unittest.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:03:31.795 00:03:31.795 00:03:31.795 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.795 http://cunit.sourceforge.net/ 00:03:31.795 00:03:31.795 00:03:31.795 Suite: lvol 00:03:31.795 Test: ut_lvs_init ...[2024-05-13 05:55:39.877900] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:03:31.795 [2024-05-13 05:55:39.878318] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:03:31.795 passed 00:03:31.795 Test: ut_lvol_init ...passed 00:03:31.795 Test: ut_lvol_snapshot ...passed 00:03:31.795 Test: ut_lvol_clone ...passed 00:03:31.795 Test: ut_lvs_destroy ...passed 00:03:31.795 Test: ut_lvs_unload ...passed 00:03:31.795 Test: ut_lvol_resize ...[2024-05-13 05:55:39.878492] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:03:31.795 passed 00:03:31.795 Test: ut_lvol_set_read_only ...passed 00:03:31.795 Test: ut_lvol_hotremove ...passed 00:03:31.795 Test: ut_vbdev_lvol_get_io_channel ...passed 00:03:31.795 Test: ut_vbdev_lvol_io_type_supported ...passed 00:03:31.795 Test: ut_lvol_read_write ...passed 00:03:31.795 Test: ut_vbdev_lvol_submit_request ...passed 00:03:31.795 Test: ut_lvol_examine_config ...passed 00:03:31.795 Test: ut_lvol_examine_disk ...[2024-05-13 05:55:39.878647] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening 
lvol UNIT_TEST_UUID 00:03:31.795 passed 00:03:31.795 Test: ut_lvol_rename ...[2024-05-13 05:55:39.878731] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:03:31.795 passed 00:03:31.795 Test: ut_bdev_finish ...[2024-05-13 05:55:39.878752] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:03:31.795 passed 00:03:31.795 Test: ut_lvs_rename ...passed 00:03:31.795 Test: ut_lvol_seek ...passed 00:03:31.795 Test: ut_esnap_dev_create ...[2024-05-13 05:55:39.878837] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:03:31.795 [2024-05-13 05:55:39.878860] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:03:31.795 [2024-05-13 05:55:39.878881] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:03:31.795 [2024-05-13 05:55:39.878922] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1901:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:03:31.795 passed 00:03:31.795 Test: ut_lvol_esnap_clone_bad_args ...[2024-05-13 05:55:39.878966] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:03:31.795 passed[2024-05-13 05:55:39.878987] /usr/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:03:31.795 00:03:31.795 00:03:31.795 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.795 suites 1 1 n/a 0 0 00:03:31.795 tests 21 21 21 0 0 00:03:31.795 asserts 712 712 712 0 n/a 00:03:31.795 00:03:31.795 Elapsed time = 0.000 seconds 00:03:31.795 05:55:39 -- unit/unittest.sh@31 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:03:31.795 00:03:31.795 00:03:31.795 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.795 http://cunit.sourceforge.net/ 00:03:31.795 00:03:31.795 00:03:31.795 Suite: zone_block 00:03:31.795 Test: test_zone_block_create ...passed 00:03:31.795 Test: test_zone_block_create_invalid ...[2024-05-13 05:55:39.898915] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:03:31.795 [2024-05-13 05:55:39.899146] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-05-13 05:55:39.899184] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:03:31.795 [2024-05-13 05:55:39.899199] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-05-13 05:55:39.899215] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:03:31.795 passed 00:03:31.795 Test: test_get_zone_info ...[2024-05-13 05:55:39.899228] 
/usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-05-13 05:55:39.899242] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:03:31.796 [2024-05-13 05:55:39.899267] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-05-13 05:55:39.899349] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.899371] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 passed 00:03:31.796 Test: test_supported_io_types ...[2024-05-13 05:55:39.899387] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 passed 00:03:31.796 Test: test_reset_zone ...[2024-05-13 05:55:39.899458] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.899475] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 passed 00:03:31.796 Test: test_open_zone ...[2024-05-13 05:55:39.899517] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.899798] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.899824] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 passed 00:03:31.796 Test: test_zone_write ...[2024-05-13 05:55:39.899875] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:31.796 [2024-05-13 05:55:39.899888] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.899904] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:31.796 [2024-05-13 05:55:39.899916] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.900576] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:03:31.796 [2024-05-13 05:55:39.900599] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
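The zone_block rejections above reduce to write-pointer bookkeeping: a sequential zone accepts a write only at its current write pointer, and never past the zone's capacity. A minimal sketch of those two checks, with illustrative names and layout rather than SPDK's actual vbdev_zone_block code:

    #include <stdint.h>
    #include <stdio.h>

    struct zone {
        uint64_t start;    /* first LBA of the zone */
        uint64_t capacity; /* writable blocks before the zone is full */
        uint64_t wp;       /* write pointer: the next LBA that may be written */
    };

    /* 0 if a write of `len` blocks at `lba` is legal, negative otherwise. */
    static int zone_write_ok(const struct zone *z, uint64_t lba, uint64_t len)
    {
        if (lba != z->wp)
            return -1; /* -> "invalid address (lba 0x407, wp 0x405)" above */
        if (lba + len > z->start + z->capacity)
            return -2; /* -> "Write exceeds zone capacity" above */
        return 0;
    }

    int main(void)
    {
        struct zone z = { .start = 0x400, .capacity = 0x3f0, .wp = 0x405 };

        printf("%d\n", zone_write_ok(&z, 0x407, 1)); /* ahead of wp: rejected */
        printf("%d\n", zone_write_ok(&z, 0x405, 1)); /* exactly at wp: accepted */
        return 0;
    }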
00:03:31.796 [2024-05-13 05:55:39.900616] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:03:31.796 [2024-05-13 05:55:39.900628] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.901382] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:31.796 [2024-05-13 05:55:39.901405] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 passed 00:03:31.796 Test: test_zone_read ...[2024-05-13 05:55:39.901448] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:03:31.796 [2024-05-13 05:55:39.901462] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.901479] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:03:31.796 [2024-05-13 05:55:39.901491] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.901550] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:03:31.796 [2024-05-13 05:55:39.901563] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 passed 00:03:31.796 Test: test_close_zone ...[2024-05-13 05:55:39.901600] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.901620] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.901670] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 passed 00:03:31.796 Test: test_finish_zone ...[2024-05-13 05:55:39.901685] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.901760] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.901776] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:31.796 passed 00:03:31.796 Test: test_append_zone ...[2024-05-13 05:55:39.901816] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:31.796 [2024-05-13 05:55:39.901829] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 [2024-05-13 05:55:39.901845] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:31.796 [2024-05-13 05:55:39.901857] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 passed 00:03:31.796 00:03:31.796 [2024-05-13 05:55:39.903300] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:31.796 [2024-05-13 05:55:39.903326] /usr/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:31.796 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.796 suites 1 1 n/a 0 0 00:03:31.796 tests 11 11 11 0 0 00:03:31.796 asserts 3437 3437 3437 0 n/a 00:03:31.796 00:03:31.796 Elapsed time = 0.008 seconds 00:03:31.796 05:55:39 -- unit/unittest.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:03:31.796 00:03:31.796 00:03:31.796 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.796 http://cunit.sourceforge.net/ 00:03:31.796 00:03:31.796 00:03:31.796 Suite: bdev 00:03:31.796 Test: basic ...[2024-05-13 05:55:39.912535] thread.c:2360:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x248619): Operation not permitted (rc=-1) 00:03:31.796 [2024-05-13 05:55:39.912742] thread.c:2360:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x82e9a3480 (0x248610): Operation not permitted (rc=-1) 00:03:31.796 [2024-05-13 05:55:39.912758] thread.c:2360:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x248619): Operation not permitted (rc=-1) 00:03:31.796 passed 00:03:31.796 Test: unregister_and_close ...passed 00:03:31.796 Test: unregister_and_close_different_threads ...passed 00:03:31.796 Test: basic_qos ...passed 00:03:31.796 Test: put_channel_during_reset ...passed 00:03:31.796 Test: aborted_reset ...passed 00:03:31.796 Test: aborted_reset_no_outstanding_io ...passed 00:03:31.796 Test: io_during_reset ...passed 00:03:31.796 Test: reset_completions ...passed 00:03:31.796 Test: io_during_qos_queue ...passed 00:03:31.796 Test: io_during_qos_reset ...passed 00:03:31.796 Test: enomem ...passed 00:03:31.796 Test: enomem_multi_bdev ...passed 00:03:31.796 Test: enomem_multi_bdev_unregister ...passed 00:03:31.796 Test: enomem_multi_io_target ...passed 00:03:31.796 Test: qos_dynamic_enable ...passed 00:03:31.796 Test: bdev_histograms_mt ...passed 00:03:31.796 Test: bdev_set_io_timeout_mt ...passed 00:03:31.796 Test: lock_lba_range_then_submit_io ...[2024-05-13 05:55:39.960142] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x82e9a3600 not unregistered 00:03:31.796 [2024-05-13 05:55:39.961043] thread.c:2164:spdk_io_device_register: *ERROR*: io_device 0x2485f8 already registered (old:0x82e9a3600 new:0x82e9a3780) 00:03:31.796 passed 
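Each *_ut binary exercised in this run is a standalone CUnit program, and the recurring "Run Summary" tables are printed by CU_basic_run_tests(). A minimal harness of the same shape, with placeholder suite and test names:

    #include <CUnit/Basic.h>

    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        CU_pSuite suite;
        unsigned int failures;

        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();

        suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();                  /* emits the Run Summary table */
        failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures == 0 ? 0 : 1;          /* nonzero exit on any failure */
    }

Link against CUnit (-lcunit); the nonzero exit on failure is what lets a calling script detect a broken suite.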
00:03:31.796 Test: unregister_during_reset ...passed 00:03:31.796 Test: event_notify_and_close ...passed 00:03:31.796 Suite: bdev_wrong_thread 00:03:31.796 Test: spdk_bdev_register_wt ...[2024-05-13 05:55:39.964310] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8360:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x82e96c700 (0x82e96c700) 00:03:31.796 passed 00:03:31.796 Test: spdk_bdev_examine_wt ...passed[2024-05-13 05:55:39.964345] /usr/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 794:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x82e96c700 (0x82e96c700) 00:03:31.796 00:03:31.796 00:03:31.796 Run Summary: Type Total Ran Passed Failed Inactive 00:03:31.796 suites 2 2 n/a 0 0 00:03:31.796 tests 23 23 23 0 0 00:03:31.796 asserts 601 601 601 0 n/a 00:03:31.796 00:03:31.796 Elapsed time = 0.055 seconds 00:03:31.796 00:03:31.796 real 0m1.001s 00:03:31.796 user 0m0.764s 00:03:31.796 sys 0m0.214s 00:03:31.796 05:55:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.796 05:55:39 -- common/autotest_common.sh@10 -- # set +x 00:03:31.796 ************************************ 00:03:31.796 END TEST unittest_bdev 00:03:31.796 ************************************ 00:03:31.796 05:55:40 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:31.796 05:55:40 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:31.797 05:55:40 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:31.797 05:55:40 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:31.797 05:55:40 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:03:31.797 05:55:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:31.797 05:55:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:31.797 05:55:40 -- common/autotest_common.sh@10 -- # set +x 00:03:31.797 ************************************ 00:03:31.797 START TEST unittest_blob_blobfs 00:03:31.797 ************************************ 00:03:31.797 05:55:40 -- common/autotest_common.sh@1104 -- # unittest_blob 00:03:31.797 05:55:40 -- unit/unittest.sh@38 -- # [[ -e /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:03:31.797 05:55:40 -- unit/unittest.sh@39 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:03:31.797 00:03:31.797 00:03:31.797 CUnit - A unit testing framework for C - Version 2.1-3 00:03:31.797 http://cunit.sourceforge.net/ 00:03:31.797 00:03:31.797 00:03:31.797 Suite: blob_nocopy_noextent 00:03:31.797 Test: blob_init ...[2024-05-13 05:55:40.045406] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:31.797 passed 00:03:31.797 Test: blob_thin_provision ...passed 00:03:31.797 Test: blob_read_only ...passed 00:03:32.057 Test: bs_load ...passed 00:03:32.057 Test: bs_load_custom_cluster_size ...[2024-05-13 05:55:40.114049] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:32.057 passed 00:03:32.057 Test: bs_load_after_failed_grow ...passed 00:03:32.057 Test: bs_cluster_sz ...[2024-05-13 05:55:40.133011] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:32.057 [2024-05-13 05:55:40.133067] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:32.057 [2024-05-13 05:55:40.133077] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:32.057 passed 00:03:32.057 Test: bs_resize_md ...passed 00:03:32.057 Test: bs_destroy ...passed 00:03:32.057 Test: bs_type ...passed 00:03:32.057 Test: bs_super_block ...passed 00:03:32.057 Test: bs_test_recover_cluster_count ...passed 00:03:32.057 Test: bs_grow_live ...passed 00:03:32.057 Test: bs_grow_live_no_space ...passed 00:03:32.057 Test: bs_test_grow ...passed 00:03:32.057 Test: blob_serialize_test ...passed 00:03:32.057 Test: super_block_crc ...passed 00:03:32.057 Test: blob_thin_prov_write_count_io ...passed 00:03:32.057 Test: bs_load_iter_test ...passed 00:03:32.057 Test: blob_relations ...[2024-05-13 05:55:40.245267] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:32.058 [2024-05-13 05:55:40.245329] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.058 [2024-05-13 05:55:40.245389] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:32.058 [2024-05-13 05:55:40.245396] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.058 passed 00:03:32.058 Test: blob_relations2 ...[2024-05-13 05:55:40.255341] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:32.058 [2024-05-13 05:55:40.255360] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.058 [2024-05-13 05:55:40.255367] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:32.058 [2024-05-13 05:55:40.255388] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.058 [2024-05-13 05:55:40.255471] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:32.058 [2024-05-13 05:55:40.255478] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.058 [2024-05-13 05:55:40.255504] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:32.058 [2024-05-13 05:55:40.255511] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.058 passed 00:03:32.058 Test: blob_relations3 ...passed 00:03:32.318 Test: blobstore_clean_power_failure ...passed 00:03:32.318 Test: blob_delete_snapshot_power_failure ...[2024-05-13 05:55:40.385776] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 
0x100000001: -5 00:03:32.318 [2024-05-13 05:55:40.395218] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:32.318 [2024-05-13 05:55:40.395279] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:32.318 [2024-05-13 05:55:40.395287] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.318 [2024-05-13 05:55:40.404729] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:32.318 [2024-05-13 05:55:40.404784] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:32.318 [2024-05-13 05:55:40.404791] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:32.318 [2024-05-13 05:55:40.404797] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.318 [2024-05-13 05:55:40.414196] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:32.318 [2024-05-13 05:55:40.414220] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.318 [2024-05-13 05:55:40.423740] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:32.318 [2024-05-13 05:55:40.423764] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.318 [2024-05-13 05:55:40.433239] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:32.318 [2024-05-13 05:55:40.433270] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:32.318 passed 00:03:32.318 Test: blob_create_snapshot_power_failure ...[2024-05-13 05:55:40.461148] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:32.318 [2024-05-13 05:55:40.479725] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:32.318 [2024-05-13 05:55:40.489153] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:32.318 passed 00:03:32.318 Test: blob_io_unit ...passed 00:03:32.318 Test: blob_io_unit_compatibility ...passed 00:03:32.318 Test: blob_ext_md_pages ...passed 00:03:32.318 Test: blob_esnap_io_4096_4096 ...passed 00:03:32.318 Test: blob_esnap_io_512_512 ...passed 00:03:32.318 Test: blob_esnap_io_4096_512 ...passed 00:03:32.318 Test: blob_esnap_io_512_4096 ...passed 00:03:32.318 Suite: blob_bs_nocopy_noextent 00:03:32.578 Test: blob_open ...passed 00:03:32.578 Test: blob_create ...[2024-05-13 05:55:40.666126] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:32.578 passed 00:03:32.578 Test: blob_create_loop ...passed 00:03:32.578 Test: blob_create_fail ...[2024-05-13 05:55:40.732317] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:32.578 passed 00:03:32.578 Test: blob_create_internal ...passed 00:03:32.578 Test: blob_create_zero_extent ...passed 00:03:32.578 Test: blob_snapshot ...passed 00:03:32.578 Test: blob_clone ...passed 00:03:32.578 Test: blob_inflate ...[2024-05-13 05:55:40.877094] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:32.578 passed 00:03:32.838 Test: blob_delete ...passed 00:03:32.838 Test: blob_resize_test ...[2024-05-13 05:55:40.931860] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:32.838 passed 00:03:32.838 Test: channel_ops ...passed 00:03:32.838 Test: blob_super ...passed 00:03:32.838 Test: blob_rw_verify_iov ...passed 00:03:32.838 Test: blob_unmap ...passed 00:03:32.838 Test: blob_iter ...passed 00:03:32.838 Test: blob_parse_md ...passed 00:03:32.838 Test: bs_load_pending_removal ...passed 00:03:33.098 Test: bs_unload ...[2024-05-13 05:55:41.155269] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:33.098 passed 00:03:33.098 Test: bs_usable_clusters ...passed 00:03:33.098 Test: blob_crc ...[2024-05-13 05:55:41.210650] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:33.098 [2024-05-13 05:55:41.210707] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:33.098 passed 00:03:33.098 Test: blob_flags ...passed 00:03:33.098 Test: bs_version ...passed 00:03:33.098 Test: blob_set_xattrs_test ...[2024-05-13 05:55:41.295658] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:33.098 [2024-05-13 05:55:41.295735] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:33.098 passed 00:03:33.098 Test: blob_thin_prov_alloc ...passed 00:03:33.098 Test: blob_insert_cluster_msg_test ...passed 00:03:33.098 Test: blob_thin_prov_rw ...passed 00:03:33.358 Test: blob_thin_prov_rle ...passed 00:03:33.358 Test: blob_thin_prov_rw_iov ...passed 00:03:33.358 Test: blob_snapshot_rw ...passed 00:03:33.358 Test: blob_snapshot_rw_iov ...passed 00:03:33.358 Test: blob_inflate_rw ...passed 00:03:33.358 Test: blob_snapshot_freeze_io ...passed 00:03:33.358 Test: blob_operation_split_rw ...passed 00:03:33.671 Test: blob_operation_split_rw_iov ...passed 00:03:33.671 Test: blob_simultaneous_operations ...[2024-05-13 05:55:41.714044] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:33.671 [2024-05-13 05:55:41.714109] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.671 [2024-05-13 05:55:41.714344] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:33.671 [2024-05-13 05:55:41.714358] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: 
Failed to remove blob 00:03:33.671 [2024-05-13 05:55:41.717401] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:33.671 [2024-05-13 05:55:41.717429] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.671 [2024-05-13 05:55:41.717443] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:33.671 [2024-05-13 05:55:41.717448] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:33.671 passed 00:03:33.671 Test: blob_persist_test ...passed 00:03:33.671 Test: blob_decouple_snapshot ...passed 00:03:33.671 Test: blob_seek_io_unit ...passed 00:03:33.671 Test: blob_nested_freezes ...passed 00:03:33.671 Suite: blob_blob_nocopy_noextent 00:03:33.671 Test: blob_write ...passed 00:03:33.671 Test: blob_read ...passed 00:03:33.671 Test: blob_rw_verify ...passed 00:03:33.930 Test: blob_rw_verify_iov_nomem ...passed 00:03:33.931 Test: blob_rw_iov_read_only ...passed 00:03:33.931 Test: blob_xattr ...passed 00:03:33.931 Test: blob_dirty_shutdown ...passed 00:03:33.931 Test: blob_is_degraded ...passed 00:03:33.931 Suite: blob_esnap_bs_nocopy_noextent 00:03:33.931 Test: blob_esnap_create ...passed 00:03:33.931 Test: blob_esnap_thread_add_remove ...passed 00:03:33.931 Test: blob_esnap_clone_snapshot ...passed 00:03:33.931 Test: blob_esnap_clone_inflate ...passed 00:03:33.931 Test: blob_esnap_clone_decouple ...passed 00:03:34.190 Test: blob_esnap_clone_reload ...passed 00:03:34.190 Test: blob_esnap_hotplug ...passed 00:03:34.190 Suite: blob_nocopy_extent 00:03:34.190 Test: blob_init ...[2024-05-13 05:55:42.278875] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:34.190 passed 00:03:34.190 Test: blob_thin_provision ...passed 00:03:34.190 Test: blob_read_only ...passed 00:03:34.190 Test: bs_load ...passed 00:03:34.190 Test: bs_load_custom_cluster_size ...[2024-05-13 05:55:42.316312] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:34.190 passed 00:03:34.190 Test: bs_load_after_failed_grow ...passed 00:03:34.190 Test: bs_cluster_sz ...[2024-05-13 05:55:42.335484] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:34.190 [2024-05-13 05:55:42.335535] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
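The bs_cluster_sz messages around this point come from option validation before the blobstore is laid out: no option may be zero, and a cluster must be at least one 4096-byte metadata page. A sketch of that verification (struct and names are illustrative, not the actual spdk_bs_opts):

    #include <stdint.h>

    #define BS_PAGE_SIZE 4096u /* assumed metadata page size, per the log */

    struct bs_opts {
        uint32_t cluster_sz;   /* bytes per cluster */
        uint32_t num_md_pages; /* metadata pages to reserve */
    };

    static int bs_opts_verify(const struct bs_opts *opts)
    {
        if (opts->cluster_sz == 0 || opts->num_md_pages == 0)
            return -1; /* -> "Blobstore options cannot be set to 0" */
        if (opts->cluster_sz < BS_PAGE_SIZE)
            return -1; /* -> "Cluster size 4095 is smaller than page size 4096" */
        return 0;
    }

    int main(void)
    {
        struct bs_opts bad  = { .cluster_sz = 4095, .num_md_pages = 1 };
        struct bs_opts good = { .cluster_sz = 4096, .num_md_pages = 1 };

        return (bs_opts_verify(&bad) == -1 && bs_opts_verify(&good) == 0) ? 0 : 1;
    }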
00:03:34.190 [2024-05-13 05:55:42.335561] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:34.190 passed 00:03:34.190 Test: bs_resize_md ...passed 00:03:34.190 Test: bs_destroy ...passed 00:03:34.190 Test: bs_type ...passed 00:03:34.190 Test: bs_super_block ...passed 00:03:34.190 Test: bs_test_recover_cluster_count ...passed 00:03:34.190 Test: bs_grow_live ...passed 00:03:34.190 Test: bs_grow_live_no_space ...passed 00:03:34.190 Test: bs_test_grow ...passed 00:03:34.190 Test: blob_serialize_test ...passed 00:03:34.190 Test: super_block_crc ...passed 00:03:34.190 Test: blob_thin_prov_write_count_io ...passed 00:03:34.190 Test: bs_load_iter_test ...passed 00:03:34.190 Test: blob_relations ...[2024-05-13 05:55:42.449104] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:34.190 [2024-05-13 05:55:42.449162] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.190 [2024-05-13 05:55:42.449230] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:34.190 [2024-05-13 05:55:42.449237] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.190 passed 00:03:34.191 Test: blob_relations2 ...[2024-05-13 05:55:42.459345] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:34.191 [2024-05-13 05:55:42.459381] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.191 [2024-05-13 05:55:42.459389] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:34.191 [2024-05-13 05:55:42.459394] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.191 [2024-05-13 05:55:42.459482] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:34.191 [2024-05-13 05:55:42.459489] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.191 [2024-05-13 05:55:42.459535] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:34.191 [2024-05-13 05:55:42.459541] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.191 passed 00:03:34.191 Test: blob_relations3 ...passed 00:03:34.450 Test: blobstore_clean_power_failure ...passed 00:03:34.450 Test: blob_delete_snapshot_power_failure ...[2024-05-13 05:55:42.592090] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:34.450 [2024-05-13 05:55:42.601629] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:34.450 [2024-05-13 05:55:42.611219] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:34.450 [2024-05-13 
05:55:42.611262] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:34.450 [2024-05-13 05:55:42.611270] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.450 [2024-05-13 05:55:42.620859] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:34.450 [2024-05-13 05:55:42.620890] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:34.450 [2024-05-13 05:55:42.620896] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:34.450 [2024-05-13 05:55:42.620902] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.450 [2024-05-13 05:55:42.630510] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:34.450 [2024-05-13 05:55:42.630531] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:34.450 [2024-05-13 05:55:42.630537] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:34.450 [2024-05-13 05:55:42.630543] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.450 [2024-05-13 05:55:42.640100] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:34.450 [2024-05-13 05:55:42.640130] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.450 [2024-05-13 05:55:42.649691] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:34.450 [2024-05-13 05:55:42.649732] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.450 [2024-05-13 05:55:42.659369] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:34.450 [2024-05-13 05:55:42.659405] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:34.450 passed 00:03:34.450 Test: blob_create_snapshot_power_failure ...[2024-05-13 05:55:42.687920] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:34.450 [2024-05-13 05:55:42.697439] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:34.450 [2024-05-13 05:55:42.716188] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:34.450 [2024-05-13 05:55:42.725555] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:34.450 passed 00:03:34.710 Test: blob_io_unit ...passed 00:03:34.710 Test: blob_io_unit_compatibility ...passed 00:03:34.710 Test: blob_ext_md_pages ...passed 00:03:34.710 Test: blob_esnap_io_4096_4096 ...passed 00:03:34.710 Test: 
blob_esnap_io_512_512 ...passed 00:03:34.710 Test: blob_esnap_io_4096_512 ...passed 00:03:34.710 Test: blob_esnap_io_512_4096 ...passed 00:03:34.710 Suite: blob_bs_nocopy_extent 00:03:34.710 Test: blob_open ...passed 00:03:34.710 Test: blob_create ...[2024-05-13 05:55:42.903351] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:34.710 passed 00:03:34.710 Test: blob_create_loop ...passed 00:03:34.710 Test: blob_create_fail ...[2024-05-13 05:55:42.970804] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:34.710 passed 00:03:34.710 Test: blob_create_internal ...passed 00:03:34.970 Test: blob_create_zero_extent ...passed 00:03:34.970 Test: blob_snapshot ...passed 00:03:34.970 Test: blob_clone ...passed 00:03:34.970 Test: blob_inflate ...[2024-05-13 05:55:43.117324] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:34.970 passed 00:03:34.970 Test: blob_delete ...passed 00:03:34.970 Test: blob_resize_test ...[2024-05-13 05:55:43.173109] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:34.970 passed 00:03:34.970 Test: channel_ops ...passed 00:03:34.970 Test: blob_super ...passed 00:03:34.970 Test: blob_rw_verify_iov ...passed 00:03:35.228 Test: blob_unmap ...passed 00:03:35.228 Test: blob_iter ...passed 00:03:35.228 Test: blob_parse_md ...passed 00:03:35.228 Test: bs_load_pending_removal ...passed 00:03:35.228 Test: bs_unload ...[2024-05-13 05:55:43.394789] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:35.228 passed 00:03:35.228 Test: bs_usable_clusters ...passed 00:03:35.228 Test: blob_crc ...[2024-05-13 05:55:43.450215] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:35.228 [2024-05-13 05:55:43.450278] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:35.228 passed 00:03:35.228 Test: blob_flags ...passed 00:03:35.228 Test: bs_version ...passed 00:03:35.228 Test: blob_set_xattrs_test ...[2024-05-13 05:55:43.534392] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:35.228 [2024-05-13 05:55:43.534471] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:35.488 passed 00:03:35.488 Test: blob_thin_prov_alloc ...passed 00:03:35.488 Test: blob_insert_cluster_msg_test ...passed 00:03:35.488 Test: blob_thin_prov_rw ...passed 00:03:35.488 Test: blob_thin_prov_rle ...passed 00:03:35.488 Test: blob_thin_prov_rw_iov ...passed 00:03:35.488 Test: blob_snapshot_rw ...passed 00:03:35.488 Test: blob_snapshot_rw_iov ...passed 00:03:35.747 Test: blob_inflate_rw ...passed 00:03:35.747 Test: blob_snapshot_freeze_io ...passed 00:03:35.747 Test: blob_operation_split_rw ...passed 00:03:35.747 Test: blob_operation_split_rw_iov ...passed 00:03:35.747 Test: blob_simultaneous_operations ...[2024-05-13 05:55:43.941679] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:35.747 [2024-05-13 05:55:43.941746] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.747 [2024-05-13 05:55:43.941979] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:35.747 [2024-05-13 05:55:43.941994] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.747 [2024-05-13 05:55:43.945029] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:35.747 [2024-05-13 05:55:43.945069] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.747 [2024-05-13 05:55:43.945083] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:35.747 [2024-05-13 05:55:43.945088] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:35.747 passed 00:03:35.747 Test: blob_persist_test ...passed 00:03:35.747 Test: blob_decouple_snapshot ...passed 00:03:35.747 Test: blob_seek_io_unit ...passed 00:03:36.005 Test: blob_nested_freezes ...passed 00:03:36.005 Suite: blob_blob_nocopy_extent 00:03:36.005 Test: blob_write ...passed 00:03:36.005 Test: blob_read ...passed 00:03:36.005 Test: blob_rw_verify ...passed 00:03:36.005 Test: blob_rw_verify_iov_nomem ...passed 00:03:36.005 Test: blob_rw_iov_read_only ...passed 00:03:36.005 Test: blob_xattr ...passed 00:03:36.005 Test: blob_dirty_shutdown ...passed 00:03:36.005 Test: blob_is_degraded ...passed 00:03:36.005 Suite: blob_esnap_bs_nocopy_extent 00:03:36.265 Test: blob_esnap_create ...passed 00:03:36.265 Test: blob_esnap_thread_add_remove ...passed 00:03:36.265 Test: blob_esnap_clone_snapshot ...passed 00:03:36.265 Test: blob_esnap_clone_inflate ...passed 00:03:36.265 Test: blob_esnap_clone_decouple ...passed 00:03:36.265 Test: blob_esnap_clone_reload ...passed 00:03:36.265 Test: blob_esnap_hotplug ...passed 00:03:36.265 Suite: blob_copy_noextent 00:03:36.265 Test: blob_init ...[2024-05-13 05:55:44.503958] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:36.265 passed 00:03:36.265 Test: blob_thin_provision ...passed 00:03:36.265 Test: blob_read_only ...passed 00:03:36.265 Test: bs_load ...passed 00:03:36.265 Test: bs_load_custom_cluster_size ...[2024-05-13 05:55:44.540918] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:36.265 passed 00:03:36.265 Test: bs_load_after_failed_grow ...passed 00:03:36.265 Test: bs_cluster_sz ...[2024-05-13 05:55:44.559501] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:36.265 [2024-05-13 05:55:44.559551] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
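The "cannot use more clusters than is available" message that keeps appearing here is capacity accounting: reserved metadata pages occupy whole clusters, and that reservation must fit on the backing device. Roughly, as illustrative arithmetic inferred from the error text rather than the exact SPDK code (cluster_sz and page_sz assumed already validated nonzero):

    #include <stdint.h>
    #include <stdio.h>

    static int bs_md_fits(uint64_t dev_bytes, uint32_t cluster_sz,
                          uint32_t num_md_pages, uint32_t page_sz)
    {
        uint64_t total_clusters    = dev_bytes / cluster_sz;
        uint64_t pages_per_cluster = cluster_sz / page_sz;
        uint64_t md_clusters =
            (num_md_pages + pages_per_cluster - 1) / pages_per_cluster;

        /* else: "decrease number of pages reserved for metadata
           or increase cluster size" */
        return md_clusters <= total_clusters;
    }

    int main(void)
    {
        /* 1 MiB device with 4 KiB clusters -> 256 clusters total */
        printf("%d\n", bs_md_fits(1024 * 1024, 4096, 100, 4096));  /* 1: fits */
        printf("%d\n", bs_md_fits(1024 * 1024, 4096, 1000, 4096)); /* 0: too many */
        return 0;
    }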
00:03:36.265 [2024-05-13 05:55:44.559577] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:36.265 passed 00:03:36.525 Test: bs_resize_md ...passed 00:03:36.525 Test: bs_destroy ...passed 00:03:36.525 Test: bs_type ...passed 00:03:36.525 Test: bs_super_block ...passed 00:03:36.525 Test: bs_test_recover_cluster_count ...passed 00:03:36.525 Test: bs_grow_live ...passed 00:03:36.525 Test: bs_grow_live_no_space ...passed 00:03:36.525 Test: bs_test_grow ...passed 00:03:36.525 Test: blob_serialize_test ...passed 00:03:36.525 Test: super_block_crc ...passed 00:03:36.525 Test: blob_thin_prov_write_count_io ...passed 00:03:36.525 Test: bs_load_iter_test ...passed 00:03:36.525 Test: blob_relations ...[2024-05-13 05:55:44.671318] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:36.525 [2024-05-13 05:55:44.671379] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:36.525 [2024-05-13 05:55:44.671433] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:36.525 [2024-05-13 05:55:44.671440] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:36.525 passed 00:03:36.525 Test: blob_relations2 ...[2024-05-13 05:55:44.681463] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:36.525 [2024-05-13 05:55:44.681506] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:36.525 [2024-05-13 05:55:44.681513] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:36.525 [2024-05-13 05:55:44.681518] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:36.525 [2024-05-13 05:55:44.681594] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:36.525 [2024-05-13 05:55:44.681601] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:36.525 [2024-05-13 05:55:44.681630] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:36.525 [2024-05-13 05:55:44.681650] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:36.525 passed 00:03:36.525 Test: blob_relations3 ...passed 00:03:36.525 Test: blobstore_clean_power_failure ...passed 00:03:36.525 Test: blob_delete_snapshot_power_failure ...[2024-05-13 05:55:44.811298] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:36.525 [2024-05-13 05:55:44.820850] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:36.525 [2024-05-13 05:55:44.820893] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:36.525 [2024-05-13 
05:55:44.820900] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:36.525 [2024-05-13 05:55:44.830329] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:36.525 [2024-05-13 05:55:44.830365] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:36.525 [2024-05-13 05:55:44.830388] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:36.525 [2024-05-13 05:55:44.830394] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:36.785 [2024-05-13 05:55:44.839797] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:36.785 [2024-05-13 05:55:44.839823] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:36.785 [2024-05-13 05:55:44.849335] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:36.785 [2024-05-13 05:55:44.849372] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:36.785 [2024-05-13 05:55:44.858796] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:36.786 [2024-05-13 05:55:44.858832] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:36.786 passed 00:03:36.786 Test: blob_create_snapshot_power_failure ...[2024-05-13 05:55:44.886681] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:36.786 [2024-05-13 05:55:44.905273] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:36.786 [2024-05-13 05:55:44.914718] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:36.786 passed 00:03:36.786 Test: blob_io_unit ...passed 00:03:36.786 Test: blob_io_unit_compatibility ...passed 00:03:36.786 Test: blob_ext_md_pages ...passed 00:03:36.786 Test: blob_esnap_io_4096_4096 ...passed 00:03:36.786 Test: blob_esnap_io_512_512 ...passed 00:03:36.786 Test: blob_esnap_io_4096_512 ...passed 00:03:36.786 Test: blob_esnap_io_512_4096 ...passed 00:03:36.786 Suite: blob_bs_copy_noextent 00:03:36.786 Test: blob_open ...passed 00:03:37.046 Test: blob_create ...[2024-05-13 05:55:45.100765] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:37.046 passed 00:03:37.046 Test: blob_create_loop ...passed 00:03:37.046 Test: blob_create_fail ...[2024-05-13 05:55:45.168698] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:37.046 passed 00:03:37.046 Test: blob_create_internal ...passed 00:03:37.046 Test: blob_create_zero_extent ...passed 00:03:37.046 Test: blob_snapshot ...passed 00:03:37.046 Test: blob_clone ...passed 00:03:37.046 
Test: blob_inflate ...[2024-05-13 05:55:45.311868] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:37.046 passed 00:03:37.046 Test: blob_delete ...passed 00:03:37.306 Test: blob_resize_test ...[2024-05-13 05:55:45.366430] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:37.306 passed 00:03:37.306 Test: channel_ops ...passed 00:03:37.306 Test: blob_super ...passed 00:03:37.306 Test: blob_rw_verify_iov ...passed 00:03:37.306 Test: blob_unmap ...passed 00:03:37.306 Test: blob_iter ...passed 00:03:37.306 Test: blob_parse_md ...passed 00:03:37.306 Test: bs_load_pending_removal ...passed 00:03:37.306 Test: bs_unload ...[2024-05-13 05:55:45.587088] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:37.306 passed 00:03:37.566 Test: bs_usable_clusters ...passed 00:03:37.566 Test: blob_crc ...[2024-05-13 05:55:45.642118] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:37.566 [2024-05-13 05:55:45.642174] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:37.566 passed 00:03:37.566 Test: blob_flags ...passed 00:03:37.566 Test: bs_version ...passed 00:03:37.566 Test: blob_set_xattrs_test ...[2024-05-13 05:55:45.725446] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:37.566 [2024-05-13 05:55:45.725508] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:37.566 passed 00:03:37.566 Test: blob_thin_prov_alloc ...passed 00:03:37.566 Test: blob_insert_cluster_msg_test ...passed 00:03:37.566 Test: blob_thin_prov_rw ...passed 00:03:37.566 Test: blob_thin_prov_rle ...passed 00:03:37.825 Test: blob_thin_prov_rw_iov ...passed 00:03:37.825 Test: blob_snapshot_rw ...passed 00:03:37.825 Test: blob_snapshot_rw_iov ...passed 00:03:37.825 Test: blob_inflate_rw ...passed 00:03:37.825 Test: blob_snapshot_freeze_io ...passed 00:03:37.825 Test: blob_operation_split_rw ...passed 00:03:37.825 Test: blob_operation_split_rw_iov ...passed 00:03:37.825 Test: blob_simultaneous_operations ...[2024-05-13 05:55:46.133009] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:37.825 [2024-05-13 05:55:46.133073] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:37.825 [2024-05-13 05:55:46.133321] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:37.825 [2024-05-13 05:55:46.133336] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:37.825 [2024-05-13 05:55:46.135269] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:37.825 [2024-05-13 05:55:46.135295] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:37.825 [2024-05-13 
05:55:46.135309] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:37.825 [2024-05-13 05:55:46.135315] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.084 passed 00:03:38.084 Test: blob_persist_test ...passed 00:03:38.084 Test: blob_decouple_snapshot ...passed 00:03:38.084 Test: blob_seek_io_unit ...passed 00:03:38.084 Test: blob_nested_freezes ...passed 00:03:38.084 Suite: blob_blob_copy_noextent 00:03:38.084 Test: blob_write ...passed 00:03:38.084 Test: blob_read ...passed 00:03:38.084 Test: blob_rw_verify ...passed 00:03:38.084 Test: blob_rw_verify_iov_nomem ...passed 00:03:38.344 Test: blob_rw_iov_read_only ...passed 00:03:38.344 Test: blob_xattr ...passed 00:03:38.344 Test: blob_dirty_shutdown ...passed 00:03:38.344 Test: blob_is_degraded ...passed 00:03:38.344 Suite: blob_esnap_bs_copy_noextent 00:03:38.344 Test: blob_esnap_create ...passed 00:03:38.344 Test: blob_esnap_thread_add_remove ...passed 00:03:38.344 Test: blob_esnap_clone_snapshot ...passed 00:03:38.344 Test: blob_esnap_clone_inflate ...passed 00:03:38.344 Test: blob_esnap_clone_decouple ...passed 00:03:38.604 Test: blob_esnap_clone_reload ...passed 00:03:38.604 Test: blob_esnap_hotplug ...passed 00:03:38.604 Suite: blob_copy_extent 00:03:38.604 Test: blob_init ...[2024-05-13 05:55:46.686450] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5268:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:38.604 passed 00:03:38.604 Test: blob_thin_provision ...passed 00:03:38.604 Test: blob_read_only ...passed 00:03:38.604 Test: bs_load ...passed 00:03:38.604 Test: bs_load_custom_cluster_size ...[2024-05-13 05:55:46.723506] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 897:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:38.604 passed 00:03:38.604 Test: bs_load_after_failed_grow ...passed 00:03:38.604 Test: bs_cluster_sz ...[2024-05-13 05:55:46.742606] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:38.604 [2024-05-13 05:55:46.742653] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5400:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
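The "unsupported dev block length of 500" rejection at each blob_init in these suites is a device-geometry gate; the evident rule, inferred from the log rather than quoted from the source, is that a 4096-byte metadata page must span a whole number of device blocks:

    #include <stdint.h>
    #include <stdio.h>

    static int bs_dev_blocklen_supported(uint32_t blocklen)
    {
        /* assumption: page size (4096) must be a multiple of the
           device block length, so 500 is rejected while 512 is fine */
        return blocklen != 0 && 4096u % blocklen == 0;
    }

    int main(void)
    {
        printf("%d\n", bs_dev_blocklen_supported(512)); /* 1: eight blocks/page */
        printf("%d\n", bs_dev_blocklen_supported(500)); /* 0: rejected, as logged */
        return 0;
    }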
00:03:38.604 [2024-05-13 05:55:46.742663] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3663:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:38.604 passed 00:03:38.604 Test: bs_resize_md ...passed 00:03:38.604 Test: bs_destroy ...passed 00:03:38.604 Test: bs_type ...passed 00:03:38.604 Test: bs_super_block ...passed 00:03:38.604 Test: bs_test_recover_cluster_count ...passed 00:03:38.604 Test: bs_grow_live ...passed 00:03:38.604 Test: bs_grow_live_no_space ...passed 00:03:38.604 Test: bs_test_grow ...passed 00:03:38.604 Test: blob_serialize_test ...passed 00:03:38.604 Test: super_block_crc ...passed 00:03:38.604 Test: blob_thin_prov_write_count_io ...passed 00:03:38.604 Test: bs_load_iter_test ...passed 00:03:38.604 Test: blob_relations ...[2024-05-13 05:55:46.854637] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:38.604 [2024-05-13 05:55:46.854702] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.604 [2024-05-13 05:55:46.854765] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:38.604 [2024-05-13 05:55:46.854772] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.604 passed 00:03:38.604 Test: blob_relations2 ...[2024-05-13 05:55:46.864770] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:38.604 [2024-05-13 05:55:46.864829] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.604 [2024-05-13 05:55:46.864837] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:38.604 [2024-05-13 05:55:46.864842] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.604 [2024-05-13 05:55:46.864928] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:38.604 [2024-05-13 05:55:46.864936] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.604 [2024-05-13 05:55:46.864964] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:38.604 [2024-05-13 05:55:46.864971] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.604 passed 00:03:38.604 Test: blob_relations3 ...passed 00:03:38.864 Test: blobstore_clean_power_failure ...passed 00:03:38.864 Test: blob_delete_snapshot_power_failure ...[2024-05-13 05:55:46.994732] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:38.864 [2024-05-13 05:55:47.004159] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:38.864 [2024-05-13 05:55:47.013590] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:38.864 [2024-05-13 
05:55:47.013634] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:38.864 [2024-05-13 05:55:47.013641] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.864 [2024-05-13 05:55:47.023018] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:38.864 [2024-05-13 05:55:47.023060] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:38.864 [2024-05-13 05:55:47.023067] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:38.864 [2024-05-13 05:55:47.023073] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.864 [2024-05-13 05:55:47.032421] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:38.864 [2024-05-13 05:55:47.032447] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:38.864 [2024-05-13 05:55:47.032454] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:38.864 [2024-05-13 05:55:47.032460] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.864 [2024-05-13 05:55:47.041905] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:38.864 [2024-05-13 05:55:47.041932] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.864 [2024-05-13 05:55:47.051450] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:38.864 [2024-05-13 05:55:47.051482] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.864 [2024-05-13 05:55:47.060990] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:38.864 [2024-05-13 05:55:47.061031] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:38.864 passed 00:03:38.864 Test: blob_create_snapshot_power_failure ...[2024-05-13 05:55:47.089177] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:38.864 [2024-05-13 05:55:47.098534] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:38.864 [2024-05-13 05:55:47.117414] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1601:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:38.864 [2024-05-13 05:55:47.126925] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:38.864 passed 00:03:38.864 Test: blob_io_unit ...passed 00:03:38.864 Test: blob_io_unit_compatibility ...passed 00:03:39.123 Test: blob_ext_md_pages ...passed 00:03:39.123 Test: blob_esnap_io_4096_4096 ...passed 00:03:39.123 Test: 
blob_esnap_io_512_512 ...passed 00:03:39.123 Test: blob_esnap_io_4096_512 ...passed 00:03:39.123 Test: blob_esnap_io_512_4096 ...passed 00:03:39.123 Suite: blob_bs_copy_extent 00:03:39.123 Test: blob_open ...passed 00:03:39.123 Test: blob_create ...[2024-05-13 05:55:47.304645] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:39.123 passed 00:03:39.123 Test: blob_create_loop ...passed 00:03:39.123 Test: blob_create_fail ...[2024-05-13 05:55:47.371542] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:39.123 passed 00:03:39.123 Test: blob_create_internal ...passed 00:03:39.381 Test: blob_create_zero_extent ...passed 00:03:39.381 Test: blob_snapshot ...passed 00:03:39.381 Test: blob_clone ...passed 00:03:39.381 Test: blob_inflate ...[2024-05-13 05:55:47.514213] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:39.381 passed 00:03:39.381 Test: blob_delete ...passed 00:03:39.381 Test: blob_resize_test ...[2024-05-13 05:55:47.568892] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:39.381 passed 00:03:39.381 Test: channel_ops ...passed 00:03:39.381 Test: blob_super ...passed 00:03:39.381 Test: blob_rw_verify_iov ...passed 00:03:39.381 Test: blob_unmap ...passed 00:03:39.640 Test: blob_iter ...passed 00:03:39.640 Test: blob_parse_md ...passed 00:03:39.640 Test: bs_load_pending_removal ...passed 00:03:39.640 Test: bs_unload ...[2024-05-13 05:55:47.789658] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:39.640 passed 00:03:39.640 Test: bs_usable_clusters ...passed 00:03:39.640 Test: blob_crc ...[2024-05-13 05:55:47.844908] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:39.640 [2024-05-13 05:55:47.844966] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1610:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:39.640 passed 00:03:39.640 Test: blob_flags ...passed 00:03:39.640 Test: bs_version ...passed 00:03:39.640 Test: blob_set_xattrs_test ...[2024-05-13 05:55:47.928304] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:39.640 [2024-05-13 05:55:47.928384] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6097:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:39.640 passed 00:03:39.899 Test: blob_thin_prov_alloc ...passed 00:03:39.899 Test: blob_insert_cluster_msg_test ...passed 00:03:39.899 Test: blob_thin_prov_rw ...passed 00:03:39.899 Test: blob_thin_prov_rle ...passed 00:03:39.899 Test: blob_thin_prov_rw_iov ...passed 00:03:39.899 Test: blob_snapshot_rw ...passed 00:03:39.899 Test: blob_snapshot_rw_iov ...passed 00:03:39.899 Test: blob_inflate_rw ...passed 00:03:40.158 Test: blob_snapshot_freeze_io ...passed 00:03:40.158 Test: blob_operation_split_rw ...passed 00:03:40.158 Test: blob_operation_split_rw_iov ...passed 00:03:40.158 Test: blob_simultaneous_operations ...[2024-05-13 05:55:48.334728] 
/usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:40.158 [2024-05-13 05:55:48.334795] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:40.158 [2024-05-13 05:55:48.335030] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:40.158 [2024-05-13 05:55:48.335044] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:40.158 [2024-05-13 05:55:48.336959] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:40.158 [2024-05-13 05:55:48.336981] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:40.158 [2024-05-13 05:55:48.336996] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:40.158 [2024-05-13 05:55:48.337003] /usr/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:40.158 passed 00:03:40.158 Test: blob_persist_test ...passed 00:03:40.158 Test: blob_decouple_snapshot ...passed 00:03:40.158 Test: blob_seek_io_unit ...passed 00:03:40.158 Test: blob_nested_freezes ...passed 00:03:40.158 Suite: blob_blob_copy_extent 00:03:40.417 Test: blob_write ...passed 00:03:40.417 Test: blob_read ...passed 00:03:40.417 Test: blob_rw_verify ...passed 00:03:40.417 Test: blob_rw_verify_iov_nomem ...passed 00:03:40.417 Test: blob_rw_iov_read_only ...passed 00:03:40.417 Test: blob_xattr ...passed 00:03:40.417 Test: blob_dirty_shutdown ...passed 00:03:40.417 Test: blob_is_degraded ...passed 00:03:40.417 Suite: blob_esnap_bs_copy_extent 00:03:40.417 Test: blob_esnap_create ...passed 00:03:40.679 Test: blob_esnap_thread_add_remove ...passed 00:03:40.679 Test: blob_esnap_clone_snapshot ...passed 00:03:40.679 Test: blob_esnap_clone_inflate ...passed 00:03:40.679 Test: blob_esnap_clone_decouple ...passed 00:03:40.679 Test: blob_esnap_clone_reload ...passed 00:03:40.679 Test: blob_esnap_hotplug ...passed 00:03:40.679 00:03:40.679 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.679 suites 16 16 n/a 0 0 00:03:40.679 tests 348 348 348 0 0 00:03:40.679 asserts 92605 92605 92605 0 n/a 00:03:40.679 00:03:40.679 Elapsed time = 8.836 seconds 00:03:40.679 05:55:48 -- unit/unittest.sh@41 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:03:40.679 00:03:40.679 00:03:40.679 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.679 http://cunit.sourceforge.net/ 00:03:40.679 00:03:40.679 00:03:40.679 Suite: blob_bdev 00:03:40.679 Test: create_bs_dev ...passed 00:03:40.679 Test: create_bs_dev_ro ...[2024-05-13 05:55:48.899529] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:03:40.679 passed 00:03:40.679 Test: create_bs_dev_rw ...passed 00:03:40.679 Test: claim_bs_dev ...[2024-05-13 05:55:48.899959] /usr/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:03:40.679 passed 00:03:40.679 Test: claim_bs_dev_ro ...passed 00:03:40.679 Test: deferred_destroy_refs ...passed 00:03:40.680 Test: deferred_destroy_channels ...passed 
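[editor's note] The claim_bs_dev failure logged just above ("could not claim bs dev") is another expected negative path: the suite claims the bs_dev once, then asserts that a second claim is rejected. A hedged CUnit-style skeleton of that pattern; the claim helper is a hypothetical stand-in for the module under test, not the actual blob_bdev source:

#include <CUnit/Basic.h>

/* Hypothetical stand-in: first claim succeeds, later claims are denied. */
static int
claim_bs_dev(void *bs_dev)
{
	static int claimed;

	return claimed++ ? -1 : 0;
}

static void
claim_bs_dev_test(void)
{
	void *dev = (void *)0x1;

	CU_ASSERT(claim_bs_dev(dev) == 0);  /* first claim succeeds        */
	CU_ASSERT(claim_bs_dev(dev) != 0);  /* second claim must be denied */
}

int
main(void)
{
	CU_pSuite suite;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}
	suite = CU_add_suite("blob_bdev", NULL, NULL);
	if (suite != NULL) {
		CU_add_test(suite, "claim_bs_dev", claim_bs_dev_test);
	}
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();
	CU_cleanup_registry();
	return CU_get_error();
}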
00:03:40.680 Test: deferred_destroy_threads ...passed 00:03:40.680 00:03:40.680 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.680 suites 1 1 n/a 0 0 00:03:40.680 tests 8 8 8 0 0 00:03:40.680 asserts 119 119 119 0 n/a 00:03:40.680 00:03:40.680 Elapsed time = 0.000 seconds 00:03:40.680 05:55:48 -- unit/unittest.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:03:40.680 00:03:40.680 00:03:40.680 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.680 http://cunit.sourceforge.net/ 00:03:40.680 00:03:40.680 00:03:40.680 Suite: tree 00:03:40.680 Test: blobfs_tree_op_test ...passed 00:03:40.680 00:03:40.680 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.680 suites 1 1 n/a 0 0 00:03:40.680 tests 1 1 1 0 0 00:03:40.680 asserts 27 27 27 0 n/a 00:03:40.680 00:03:40.680 Elapsed time = 0.000 seconds 00:03:40.680 05:55:48 -- unit/unittest.sh@43 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:03:40.680 00:03:40.680 00:03:40.680 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.680 http://cunit.sourceforge.net/ 00:03:40.680 00:03:40.680 00:03:40.680 Suite: blobfs_async_ut 00:03:40.680 Test: fs_init ...passed 00:03:40.680 Test: fs_open ...passed 00:03:40.680 Test: fs_create ...passed 00:03:40.680 Test: fs_truncate ...passed 00:03:40.939 Test: fs_rename ...passed 00:03:40.939 Test: fs_rw_async ...[2024-05-13 05:55:48.991908] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:03:40.939 passed 00:03:40.939 Test: fs_writev_readv_async ...passed 00:03:40.939 Test: tree_find_buffer_ut ...passed 00:03:40.939 Test: channel_ops ...passed 00:03:40.939 Test: channel_ops_sync ...passed 00:03:40.939 00:03:40.939 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.939 suites 1 1 n/a 0 0 00:03:40.939 tests 10 10 10 0 0 00:03:40.939 asserts 292 292 292 0 n/a 00:03:40.939 00:03:40.939 Elapsed time = 0.102 seconds 00:03:40.939 05:55:49 -- unit/unittest.sh@45 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:03:40.939 00:03:40.939 00:03:40.939 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.939 http://cunit.sourceforge.net/ 00:03:40.939 00:03:40.939 00:03:40.939 Suite: blobfs_sync_ut 00:03:40.939 Test: cache_read_after_write ...passed 00:03:40.939 Test: file_length ...[2024-05-13 05:55:49.089694] /usr/home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:03:40.939 passed 00:03:40.939 Test: append_write_to_extend_blob ...passed 00:03:40.939 Test: partial_buffer ...passed 00:03:40.939 Test: cache_write_null_buffer ...passed 00:03:40.939 Test: fs_create_sync ...passed 00:03:40.939 Test: fs_rename_sync ...passed 00:03:40.939 Test: cache_append_no_cache ...passed 00:03:40.939 Test: fs_delete_file_without_close ...passed 00:03:40.939 00:03:40.939 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.939 suites 1 1 n/a 0 0 00:03:40.939 tests 9 9 9 0 0 00:03:40.939 asserts 345 345 345 0 n/a 00:03:40.939 00:03:40.939 Elapsed time = 0.242 seconds 00:03:40.939 05:55:49 -- unit/unittest.sh@46 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:03:40.939 00:03:40.939 00:03:40.939 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.939 http://cunit.sourceforge.net/ 00:03:40.939 00:03:40.939 00:03:40.939 Suite: blobfs_bdev_ut 
00:03:40.939 Test: spdk_blobfs_bdev_detect_test ...[2024-05-13 05:55:49.178468] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:40.939 passed 00:03:40.939 Test: spdk_blobfs_bdev_create_test ...[2024-05-13 05:55:49.178950] /usr/home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:40.939 passed 00:03:40.939 Test: spdk_blobfs_bdev_mount_test ...passed 00:03:40.939 00:03:40.939 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.939 suites 1 1 n/a 0 0 00:03:40.939 tests 3 3 3 0 0 00:03:40.939 asserts 9 9 9 0 n/a 00:03:40.939 00:03:40.939 Elapsed time = 0.000 seconds 00:03:40.939 00:03:40.939 real 0m9.149s 00:03:40.939 user 0m9.085s 00:03:40.939 sys 0m0.185s 00:03:40.939 05:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.939 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:40.939 ************************************ 00:03:40.939 END TEST unittest_blob_blobfs 00:03:40.939 ************************************ 00:03:40.939 05:55:49 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:03:40.939 05:55:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:40.939 05:55:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.939 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:40.939 ************************************ 00:03:40.939 START TEST unittest_event 00:03:40.939 ************************************ 00:03:40.939 05:55:49 -- common/autotest_common.sh@1104 -- # unittest_event 00:03:40.939 05:55:49 -- unit/unittest.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:03:40.939 00:03:40.939 00:03:40.939 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.939 http://cunit.sourceforge.net/ 00:03:40.939 00:03:40.939 00:03:40.939 Suite: app_suite 00:03:40.939 Test: test_spdk_app_parse_args ...app_ut [options] 00:03:40.939 options: 00:03:40.939 -c, --config JSON config file (default none) 00:03:40.939 --json JSON config file (default none) 00:03:40.939 --json-ignore-init-errors 00:03:40.939 don't exit on invalid config entry 00:03:40.939 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:40.939 -g, --single-file-segments 00:03:40.939 app_ut: invalid option -- z 00:03:40.939 force creating just one hugetlbfs file 00:03:40.939 -h, --help show this usage 00:03:40.939 -i, --shm-id shared memory ID (optional) 00:03:40.939 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:03:40.939 --lcores lcore to CPU mapping list. The list is in the format: 00:03:40.939 [<,lcores[@CPUs]>...] 00:03:40.939 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:40.939 Within the group, '-' is used for range separator, 00:03:40.939 ',' is used for single number separator. 00:03:40.939 '( )' can be omitted for single element group, 00:03:40.939 '@' can be omitted if cpus and lcores have the same value 00:03:40.939 -n, --mem-channels channel number of memory channels used for DPDK 00:03:40.939 -p, --main-core main (primary) core for DPDK 00:03:40.939 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:40.939 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:40.939 --disable-cpumask-locks Disable CPU core lock files. 
00:03:40.939 --silence-noticelog disable notice level logging to stderr 00:03:40.939 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:40.939 -u, --no-pci disable PCI access 00:03:40.939 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:40.939 --max-delay maximum reactor delay (in microseconds) 00:03:40.939 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:40.939 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:40.939 -R, --huge-unlink unlink huge files after initialization 00:03:40.939 -v, --version print SPDK version 00:03:40.939 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:40.939 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:40.939 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:40.939 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:03:40.939 Tracepoints vary in size and can use more than one trace entry. 00:03:40.939 --rpcs-allowed comma-separated list of permitted RPCS 00:03:40.939 --env-context Opaque context for use of the env implementation 00:03:40.939 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:40.939 --no-huge run without using hugepages 00:03:40.939 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:03:40.939 -e, --tpoint-group [:] 00:03:40.939 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:03:40.939 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:03:40.939 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:03:40.939 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:03:40.939 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:03:40.939 app_ut [options] 00:03:40.939 options: 00:03:40.939 -c, --config JSON config file (default none) 00:03:40.939 --json JSON config file (default none) 00:03:40.939 --json-ignore-init-errors 00:03:40.939 don't exit on invalid config entry 00:03:40.939 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:40.939 -g, --single-file-segments 00:03:40.939 force creating just one hugetlbfs file 00:03:40.939 -h, --help show this usage 00:03:40.939 -i, --shm-id shared memory ID (optional) 00:03:40.939 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:03:40.939 app_ut: unrecognized option `--test-long-opt' 00:03:40.939 --lcores lcore to CPU mapping list. The list is in the format: 00:03:40.939 [<,lcores[@CPUs]>...] 00:03:40.939 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:40.939 Within the group, '-' is used for range separator, 00:03:40.940 ',' is used for single number separator. 
00:03:40.940 '( )' can be omitted for single element group, 00:03:40.940 '@' can be omitted if cpus and lcores have the same value 00:03:40.940 -n, --mem-channels channel number of memory channels used for DPDK 00:03:40.940 -p, --main-core main (primary) core for DPDK 00:03:40.940 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:40.940 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:40.940 --disable-cpumask-locks Disable CPU core lock files. 00:03:40.940 --silence-noticelog disable notice level logging to stderr 00:03:40.940 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:40.940 -u, --no-pci disable PCI access 00:03:40.940 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:40.940 --max-delay maximum reactor delay (in microseconds) 00:03:40.940 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:40.940 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:40.940 -R, --huge-unlink unlink huge files after initialization 00:03:40.940 -v, --version print SPDK version 00:03:40.940 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:40.940 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:40.940 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:40.940 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:03:40.940 Tracepoints vary in size and can use more than one trace entry. 00:03:40.940 --rpcs-allowed comma-separated list of permitted RPCS 00:03:40.940 --env-context Opaque context for use of the env implementation 00:03:40.940 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:40.940 --no-huge run without using hugepages 00:03:40.940 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:03:40.940 -e, --tpoint-group [:] 00:03:40.940 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:03:40.940 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:03:40.940 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:03:40.940 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:03:40.940 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:03:40.940 [2024-05-13 05:55:49.238290] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1031:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
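[editor's note] The "Duplicated option 'c'" error above is spdk_app_parse_args() rejecting an app-specific getopt string that collides with a generic SPDK flag (-c is the JSON config file). A minimal sketch of how an application would trip that check; the parse/usage callbacks are illustrative, and the two-argument spdk_app_opts_init() is an assumption (older releases take only the pointer):

#include "spdk/event.h"

static int
app_parse(int ch, char *arg)
{
	(void)ch;
	(void)arg;
	return 0;
}

static void
app_usage(void)
{
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "app_ut";

	/* "c:" collides with the generic -c flag, so parsing fails with
	 * "Duplicated option 'c' between app-specific command line parameter
	 * and generic spdk opts." before the app ever starts. */
	if (spdk_app_parse_args(argc, argv, &opts, "c:", NULL,
				app_parse, app_usage) != SPDK_APP_PARSE_ARGS_SUCCESS) {
		return 1;
	}
	return 0;
}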
00:03:40.940 app_ut [options] 00:03:40.940 options: 00:03:40.940 -c, --config JSON config file (default none) 00:03:40.940 --json JSON config file (default none) 00:03:40.940 --json-ignore-init-errors 00:03:40.940 don't exit on invalid config entry 00:03:40.940 [2024-05-13 05:55:49.238782] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:03:40.940 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:40.940 -g, --single-file-segments 00:03:40.940 force creating just one hugetlbfs file 00:03:40.940 -h, --help show this usage 00:03:40.940 -i, --shm-id shared memory ID (optional) 00:03:40.940 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:03:40.940 --lcores lcore to CPU mapping list. The list is in the format: 00:03:40.940 [<,lcores[@CPUs]>...] 00:03:40.940 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:40.940 Within the group, '-' is used for range separator, 00:03:40.940 ',' is used for single number separator. 00:03:40.940 '( )' can be omitted for single element group, 00:03:40.940 '@' can be omitted if cpus and lcores have the same value 00:03:40.940 -n, --mem-channels channel number of memory channels used for DPDK 00:03:40.940 -p, --main-core main (primary) core for DPDK 00:03:40.940 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:40.940 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:40.940 --disable-cpumask-locks Disable CPU core lock files. 00:03:40.940 --silence-noticelog disable notice level logging to stderr 00:03:40.940 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:40.940 -u, --no-pci disable PCI access 00:03:40.940 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:40.940 --max-delay maximum reactor delay (in microseconds) 00:03:40.940 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:40.940 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:40.940 -R, --huge-unlink unlink huge files after initialization 00:03:40.940 -v, --version print SPDK version 00:03:40.940 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:40.940 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:40.940 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:40.940 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:03:40.940 Tracepoints vary in size and can use more than one trace entry. 00:03:40.940 --rpcs-allowed comma-separated list of permitted RPCS 00:03:40.940 --env-context Opaque context for use of the env implementation 00:03:40.940 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:40.940 --no-huge run without using hugepages 00:03:40.940 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:03:40.940 -e, --tpoint-group [:] 00:03:40.940 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:03:40.940 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:03:40.940 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:03:40.940 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:03:40.940 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:03:40.940 [2024-05-13 05:55:49.239020] /usr/home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:03:40.940 passed 00:03:40.940 00:03:40.940 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.940 suites 1 1 n/a 0 0 00:03:40.940 tests 1 1 1 0 0 00:03:40.940 asserts 8 8 8 0 n/a 00:03:40.940 00:03:40.940 Elapsed time = 0.000 seconds 00:03:40.940 05:55:49 -- unit/unittest.sh@51 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:03:40.940 00:03:40.940 00:03:40.940 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.940 http://cunit.sourceforge.net/ 00:03:40.940 00:03:40.940 00:03:40.940 Suite: app_suite 00:03:40.940 Test: test_create_reactor ...passed 00:03:40.940 Test: test_init_reactors ...passed 00:03:40.940 Test: test_event_call ...passed 00:03:40.940 Test: test_schedule_thread ...passed 00:03:40.940 Test: test_reschedule_thread ...passed 00:03:41.200 Test: test_bind_thread ...passed 00:03:41.200 Test: test_for_each_reactor ...passed 00:03:41.200 Test: test_reactor_stats ...passed 00:03:41.200 Test: test_scheduler ...passed 00:03:41.200 Test: test_governor ...passed 00:03:41.200 00:03:41.200 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.200 suites 1 1 n/a 0 0 00:03:41.200 tests 10 10 10 0 0 00:03:41.200 asserts 336 336 336 0 n/a 00:03:41.200 00:03:41.200 Elapsed time = 0.000 seconds 00:03:41.200 00:03:41.200 real 0m0.023s 00:03:41.200 user 0m0.006s 00:03:41.200 sys 0m0.016s 00:03:41.200 05:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.200 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.200 ************************************ 00:03:41.200 END TEST unittest_event 00:03:41.200 ************************************ 00:03:41.200 05:55:49 -- unit/unittest.sh@233 -- # uname -s 00:03:41.200 05:55:49 -- unit/unittest.sh@233 -- # '[' FreeBSD = Linux ']' 00:03:41.200 05:55:49 -- unit/unittest.sh@237 -- # run_test unittest_accel /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:41.200 05:55:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.200 05:55:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.200 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.200 ************************************ 00:03:41.200 START TEST unittest_accel 00:03:41.200 ************************************ 00:03:41.200 05:55:49 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:41.200 00:03:41.200 00:03:41.200 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.200 http://cunit.sourceforge.net/ 00:03:41.200 00:03:41.200 00:03:41.200 Suite: accel_sequence 00:03:41.200 Test: test_sequence_fill_copy ...passed 00:03:41.200 Test: test_sequence_abort ...passed 00:03:41.200 Test: test_sequence_append_error ...passed 00:03:41.200 Test: test_sequence_completion_error ...[2024-05-13 05:55:49.322183] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1927:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82cbaea80 00:03:41.200 [2024-05-13 05:55:49.322569] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1927:accel_sequence_task_cb: *ERROR*: Failed to execute 
decompress operation, sequence: 0x82cbaea80 00:03:41.200 [2024-05-13 05:55:49.322610] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1837:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x82cbaea80 00:03:41.200 [2024-05-13 05:55:49.322630] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1837:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x82cbaea80 00:03:41.200 passed 00:03:41.200 Test: test_sequence_decompress ...passed 00:03:41.200 Test: test_sequence_reverse ...passed 00:03:41.200 Test: test_sequence_copy_elision ...passed 00:03:41.200 Test: test_sequence_accel_buffers ...passed 00:03:41.200 Test: test_sequence_memory_domain ...[2024-05-13 05:55:49.324796] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1729:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:03:41.200 [2024-05-13 05:55:49.324875] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1768:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:03:41.200 passed 00:03:41.200 Test: test_sequence_module_memory_domain ...passed 00:03:41.200 Test: test_sequence_crypto ...passed 00:03:41.200 Test: test_sequence_driver ...[2024-05-13 05:55:49.326083] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1876:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x82cbae480 using driver: ut 00:03:41.200 [2024-05-13 05:55:49.326157] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1941:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x82cbae480 through driver: ut 00:03:41.200 passed 00:03:41.200 Test: test_sequence_same_iovs ...passed 00:03:41.200 Test: test_sequence_crc32 ...passed 00:03:41.200 Suite: accel 00:03:41.200 Test: test_spdk_accel_task_complete ...passed 00:03:41.200 Test: test_get_task ...passed 00:03:41.200 Test: test_spdk_accel_submit_copy ...passed 00:03:41.200 Test: test_spdk_accel_submit_dualcast ...[2024-05-13 05:55:49.326977] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:41.200 [2024-05-13 05:55:49.327005] /usr/home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:41.200 passed 00:03:41.200 Test: test_spdk_accel_submit_compare ...passed 00:03:41.200 Test: test_spdk_accel_submit_fill ...passed 00:03:41.200 Test: test_spdk_accel_submit_crc32c ...passed 00:03:41.200 Test: test_spdk_accel_submit_crc32cv ...passed 00:03:41.200 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:03:41.200 Test: test_spdk_accel_submit_xor ...passed 00:03:41.200 Test: test_spdk_accel_module_find_by_name ...passed 00:03:41.200 Test: test_spdk_accel_module_register ...passed 00:03:41.200 00:03:41.200 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.200 suites 2 2 n/a 0 0 00:03:41.200 tests 26 26 26 0 0 00:03:41.200 asserts 831 831 831 0 n/a 00:03:41.200 00:03:41.200 Elapsed time = 0.016 seconds 00:03:41.200 00:03:41.200 real 0m0.019s 00:03:41.200 user 0m0.008s 00:03:41.200 sys 0m0.016s 00:03:41.200 05:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.200 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.200 ************************************ 00:03:41.200 END TEST unittest_accel 00:03:41.200 ************************************ 00:03:41.200 05:55:49 -- unit/unittest.sh@238 -- # run_test unittest_ioat 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:41.200 05:55:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.200 05:55:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.200 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.200 ************************************ 00:03:41.200 START TEST unittest_ioat 00:03:41.200 ************************************ 00:03:41.200 05:55:49 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:41.200 00:03:41.200 00:03:41.200 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.200 http://cunit.sourceforge.net/ 00:03:41.200 00:03:41.200 00:03:41.200 Suite: ioat 00:03:41.200 Test: ioat_state_check ...passed 00:03:41.200 00:03:41.200 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.200 suites 1 1 n/a 0 0 00:03:41.200 tests 1 1 1 0 0 00:03:41.200 asserts 32 32 32 0 n/a 00:03:41.200 00:03:41.200 Elapsed time = 0.000 seconds 00:03:41.200 00:03:41.200 real 0m0.005s 00:03:41.200 user 0m0.004s 00:03:41.200 sys 0m0.004s 00:03:41.200 05:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.200 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.200 ************************************ 00:03:41.200 END TEST unittest_ioat 00:03:41.200 ************************************ 00:03:41.200 05:55:49 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:41.200 05:55:49 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:41.200 05:55:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.200 05:55:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.200 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.200 ************************************ 00:03:41.200 START TEST unittest_idxd_user 00:03:41.200 ************************************ 00:03:41.200 05:55:49 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:41.200 00:03:41.200 00:03:41.200 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.200 http://cunit.sourceforge.net/ 00:03:41.200 00:03:41.200 00:03:41.200 Suite: idxd_user 00:03:41.200 Test: test_idxd_wait_cmd ...[2024-05-13 05:55:49.425361] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:41.200 passed 00:03:41.200 Test: test_idxd_reset_dev ...[2024-05-13 05:55:49.425735] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:03:41.200 [2024-05-13 05:55:49.425790] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:41.200 passed 00:03:41.200 Test: test_idxd_group_config ...passed 00:03:41.200 Test: test_idxd_wq_config ...passed 00:03:41.200 00:03:41.200 [2024-05-13 05:55:49.425813] /usr/home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:03:41.200 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.200 suites 1 1 n/a 0 0 00:03:41.200 tests 4 4 4 0 0 00:03:41.200 asserts 20 20 20 0 n/a 00:03:41.200 00:03:41.200 Elapsed time = 0.000 seconds 00:03:41.200 00:03:41.200 real 0m0.009s 00:03:41.200 user 0m0.008s 00:03:41.200 sys 0m0.000s 00:03:41.200 05:55:49 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.200 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.200 ************************************ 00:03:41.200 END TEST unittest_idxd_user 00:03:41.200 ************************************ 00:03:41.200 05:55:49 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:03:41.200 05:55:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.200 05:55:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.200 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.200 ************************************ 00:03:41.200 START TEST unittest_iscsi 00:03:41.200 ************************************ 00:03:41.200 05:55:49 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:03:41.200 05:55:49 -- unit/unittest.sh@66 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:03:41.200 00:03:41.200 00:03:41.200 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.200 http://cunit.sourceforge.net/ 00:03:41.200 00:03:41.200 00:03:41.200 Suite: conn_suite 00:03:41.200 Test: read_task_split_in_order_case ...passed 00:03:41.200 Test: read_task_split_reverse_order_case ...passed 00:03:41.200 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:03:41.201 Test: process_non_read_task_completion_test ...passed 00:03:41.201 Test: free_tasks_on_connection ...passed 00:03:41.201 Test: free_tasks_with_queued_datain ...passed 00:03:41.201 Test: abort_queued_datain_task_test ...passed 00:03:41.201 Test: abort_queued_datain_tasks_test ...passed 00:03:41.201 00:03:41.201 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.201 suites 1 1 n/a 0 0 00:03:41.201 tests 8 8 8 0 0 00:03:41.201 asserts 230 230 230 0 n/a 00:03:41.201 00:03:41.201 Elapsed time = 0.000 seconds 00:03:41.201 05:55:49 -- unit/unittest.sh@67 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:03:41.201 00:03:41.201 00:03:41.201 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.201 http://cunit.sourceforge.net/ 00:03:41.201 00:03:41.201 00:03:41.201 Suite: iscsi_suite 00:03:41.201 Test: param_negotiation_test ...passed 00:03:41.201 Test: list_negotiation_test ...passed 00:03:41.201 Test: parse_valid_test ...passed 00:03:41.201 Test: parse_invalid_test ...[2024-05-13 05:55:49.488468] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:03:41.201 [2024-05-13 05:55:49.488836] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:03:41.201 [2024-05-13 05:55:49.488896] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:03:41.201 [2024-05-13 05:55:49.488985] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:03:41.201 [2024-05-13 05:55:49.489041] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:03:41.201 [2024-05-13 05:55:49.489081] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:03:41.201 passed 00:03:41.201 00:03:41.201 [2024-05-13 05:55:49.489120] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:03:41.201 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.201 suites 1 1 n/a 0 0 00:03:41.201 tests 4 4 4 0 0 00:03:41.201 asserts 161 161 161 0 n/a 00:03:41.201 00:03:41.201 Elapsed 
time = 0.000 seconds 00:03:41.201 05:55:49 -- unit/unittest.sh@68 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:03:41.201 00:03:41.201 00:03:41.201 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.201 http://cunit.sourceforge.net/ 00:03:41.201 00:03:41.201 00:03:41.201 Suite: iscsi_target_node_suite 00:03:41.201 Test: add_lun_test_cases ...[2024-05-13 05:55:49.497673] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1249:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:03:41.201 [2024-05-13 05:55:49.498001] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:03:41.201 [2024-05-13 05:55:49.498048] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:41.201 [2024-05-13 05:55:49.498069] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:41.201 passed 00:03:41.201 Test: allow_any_allowed ...passed 00:03:41.201 Test: allow_ipv6_allowed ...[2024-05-13 05:55:49.498087] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:03:41.201 passed 00:03:41.201 Test: allow_ipv6_denied ...passed 00:03:41.201 Test: allow_ipv6_invalid ...passed 00:03:41.201 Test: allow_ipv4_allowed ...passed 00:03:41.201 Test: allow_ipv4_denied ...passed 00:03:41.201 Test: allow_ipv4_invalid ...passed 00:03:41.201 Test: node_access_allowed ...passed 00:03:41.201 Test: node_access_denied_by_empty_netmask ...passed 00:03:41.201 Test: node_access_multi_initiator_groups_cases ...passed 00:03:41.201 Test: allow_iscsi_name_multi_maps_case ...passed 00:03:41.201 Test: chap_param_test_cases ...[2024-05-13 05:55:49.498268] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:03:41.201 [2024-05-13 05:55:49.498296] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:03:41.201 [2024-05-13 05:55:49.498316] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:03:41.201 [2024-05-13 05:55:49.498335] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1036:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:03:41.201 [2024-05-13 05:55:49.498354] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:03:41.201 passed 00:03:41.201 00:03:41.201 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.201 suites 1 1 n/a 0 0 00:03:41.201 tests 13 13 13 0 0 00:03:41.201 asserts 50 50 50 0 n/a 00:03:41.201 00:03:41.201 Elapsed time = 0.000 seconds 00:03:41.201 05:55:49 -- unit/unittest.sh@69 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:03:41.201 00:03:41.201 00:03:41.201 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.201 http://cunit.sourceforge.net/ 00:03:41.201 00:03:41.201 00:03:41.201 Suite: iscsi_suite 00:03:41.201 Test: op_login_check_target_test ...[2024-05-13 05:55:49.509778] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:03:41.201 passed 00:03:41.201 Test: op_login_session_normal_test 
...[2024-05-13 05:55:49.510244] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:41.201 [2024-05-13 05:55:49.510283] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:41.201 [2024-05-13 05:55:49.510311] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:41.201 [2024-05-13 05:55:49.510404] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:03:41.460 [2024-05-13 05:55:49.510445] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:41.460 [2024-05-13 05:55:49.510517] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:03:41.460 [2024-05-13 05:55:49.510545] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1470:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:41.460 passed 00:03:41.460 Test: maxburstlength_test ...[2024-05-13 05:55:49.510681] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:41.461 [2024-05-13 05:55:49.510721] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4551:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:03:41.461 passed 00:03:41.461 Test: underflow_for_read_transfer_test ...passed 00:03:41.461 Test: underflow_for_zero_read_transfer_test ...passed 00:03:41.461 Test: underflow_for_request_sense_test ...passed 00:03:41.461 Test: underflow_for_check_condition_test ...passed 00:03:41.461 Test: add_transfer_task_test ...passed 00:03:41.461 Test: get_transfer_task_test ...passed 00:03:41.461 Test: del_transfer_task_test ...passed 00:03:41.461 Test: clear_all_transfer_tasks_test ...passed 00:03:41.461 Test: build_iovs_test ...passed 00:03:41.461 Test: build_iovs_with_md_test ...passed 00:03:41.461 Test: pdu_hdr_op_login_test ...[2024-05-13 05:55:49.511146] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:03:41.461 [2024-05-13 05:55:49.511191] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1259:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:03:41.461 [2024-05-13 05:55:49.511243] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:03:41.461 passed 00:03:41.461 Test: pdu_hdr_op_text_test ...[2024-05-13 05:55:49.511278] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2241:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:41.461 passed 00:03:41.461 Test: pdu_hdr_op_logout_test ...[2024-05-13 05:55:49.511304] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:03:41.461 [2024-05-13 05:55:49.511331] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2286:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 
00:03:41.461 passed 00:03:41.461 Test: pdu_hdr_op_scsi_test ...[2024-05-13 05:55:49.511363] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2517:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:03:41.461 [2024-05-13 05:55:49.511397] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:41.461 [2024-05-13 05:55:49.511448] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:41.461 [2024-05-13 05:55:49.511495] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:03:41.461 [2024-05-13 05:55:49.511554] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3398:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:41.461 [2024-05-13 05:55:49.511584] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3405:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:03:41.461 [2024-05-13 05:55:49.511633] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:03:41.461 passed 00:03:41.461 Test: pdu_hdr_op_task_mgmt_test ...[2024-05-13 05:55:49.511664] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:03:41.461 [2024-05-13 05:55:49.511714] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:03:41.461 passed 00:03:41.461 Test: pdu_hdr_op_nopout_test ...[2024-05-13 05:55:49.511770] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:03:41.461 [2024-05-13 05:55:49.511799] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:41.461 [2024-05-13 05:55:49.511842] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:41.461 [2024-05-13 05:55:49.511867] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:03:41.461 passed 00:03:41.461 Test: pdu_hdr_op_data_test ...[2024-05-13 05:55:49.511916] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:03:41.461 [2024-05-13 05:55:49.511943] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:03:41.461 [2024-05-13 05:55:49.511969] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:41.461 [2024-05-13 05:55:49.512019] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:03:41.461 [2024-05-13 05:55:49.512064] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:03:41.461 [2024-05-13 05:55:49.512090] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: 
offset(4096) error 00:03:41.461 [2024-05-13 05:55:49.512135] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4245:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:03:41.461 passed 00:03:41.461 Test: empty_text_with_cbit_test ...passed 00:03:41.461 Test: pdu_payload_read_test ...[2024-05-13 05:55:49.512812] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4632:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:03:41.461 passed 00:03:41.461 Test: data_out_pdu_sequence_test ...passed 00:03:41.461 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:03:41.461 00:03:41.461 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.461 suites 1 1 n/a 0 0 00:03:41.461 tests 24 24 24 0 0 00:03:41.461 asserts 150253 150253 150253 0 n/a 00:03:41.461 00:03:41.461 Elapsed time = 0.000 seconds 00:03:41.461 05:55:49 -- unit/unittest.sh@70 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:03:41.461 00:03:41.461 00:03:41.461 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.461 http://cunit.sourceforge.net/ 00:03:41.461 00:03:41.461 00:03:41.461 Suite: init_grp_suite 00:03:41.461 Test: create_initiator_group_success_case ...passed 00:03:41.461 Test: find_initiator_group_success_case ...passed 00:03:41.461 Test: register_initiator_group_twice_case ...passed 00:03:41.461 Test: add_initiator_name_success_case ...passed 00:03:41.461 Test: add_initiator_name_fail_case ...passed 00:03:41.461 Test: delete_all_initiator_names_success_case ...passed 00:03:41.461 Test: add_netmask_success_case ...passed 00:03:41.461 Test: add_netmask_fail_case ...passed 00:03:41.461 Test: delete_all_netmasks_success_case ...passed 00:03:41.461 Test: initiator_name_overwrite_all_to_any_case ...passed 00:03:41.461 Test: netmask_overwrite_all_to_any_case ...passed 00:03:41.461 Test: add_delete_initiator_names_case ...passed 00:03:41.461 Test: add_duplicated_initiator_names_case ...passed 00:03:41.461 Test: delete_nonexisting_initiator_names_case ...passed 00:03:41.461 Test: add_delete_netmasks_case ...passed 00:03:41.461 Test: add_duplicated_netmasks_case ...passed 00:03:41.461 Test: delete_nonexisting_netmasks_case ...passed 00:03:41.461 00:03:41.461 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.461 suites 1 1 n/a 0 0 00:03:41.461 tests 17 17 17 0 0 00:03:41.461 asserts 108 108 108 0 n/a 00:03:41.461 00:03:41.461 Elapsed time = 0.000 seconds 00:03:41.461 [2024-05-13 05:55:49.521451] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:03:41.461 [2024-05-13 05:55:49.521592] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:03:41.461 05:55:49 -- unit/unittest.sh@71 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:03:41.461 00:03:41.461 00:03:41.461 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.461 http://cunit.sourceforge.net/ 00:03:41.461 00:03:41.461 00:03:41.461 Suite: portal_grp_suite 00:03:41.461 Test: portal_create_ipv4_normal_case ...passed 00:03:41.461 Test: portal_create_ipv6_normal_case ...passed 00:03:41.461 Test: portal_create_ipv4_wildcard_case ...passed 00:03:41.461 Test: portal_create_ipv6_wildcard_case ...passed 00:03:41.461 Test: portal_create_twice_case ...[2024-05-13 05:55:49.527611] /usr/home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: 
portal (192.168.2.0, 3260) already exists 00:03:41.461 passed 00:03:41.461 Test: portal_grp_register_unregister_case ...passed 00:03:41.461 Test: portal_grp_register_twice_case ...passed 00:03:41.461 Test: portal_grp_add_delete_case ...passed 00:03:41.461 Test: portal_grp_add_delete_twice_case ...passed 00:03:41.461 00:03:41.461 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.461 suites 1 1 n/a 0 0 00:03:41.461 tests 9 9 9 0 0 00:03:41.461 asserts 44 44 44 0 n/a 00:03:41.461 00:03:41.461 Elapsed time = 0.000 seconds 00:03:41.461 00:03:41.461 real 0m0.058s 00:03:41.461 user 0m0.007s 00:03:41.461 sys 0m0.050s 00:03:41.461 05:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.461 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.461 ************************************ 00:03:41.461 END TEST unittest_iscsi 00:03:41.461 ************************************ 00:03:41.461 05:55:49 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:03:41.461 05:55:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.461 05:55:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.461 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.461 ************************************ 00:03:41.461 START TEST unittest_json 00:03:41.461 ************************************ 00:03:41.461 05:55:49 -- common/autotest_common.sh@1104 -- # unittest_json 00:03:41.461 05:55:49 -- unit/unittest.sh@75 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:03:41.461 00:03:41.461 00:03:41.461 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.461 http://cunit.sourceforge.net/ 00:03:41.461 00:03:41.461 00:03:41.461 Suite: json 00:03:41.461 Test: test_parse_literal ...passed 00:03:41.461 Test: test_parse_string_simple ...passed 00:03:41.461 Test: test_parse_string_control_chars ...passed 00:03:41.461 Test: test_parse_string_utf8 ...passed 00:03:41.461 Test: test_parse_string_escapes_twochar ...passed 00:03:41.461 Test: test_parse_string_escapes_unicode ...passed 00:03:41.461 Test: test_parse_number ...passed 00:03:41.461 Test: test_parse_array ...passed 00:03:41.461 Test: test_parse_object ...passed 00:03:41.461 Test: test_parse_nesting ...passed 00:03:41.461 Test: test_parse_comment ...passed 00:03:41.461 00:03:41.461 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.461 suites 1 1 n/a 0 0 00:03:41.462 tests 11 11 11 0 0 00:03:41.462 asserts 1516 1516 1516 0 n/a 00:03:41.462 00:03:41.462 Elapsed time = 0.000 seconds 00:03:41.462 05:55:49 -- unit/unittest.sh@76 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:03:41.462 00:03:41.462 00:03:41.462 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.462 http://cunit.sourceforge.net/ 00:03:41.462 00:03:41.462 00:03:41.462 Suite: json 00:03:41.462 Test: test_strequal ...passed 00:03:41.462 Test: test_num_to_uint16 ...passed 00:03:41.462 Test: test_num_to_int32 ...passed 00:03:41.462 Test: test_num_to_uint64 ...passed 00:03:41.462 Test: test_decode_object ...passed 00:03:41.462 Test: test_decode_array ...passed 00:03:41.462 Test: test_decode_bool ...passed 00:03:41.462 Test: test_decode_uint16 ...passed 00:03:41.462 Test: test_decode_int32 ...passed 00:03:41.462 Test: test_decode_uint32 ...passed 00:03:41.462 Test: test_decode_uint64 ...passed 00:03:41.462 Test: test_decode_string ...passed 00:03:41.462 Test: test_decode_uuid ...passed 00:03:41.462 Test: test_find ...passed 00:03:41.462 Test: 
test_find_array ...passed 00:03:41.462 Test: test_iterating ...passed 00:03:41.462 Test: test_free_object ...passed 00:03:41.462 00:03:41.462 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.462 suites 1 1 n/a 0 0 00:03:41.462 tests 17 17 17 0 0 00:03:41.462 asserts 236 236 236 0 n/a 00:03:41.462 00:03:41.462 Elapsed time = 0.000 seconds 00:03:41.462 05:55:49 -- unit/unittest.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:03:41.462 00:03:41.462 00:03:41.462 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.462 http://cunit.sourceforge.net/ 00:03:41.462 00:03:41.462 00:03:41.462 Suite: json 00:03:41.462 Test: test_write_literal ...passed 00:03:41.462 Test: test_write_string_simple ...passed 00:03:41.462 Test: test_write_string_escapes ...passed 00:03:41.462 Test: test_write_string_utf16le ...passed 00:03:41.462 Test: test_write_number_int32 ...passed 00:03:41.462 Test: test_write_number_uint32 ...passed 00:03:41.462 Test: test_write_number_uint128 ...passed 00:03:41.462 Test: test_write_string_number_uint128 ...passed 00:03:41.462 Test: test_write_number_int64 ...passed 00:03:41.462 Test: test_write_number_uint64 ...passed 00:03:41.462 Test: test_write_number_double ...passed 00:03:41.462 Test: test_write_uuid ...passed 00:03:41.462 Test: test_write_array ...passed 00:03:41.462 Test: test_write_object ...passed 00:03:41.462 Test: test_write_nesting ...passed 00:03:41.462 Test: test_write_val ...passed 00:03:41.462 00:03:41.462 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.462 suites 1 1 n/a 0 0 00:03:41.462 tests 16 16 16 0 0 00:03:41.462 asserts 918 918 918 0 n/a 00:03:41.462 00:03:41.462 Elapsed time = 0.000 seconds 00:03:41.462 05:55:49 -- unit/unittest.sh@78 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:03:41.462 00:03:41.462 00:03:41.462 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.462 http://cunit.sourceforge.net/ 00:03:41.462 00:03:41.462 00:03:41.462 Suite: jsonrpc 00:03:41.462 Test: test_parse_request ...passed 00:03:41.462 Test: test_parse_request_streaming ...passed 00:03:41.462 00:03:41.462 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.462 suites 1 1 n/a 0 0 00:03:41.462 tests 2 2 2 0 0 00:03:41.462 asserts 289 289 289 0 n/a 00:03:41.462 00:03:41.462 Elapsed time = 0.008 seconds 00:03:41.462 ************************************ 00:03:41.462 END TEST unittest_json 00:03:41.462 ************************************ 00:03:41.462 00:03:41.462 real 0m0.034s 00:03:41.462 user 0m0.012s 00:03:41.462 sys 0m0.024s 00:03:41.462 05:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.462 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.462 05:55:49 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:03:41.462 05:55:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.462 05:55:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.462 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.462 ************************************ 00:03:41.462 START TEST unittest_rpc 00:03:41.462 ************************************ 00:03:41.462 05:55:49 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:03:41.462 05:55:49 -- unit/unittest.sh@82 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:03:41.462 00:03:41.462 00:03:41.462 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.462 http://cunit.sourceforge.net/ 00:03:41.462 
00:03:41.462 00:03:41.462 Suite: rpc 00:03:41.462 Test: test_jsonrpc_handler ...passed 00:03:41.462 Test: test_spdk_rpc_is_method_allowed ...passed 00:03:41.462 Test: test_rpc_get_methods ...[2024-05-13 05:55:49.655871] /usr/home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:03:41.462 passed 00:03:41.462 Test: test_rpc_spdk_get_version ...passed 00:03:41.462 Test: test_spdk_rpc_listen_close ...passed 00:03:41.462 00:03:41.462 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.462 suites 1 1 n/a 0 0 00:03:41.462 tests 5 5 5 0 0 00:03:41.462 asserts 20 20 20 0 n/a 00:03:41.462 00:03:41.462 Elapsed time = 0.000 seconds 00:03:41.462 00:03:41.462 real 0m0.009s 00:03:41.462 user 0m0.008s 00:03:41.462 sys 0m0.001s 00:03:41.462 05:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.462 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.462 ************************************ 00:03:41.462 END TEST unittest_rpc 00:03:41.462 ************************************ 00:03:41.462 05:55:49 -- unit/unittest.sh@245 -- # run_test unittest_notify /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:41.462 05:55:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.462 05:55:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.462 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.462 ************************************ 00:03:41.462 START TEST unittest_notify 00:03:41.462 ************************************ 00:03:41.462 05:55:49 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:41.462 00:03:41.462 00:03:41.462 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.462 http://cunit.sourceforge.net/ 00:03:41.462 00:03:41.462 00:03:41.462 Suite: app_suite 00:03:41.462 Test: notify ...passed 00:03:41.462 00:03:41.462 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.462 suites 1 1 n/a 0 0 00:03:41.462 tests 1 1 1 0 0 00:03:41.462 asserts 13 13 13 0 n/a 00:03:41.462 00:03:41.462 Elapsed time = 0.000 seconds 00:03:41.462 00:03:41.462 real 0m0.008s 00:03:41.462 user 0m0.006s 00:03:41.462 sys 0m0.010s 00:03:41.462 05:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.462 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.462 ************************************ 00:03:41.462 END TEST unittest_notify 00:03:41.462 ************************************ 00:03:41.722 05:55:49 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:03:41.722 05:55:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.722 05:55:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.722 05:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:41.722 ************************************ 00:03:41.722 START TEST unittest_nvme 00:03:41.722 ************************************ 00:03:41.722 05:55:49 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:03:41.722 05:55:49 -- unit/unittest.sh@86 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:03:41.722 00:03:41.722 00:03:41.722 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.722 http://cunit.sourceforge.net/ 00:03:41.722 00:03:41.722 00:03:41.722 Suite: nvme 00:03:41.722 Test: test_opc_data_transfer ...passed 00:03:41.722 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:03:41.722 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 
00:03:41.722 Test: test_trid_parse_and_compare ...[2024-05-13 05:55:49.786288] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:03:41.722 [2024-05-13 05:55:49.786678] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:41.722 [2024-05-13 05:55:49.786731] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1180:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:03:41.722 [2024-05-13 05:55:49.786754] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:41.722 [2024-05-13 05:55:49.786774] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:03:41.722 [2024-05-13 05:55:49.786793] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:41.722 passed 00:03:41.722 Test: test_trid_trtype_str ...passed 00:03:41.722 Test: test_trid_adrfam_str ...passed 00:03:41.722 Test: test_nvme_ctrlr_probe ...[2024-05-13 05:55:49.786988] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:41.722 passed 00:03:41.722 Test: test_spdk_nvme_probe ...[2024-05-13 05:55:49.787028] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:41.722 [2024-05-13 05:55:49.787048] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:41.722 [2024-05-13 05:55:49.787070] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 813:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:03:41.722 passed 00:03:41.722 Test: test_spdk_nvme_connect ...[2024-05-13 05:55:49.787089] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:41.722 [2024-05-13 05:55:49.787125] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:03:41.722 [2024-05-13 05:55:49.787215] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:41.722 passed 00:03:41.722 Test: test_nvme_ctrlr_probe_internal ...[2024-05-13 05:55:49.787237] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:03:41.722 [2024-05-13 05:55:49.787274] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:41.722 [2024-05-13 05:55:49.787294] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:03:41.722 passed 00:03:41.722 Test: test_nvme_init_controllers ...passed 00:03:41.722 Test: test_nvme_driver_init ...[2024-05-13 05:55:49.787319] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:03:41.722 [2024-05-13 05:55:49.787350] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:03:41.722 [2024-05-13 05:55:49.787375] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:41.722 passed 00:03:41.722 Test: test_spdk_nvme_detach ...passed 00:03:41.722 Test: 
test_nvme_completion_poll_cb ...passed 00:03:41.722 Test: test_nvme_user_copy_cmd_complete ...passed 00:03:41.722 Test: test_nvme_allocate_request_null ...passed 00:03:41.722 Test: test_nvme_allocate_request ...passed 00:03:41.722 Test: test_nvme_free_request ...passed 00:03:41.722 Test: test_nvme_allocate_request_user_copy ...passed 00:03:41.722 Test: test_nvme_robust_mutex_init_shared ...passed 00:03:41.722 Test: test_nvme_request_check_timeout ...[2024-05-13 05:55:49.904803] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:03:41.722 passed 00:03:41.722 Test: test_nvme_wait_for_completion ...passed 00:03:41.722 Test: test_spdk_nvme_parse_func ...passed 00:03:41.722 Test: test_spdk_nvme_detach_async ...passed 00:03:41.722 Test: test_nvme_parse_addr ...[2024-05-13 05:55:49.905190] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:03:41.722 passed 00:03:41.722 00:03:41.722 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.722 suites 1 1 n/a 0 0 00:03:41.722 tests 25 25 25 0 0 00:03:41.722 asserts 326 326 326 0 n/a 00:03:41.722 00:03:41.722 Elapsed time = 0.008 seconds 00:03:41.722 05:55:49 -- unit/unittest.sh@87 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:03:41.722 00:03:41.722 00:03:41.722 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.722 http://cunit.sourceforge.net/ 00:03:41.722 00:03:41.722 00:03:41.722 Suite: nvme_ctrlr 00:03:41.722 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-05-13 05:55:49.911962] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.722 passed 00:03:41.722 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-05-13 05:55:49.913402] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.722 passed 00:03:41.722 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-05-13 05:55:49.914672] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.722 passed 00:03:41.722 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-05-13 05:55:49.915933] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.722 passed 00:03:41.722 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-05-13 05:55:49.917157] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.722 [2024-05-13 05:55:49.918422] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-13 05:55:49.919763] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-13 05:55:49.921081] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:41.722 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-05-13 05:55:49.923679] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.722 [2024-05-13 05:55:49.926239] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-13 05:55:49.927567] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:41.722 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-05-13 05:55:49.930203] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.722 [2024-05-13 05:55:49.931513] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-05-13 05:55:49.934054] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:41.722 Test: test_nvme_ctrlr_init_delay ...[2024-05-13 05:55:49.936663] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.722 passed 00:03:41.722 Test: test_alloc_io_qpair_rr_1 ...[2024-05-13 05:55:49.938043] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.722 [2024-05-13 05:55:49.938143] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:41.722 [2024-05-13 05:55:49.938184] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:41.722 [2024-05-13 05:55:49.938209] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:41.722 [2024-05-13 05:55:49.938230] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:41.722 passed 00:03:41.722 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:03:41.722 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:03:41.723 Test: test_alloc_io_qpair_wrr_1 ...[2024-05-13 05:55:49.938552] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 passed 00:03:41.723 Test: test_alloc_io_qpair_wrr_2 ...passed 00:03:41.723 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-05-13 05:55:49.938602] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 [2024-05-13 05:55:49.938632] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:41.723 [2024-05-13 05:55:49.938696] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4832:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 
00:03:41.723 [2024-05-13 05:55:49.938723] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:41.723 [2024-05-13 05:55:49.938745] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4909:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:03:41.723 [2024-05-13 05:55:49.938767] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:41.723 passed 00:03:41.723 Test: test_nvme_ctrlr_fail ...passed[2024-05-13 05:55:49.938791] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:03:41.723 00:03:41.723 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:03:41.723 Test: test_nvme_ctrlr_set_supported_features ...passed 00:03:41.723 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:03:41.723 Test: test_nvme_ctrlr_test_active_ns ...[2024-05-13 05:55:49.938895] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 passed 00:03:41.723 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:03:41.723 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:03:41.723 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:03:41.723 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-05-13 05:55:49.980941] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 passed 00:03:41.723 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-05-13 05:55:49.987464] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 passed 00:03:41.723 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-05-13 05:55:49.988570] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 [2024-05-13 05:55:49.988586] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:03:41.723 passed 00:03:41.723 Test: test_alloc_io_qpair_fail ...passed 00:03:41.723 Test: test_nvme_ctrlr_add_remove_process ...passed 00:03:41.723 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:03:41.723 Test: test_nvme_ctrlr_set_state ...passed 00:03:41.723 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-05-13 05:55:49.989679] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 [2024-05-13 05:55:49.989695] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:03:41.723 [2024-05-13 05:55:49.989714] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:03:41.723 [2024-05-13 05:55:49.989722] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 passed 00:03:41.723 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-05-13 05:55:49.992001] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 passed 00:03:41.723 Test: test_nvme_ctrlr_ns_mgmt ...[2024-05-13 05:55:49.997132] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 passed 00:03:41.723 Test: test_nvme_ctrlr_reset ...[2024-05-13 05:55:49.998257] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 passed 00:03:41.723 Test: test_nvme_ctrlr_aer_callback ...[2024-05-13 05:55:49.998311] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 passed 00:03:41.723 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-05-13 05:55:49.999429] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 passed 00:03:41.723 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:03:41.723 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:03:41.723 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-05-13 05:55:50.000605] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 passed 00:03:41.723 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:03:41.723 Test: test_nvme_ctrlr_ana_resize ...[2024-05-13 05:55:50.001727] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 passed 00:03:41.723 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:03:41.723 Test: test_nvme_transport_ctrlr_ready ...[2024-05-13 05:55:50.002858] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4015:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:03:41.723 [2024-05-13 05:55:50.002877] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:03:41.723 passed 00:03:41.723 Test: test_nvme_ctrlr_disable ...[2024-05-13 05:55:50.002888] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4136:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:41.723 passed 00:03:41.723 00:03:41.723 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.723 suites 1 1 n/a 0 0 00:03:41.723 tests 43 43 43 0 0 00:03:41.723 asserts 10418 10418 10418 0 n/a 00:03:41.723 00:03:41.723 Elapsed time = 0.039 seconds 00:03:41.723 05:55:50 -- unit/unittest.sh@88 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:03:41.723 00:03:41.723 00:03:41.723 CUnit - A 
unit testing framework for C - Version 2.1-3 00:03:41.723 http://cunit.sourceforge.net/ 00:03:41.723 00:03:41.723 00:03:41.723 Suite: nvme_ctrlr_cmd 00:03:41.723 Test: test_get_log_pages ...passed 00:03:41.723 Test: test_set_feature_cmd ...passed 00:03:41.723 Test: test_set_feature_ns_cmd ...passed 00:03:41.723 Test: test_get_feature_cmd ...passed 00:03:41.723 Test: test_get_feature_ns_cmd ...passed 00:03:41.723 Test: test_abort_cmd ...passed 00:03:41.723 Test: test_set_host_id_cmds ...[2024-05-13 05:55:50.013642] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 502:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:03:41.723 passed 00:03:41.723 Test: test_io_cmd_raw_no_payload_build ...passed 00:03:41.723 Test: test_io_raw_cmd ...passed 00:03:41.723 Test: test_io_raw_cmd_with_md ...passed 00:03:41.723 Test: test_namespace_attach ...passed 00:03:41.723 Test: test_namespace_detach ...passed 00:03:41.723 Test: test_namespace_create ...passed 00:03:41.723 Test: test_namespace_delete ...passed 00:03:41.723 Test: test_doorbell_buffer_config ...passed 00:03:41.723 Test: test_format_nvme ...passed 00:03:41.723 Test: test_fw_commit ...passed 00:03:41.723 Test: test_fw_image_download ...passed 00:03:41.723 Test: test_sanitize ...passed 00:03:41.723 Test: test_directive ...passed 00:03:41.723 Test: test_nvme_request_add_abort ...passed 00:03:41.723 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:03:41.723 Test: test_nvme_ctrlr_cmd_identify ...passed 00:03:41.723 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:03:41.723 00:03:41.723 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.723 suites 1 1 n/a 0 0 00:03:41.723 tests 24 24 24 0 0 00:03:41.723 asserts 198 198 198 0 n/a 00:03:41.723 00:03:41.723 Elapsed time = 0.000 seconds 00:03:41.723 05:55:50 -- unit/unittest.sh@89 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:03:41.723 00:03:41.723 00:03:41.723 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.723 http://cunit.sourceforge.net/ 00:03:41.723 00:03:41.723 00:03:41.723 Suite: nvme_ctrlr_cmd 00:03:41.723 Test: test_geometry_cmd ...passed 00:03:41.723 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:03:41.723 00:03:41.723 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.723 suites 1 1 n/a 0 0 00:03:41.723 tests 2 2 2 0 0 00:03:41.723 asserts 7 7 7 0 n/a 00:03:41.723 00:03:41.723 Elapsed time = 0.000 seconds 00:03:41.723 05:55:50 -- unit/unittest.sh@90 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:03:41.723 00:03:41.723 00:03:41.723 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.723 http://cunit.sourceforge.net/ 00:03:41.723 00:03:41.723 00:03:41.723 Suite: nvme 00:03:41.723 Test: test_nvme_ns_construct ...passed 00:03:41.723 Test: test_nvme_ns_uuid ...passed 00:03:41.723 Test: test_nvme_ns_csi ...passed 00:03:41.723 Test: test_nvme_ns_data ...passed 00:03:41.723 Test: test_nvme_ns_set_identify_data ...passed 00:03:41.723 Test: test_spdk_nvme_ns_get_values ...passed 00:03:41.723 Test: test_spdk_nvme_ns_is_active ...passed 00:03:41.723 Test: spdk_nvme_ns_supports ...passed 00:03:41.723 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:03:41.723 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:03:41.723 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:03:41.723 Test: test_nvme_ns_find_id_desc ...passed 00:03:41.723 00:03:41.723 Run Summary: Type Total Ran Passed Failed 
Inactive 00:03:41.723 suites 1 1 n/a 0 0 00:03:41.723 tests 12 12 12 0 0 00:03:41.724 asserts 83 83 83 0 n/a 00:03:41.724 00:03:41.724 Elapsed time = 0.000 seconds 00:03:41.724 05:55:50 -- unit/unittest.sh@91 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:03:41.985 00:03:41.985 00:03:41.985 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.985 http://cunit.sourceforge.net/ 00:03:41.985 00:03:41.985 00:03:41.985 Suite: nvme_ns_cmd 00:03:41.985 Test: split_test ...passed 00:03:41.985 Test: split_test2 ...passed 00:03:41.985 Test: split_test3 ...passed 00:03:41.985 Test: split_test4 ...passed 00:03:41.985 Test: test_nvme_ns_cmd_flush ...passed 00:03:41.985 Test: test_nvme_ns_cmd_dataset_management ...passed 00:03:41.985 Test: test_nvme_ns_cmd_copy ...passed 00:03:41.985 Test: test_io_flags ...[2024-05-13 05:55:50.037210] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:03:41.985 passed 00:03:41.985 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:03:41.985 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:03:41.985 Test: test_nvme_ns_cmd_reservation_register ...passed 00:03:41.985 Test: test_nvme_ns_cmd_reservation_release ...passed 00:03:41.985 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:03:41.985 Test: test_nvme_ns_cmd_reservation_report ...passed 00:03:41.985 Test: test_cmd_child_request ...passed 00:03:41.985 Test: test_nvme_ns_cmd_readv ...passed 00:03:41.985 Test: test_nvme_ns_cmd_read_with_md ...passed 00:03:41.985 Test: test_nvme_ns_cmd_writev ...[2024-05-13 05:55:50.037754] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 288:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:03:41.985 passed 00:03:41.985 Test: test_nvme_ns_cmd_write_with_md ...passed 00:03:41.985 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:03:41.985 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:03:41.985 Test: test_nvme_ns_cmd_comparev ...passed 00:03:41.985 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:03:41.985 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:03:41.985 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:03:41.985 Test: test_nvme_ns_cmd_setup_request ...passed 00:03:41.985 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:03:41.985 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-05-13 05:55:50.037978] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:41.985 passed 00:03:41.985 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-05-13 05:55:50.038023] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:41.985 passed 00:03:41.985 Test: test_nvme_ns_cmd_verify ...passed 00:03:41.985 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:03:41.985 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:03:41.985 00:03:41.985 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.985 suites 1 1 n/a 0 0 00:03:41.985 tests 32 32 32 0 0 00:03:41.985 asserts 550 550 550 0 n/a 00:03:41.985 00:03:41.985 Elapsed time = 0.000 seconds 00:03:41.985 05:55:50 -- unit/unittest.sh@92 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:03:41.985 00:03:41.985 00:03:41.985 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.985 http://cunit.sourceforge.net/ 00:03:41.985 00:03:41.985 00:03:41.985 
Suite: nvme_ns_cmd 00:03:41.985 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:03:41.985 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:03:41.985 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:03:41.985 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:03:41.985 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:03:41.985 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:03:41.985 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:03:41.985 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:03:41.985 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:03:41.985 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:03:41.985 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:03:41.985 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:03:41.985 00:03:41.985 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.985 suites 1 1 n/a 0 0 00:03:41.985 tests 12 12 12 0 0 00:03:41.985 asserts 123 123 123 0 n/a 00:03:41.985 00:03:41.985 Elapsed time = 0.000 seconds 00:03:41.985 05:55:50 -- unit/unittest.sh@93 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:03:41.985 00:03:41.985 00:03:41.985 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.985 http://cunit.sourceforge.net/ 00:03:41.985 00:03:41.985 00:03:41.985 Suite: nvme_qpair 00:03:41.985 Test: test3 ...passed 00:03:41.985 Test: test_ctrlr_failed ...passed 00:03:41.985 Test: struct_packing ...passed 00:03:41.985 Test: test_nvme_qpair_process_completions ...[2024-05-13 05:55:50.057683] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:41.985 [2024-05-13 05:55:50.057962] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:41.985 [2024-05-13 05:55:50.058080] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:03:41.985 [2024-05-13 05:55:50.058123] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:03:41.985 passed 00:03:41.985 Test: test_nvme_completion_is_retry ...passed 00:03:41.985 Test: test_get_status_string ...passed 00:03:41.985 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:03:41.985 Test: test_nvme_qpair_submit_request ...passed 00:03:41.985 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:03:41.985 Test: test_nvme_qpair_manual_complete_request ...passed 00:03:41.985 Test: test_nvme_qpair_init_deinit ...passed 00:03:41.985 Test: test_nvme_get_sgl_print_info ...passed[2024-05-13 05:55:50.058221] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:41.985 00:03:41.985 00:03:41.985 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.985 suites 1 1 n/a 0 0 00:03:41.985 tests 12 12 12 0 0 00:03:41.985 asserts 154 154 154 0 n/a 00:03:41.985 00:03:41.985 Elapsed time = 0.000 seconds 00:03:41.985 05:55:50 -- unit/unittest.sh@94 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:03:41.985 00:03:41.985 00:03:41.985 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.985 
http://cunit.sourceforge.net/ 00:03:41.985 00:03:41.985 00:03:41.985 Suite: nvme_pcie 00:03:41.985 Test: test_prp_list_append ...[2024-05-13 05:55:50.066866] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:41.985 [2024-05-13 05:55:50.067208] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:03:41.985 [2024-05-13 05:55:50.067256] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:03:41.985 [2024-05-13 05:55:50.067348] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:41.985 passed 00:03:41.985 Test: test_nvme_pcie_hotplug_monitor ...[2024-05-13 05:55:50.067401] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:41.985 passed 00:03:41.985 Test: test_shadow_doorbell_update ...passed 00:03:41.985 Test: test_build_contig_hw_sgl_request ...passed 00:03:41.985 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:03:41.985 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:03:41.985 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:03:41.985 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:03:41.985 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:03:41.985 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...[2024-05-13 05:55:50.067534] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:41.985 passed 00:03:41.985 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:03:41.985 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-05-13 05:55:50.067585] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:03:41.985 passed 00:03:41.985 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:03:41.985 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-05-13 05:55:50.067638] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:03:41.985 [2024-05-13 05:55:50.067666] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:03:41.985 passed 00:03:41.985 00:03:41.985 [2024-05-13 05:55:50.067694] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:03:41.985 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.985 suites 1 1 n/a 0 0 00:03:41.985 tests 14 14 14 0 0 00:03:41.985 asserts 235 235 235 0 n/a 00:03:41.985 00:03:41.985 Elapsed time = 0.000 seconds 00:03:41.985 05:55:50 -- unit/unittest.sh@95 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:03:41.985 00:03:41.985 00:03:41.985 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.985 http://cunit.sourceforge.net/ 00:03:41.985 00:03:41.985 00:03:41.985 Suite: nvme_ns_cmd 00:03:41.985 Test: nvme_poll_group_create_test ...passed 00:03:41.985 Test: nvme_poll_group_add_remove_test ...passed 00:03:41.985 Test: nvme_poll_group_process_completions ...passed 00:03:41.985 Test: nvme_poll_group_destroy_test ...passed 00:03:41.985 Test: nvme_poll_group_get_free_stats ...passed 00:03:41.985 00:03:41.985 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.985 suites 1 1 n/a 0 0 00:03:41.985 tests 5 5 5 0 0 00:03:41.985 asserts 75 75 75 0 n/a 00:03:41.985 00:03:41.985 Elapsed time = 0.000 seconds 00:03:41.985 05:55:50 -- unit/unittest.sh@96 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:03:41.986 00:03:41.986 00:03:41.986 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.986 http://cunit.sourceforge.net/ 00:03:41.986 00:03:41.986 00:03:41.986 Suite: nvme_quirks 00:03:41.986 Test: test_nvme_quirks_striping ...passed 00:03:41.986 00:03:41.986 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.986 suites 1 1 n/a 0 0 00:03:41.986 tests 1 1 1 0 0 00:03:41.986 asserts 5 5 5 0 n/a 00:03:41.986 00:03:41.986 Elapsed time = 0.000 seconds 00:03:41.986 05:55:50 -- unit/unittest.sh@97 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:03:41.986 00:03:41.986 00:03:41.986 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.986 http://cunit.sourceforge.net/ 00:03:41.986 00:03:41.986 00:03:41.986 Suite: nvme_tcp 00:03:41.986 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:03:41.986 Test: test_nvme_tcp_build_iovs ...passed 00:03:41.986 Test: test_nvme_tcp_build_sgl_request ...[2024-05-13 05:55:50.089069] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 784:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x8211be0f0, and the iovcnt=16, remaining_size=28672 00:03:41.986 passed 00:03:41.986 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:03:41.986 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:03:41.986 Test: test_nvme_tcp_req_complete_safe ...passed 00:03:41.986 Test: test_nvme_tcp_req_get ...passed 00:03:41.986 Test: test_nvme_tcp_req_init ...passed 00:03:41.986 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:03:41.986 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:03:41.986 Test: test_nvme_tcp_qpair_set_recv_state 
...[2024-05-13 05:55:50.089641] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211bfc60 is same with the state(6) to be set 00:03:41.986 passed 00:03:41.986 Test: test_nvme_tcp_alloc_reqs ...passed 00:03:41.986 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:03:41.986 Test: test_nvme_tcp_pdu_ch_handle ...[2024-05-13 05:55:50.089714] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211befb0 is same with the state(5) to be set 00:03:41.986 [2024-05-13 05:55:50.089751] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x8211bf558 00:03:41.986 [2024-05-13 05:55:50.089774] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1168:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:03:41.986 [2024-05-13 05:55:50.089795] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211bf3e8 is same with the state(5) to be set 00:03:41.986 [2024-05-13 05:55:50.089816] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:03:41.986 [2024-05-13 05:55:50.089836] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211bf3e8 is same with the state(5) to be set 00:03:41.986 [2024-05-13 05:55:50.089865] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:03:41.986 [2024-05-13 05:55:50.089885] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211bf3e8 is same with the state(5) to be set 00:03:41.986 [2024-05-13 05:55:50.089905] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211bf3e8 is same with the state(5) to be set 00:03:41.986 [2024-05-13 05:55:50.089925] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211bf3e8 is same with the state(5) to be set 00:03:41.986 [2024-05-13 05:55:50.089946] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211bf3e8 is same with the state(5) to be set 00:03:41.986 passed 00:03:41.986 Test: test_nvme_tcp_qpair_connect_sock ...[2024-05-13 05:55:50.089966] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211bf3e8 is same with the state(5) to be set 00:03:41.986 [2024-05-13 05:55:50.089986] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211bf3e8 is same with the state(5) to be set 00:03:41.986 [2024-05-13 05:55:50.090051] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:03:41.986 [2024-05-13 05:55:50.090074] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:03:41.986 passed 00:03:41.986 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:03:41.986 Test: test_nvme_tcp_c2h_payload_handle ...[2024-05-13 
05:55:50.114497] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:03:41.986 [2024-05-13 05:55:50.114627] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1283:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8211bf990): PDU Sequence Error 00:03:41.986 passed 00:03:41.986 Test: test_nvme_tcp_icresp_handle ...[2024-05-13 05:55:50.114703] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:03:41.986 [2024-05-13 05:55:50.114728] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1516:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:03:41.986 [2024-05-13 05:55:50.114750] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211befb0 is same with the state(5) to be set 00:03:41.986 [2024-05-13 05:55:50.114772] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:03:41.986 [2024-05-13 05:55:50.114792] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211befb0 is same with the state(5) to be set 00:03:41.986 [2024-05-13 05:55:50.114813] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211befb0 is same with the state(0) to be set 00:03:41.986 passed 00:03:41.986 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:03:41.986 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-05-13 05:55:50.114849] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1283:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8211bf990): PDU Sequence Error 00:03:41.986 passed 00:03:41.986 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed[2024-05-13 05:55:50.114883] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x8211be250 00:03:41.986 00:03:41.986 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-05-13 05:55:50.114951] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x8211bd9d8, errno=0, rc=0 00:03:41.986 [2024-05-13 05:55:50.114973] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211bd9d8 is same with the state(5) to be set 00:03:41.986 [2024-05-13 05:55:50.114995] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8211bd9d8 is same with the state(5) to be set 00:03:41.986 [2024-05-13 05:55:50.115083] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2099:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8211bd9d8 (0): No error: 0 00:03:41.986 [2024-05-13 05:55:50.115109] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2099:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8211bd9d8 (0): No error: 0 00:03:41.986 passed 00:03:41.986 Test: test_nvme_tcp_ctrlr_create_io_qpair ...passed 00:03:41.986 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:03:41.986 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:03:41.986 Test: test_nvme_tcp_ctrlr_construct ...passed 00:03:41.986 Test: test_nvme_tcp_qpair_submit_request ...passed 00:03:41.986 00:03:41.986 Run Summary: 
Type Total Ran Passed Failed Inactive 00:03:41.986 suites 1 1 n/a 0 0 00:03:41.986 tests 27 27 27 0 0 00:03:41.986 asserts 624 624 624 0 n/a 00:03:41.986 00:03:41.986 Elapsed time = 0.055 seconds 00:03:41.986 [2024-05-13 05:55:50.173527] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2423:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:03:41.986 [2024-05-13 05:55:50.173597] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2423:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:41.986 [2024-05-13 05:55:50.173635] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:41.986 [2024-05-13 05:55:50.173640] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:41.986 [2024-05-13 05:55:50.173667] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2423:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:41.986 [2024-05-13 05:55:50.173672] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:41.986 [2024-05-13 05:55:50.173680] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:03:41.986 [2024-05-13 05:55:50.173686] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:41.986 [2024-05-13 05:55:50.173695] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2290:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x82cff2180 with addr=192.168.1.78, port=23 00:03:41.986 [2024-05-13 05:55:50.173699] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:41.986 [2024-05-13 05:55:50.173724] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 784:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x82cff2300, and the iovcnt=1, remaining_size=1024 00:03:41.986 [2024-05-13 05:55:50.173729] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:03:41.986 05:55:50 -- unit/unittest.sh@98 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:03:41.986 00:03:41.986 00:03:41.986 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.986 http://cunit.sourceforge.net/ 00:03:41.986 00:03:41.986 00:03:41.986 Suite: nvme_transport 00:03:41.986 Test: test_nvme_get_transport ...passed 00:03:41.986 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:03:41.986 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:03:41.986 Test: test_nvme_transport_poll_group_add_remove ...passed 00:03:41.986 Test: test_ctrlr_get_memory_domains ...passed 00:03:41.986 00:03:41.986 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.986 suites 1 1 n/a 0 0 00:03:41.986 tests 5 5 5 0 0 00:03:41.986 asserts 28 28 28 0 n/a 00:03:41.986 00:03:41.986 Elapsed time = 0.000 seconds 00:03:41.986 05:55:50 -- unit/unittest.sh@99 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:03:41.986 00:03:41.986 00:03:41.987 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.987 http://cunit.sourceforge.net/ 
00:03:41.987 00:03:41.987 00:03:41.987 Suite: nvme_io_msg 00:03:41.987 Test: test_nvme_io_msg_send ...passed 00:03:41.987 Test: test_nvme_io_msg_process ...passed 00:03:41.987 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:03:41.987 00:03:41.987 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.987 suites 1 1 n/a 0 0 00:03:41.987 tests 3 3 3 0 0 00:03:41.987 asserts 56 56 56 0 n/a 00:03:41.987 00:03:41.987 Elapsed time = 0.000 seconds 00:03:41.987 05:55:50 -- unit/unittest.sh@100 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:03:41.987 00:03:41.987 00:03:41.987 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.987 http://cunit.sourceforge.net/ 00:03:41.987 00:03:41.987 00:03:41.987 Suite: nvme_pcie_common 00:03:41.987 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-05-13 05:55:50.194611] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:03:41.987 passed 00:03:41.987 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:03:41.987 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:03:41.987 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-05-13 05:55:50.195118] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:03:41.987 [2024-05-13 05:55:50.195172] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:03:41.987 passed 00:03:41.987 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-05-13 05:55:50.195197] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:03:41.987 passed 00:03:41.987 Test: test_nvme_pcie_poll_group_get_stats ...[2024-05-13 05:55:50.195399] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:41.987 passed 00:03:41.987 00:03:41.987 [2024-05-13 05:55:50.195420] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:41.987 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.987 suites 1 1 n/a 0 0 00:03:41.987 tests 6 6 6 0 0 00:03:41.987 asserts 148 148 148 0 n/a 00:03:41.987 00:03:41.987 Elapsed time = 0.000 seconds 00:03:41.987 05:55:50 -- unit/unittest.sh@101 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:03:41.987 00:03:41.987 00:03:41.987 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.987 http://cunit.sourceforge.net/ 00:03:41.987 00:03:41.987 00:03:41.987 Suite: nvme_fabric 00:03:41.987 Test: test_nvme_fabric_prop_set_cmd ...passed 00:03:41.987 Test: test_nvme_fabric_prop_get_cmd ...passed 00:03:41.987 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:03:41.987 Test: test_nvme_fabric_discover_probe ...passed 00:03:41.987 Test: test_nvme_fabric_qpair_connect ...[2024-05-13 05:55:50.200338] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 605:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:03:41.987 passed 00:03:41.987 00:03:41.987 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.987 suites 1 1 n/a 0 0 00:03:41.987 
tests 5 5 5 0 0 00:03:41.987 asserts 60 60 60 0 n/a 00:03:41.987 00:03:41.987 Elapsed time = 0.000 seconds 00:03:41.987 05:55:50 -- unit/unittest.sh@102 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:03:41.987 00:03:41.987 00:03:41.987 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.987 http://cunit.sourceforge.net/ 00:03:41.987 00:03:41.987 00:03:41.987 Suite: nvme_opal 00:03:41.987 Test: test_opal_nvme_security_recv_send_done ...passed 00:03:41.987 Test: test_opal_add_short_atom_header ...passed 00:03:41.987 00:03:41.987 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.987 suites 1 1 n/a 0 0 00:03:41.987 tests 2 2 2 0 0 00:03:41.987 asserts 22 22 22 0 n/a 00:03:41.987 00:03:41.987 Elapsed time = 0.000 seconds 00:03:41.987 [2024-05-13 05:55:50.205131] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:03:41.987 00:03:41.987 real 0m0.427s 00:03:41.987 user 0m0.092s 00:03:41.987 sys 0m0.149s 00:03:41.987 05:55:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.987 05:55:50 -- common/autotest_common.sh@10 -- # set +x 00:03:41.987 ************************************ 00:03:41.987 END TEST unittest_nvme 00:03:41.987 ************************************ 00:03:41.987 05:55:50 -- unit/unittest.sh@247 -- # run_test unittest_log /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:41.987 05:55:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.987 05:55:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.987 05:55:50 -- common/autotest_common.sh@10 -- # set +x 00:03:41.987 ************************************ 00:03:41.987 START TEST unittest_log 00:03:41.987 ************************************ 00:03:41.987 05:55:50 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:41.987 00:03:41.987 00:03:41.987 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.987 http://cunit.sourceforge.net/ 00:03:41.987 00:03:41.987 00:03:41.987 Suite: log 00:03:41.987 Test: log_test ...[2024-05-13 05:55:50.258751] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:03:41.987 [2024-05-13 05:55:50.259121] log_ut.c: 55:log_test: *DEBUG*: log test 00:03:41.987 log dump test: 00:03:41.987 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:03:41.987 spdk dump test: 00:03:41.987 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:03:41.987 spdk dump test: 00:03:41.987 passed 00:03:41.987 Test: deprecation ...00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:03:41.987 00000010 65 20 63 68 61 72 73 e chars 00:03:43.369 passed 00:03:43.369 00:03:43.369 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.369 suites 1 1 n/a 0 0 00:03:43.369 tests 2 2 2 0 0 00:03:43.369 asserts 73 73 73 0 n/a 00:03:43.369 00:03:43.369 Elapsed time = 0.000 seconds 00:03:43.369 00:03:43.369 real 0m1.080s 00:03:43.369 user 0m0.000s 00:03:43.369 sys 0m0.008s 00:03:43.369 05:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.369 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.369 ************************************ 00:03:43.369 END TEST unittest_log 00:03:43.369 ************************************ 00:03:43.369 05:55:51 -- unit/unittest.sh@248 -- # run_test unittest_lvol /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:03:43.369 05:55:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:03:43.369 05:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:43.369 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.369 ************************************ 00:03:43.369 START TEST unittest_lvol 00:03:43.369 ************************************ 00:03:43.369 05:55:51 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:03:43.369 00:03:43.369 00:03:43.369 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.369 http://cunit.sourceforge.net/ 00:03:43.369 00:03:43.369 00:03:43.369 Suite: lvol 00:03:43.369 Test: lvs_init_unload_success ...[2024-05-13 05:55:51.397979] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:03:43.369 passed 00:03:43.369 Test: lvs_init_destroy_success ...[2024-05-13 05:55:51.398422] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:03:43.369 passed 00:03:43.369 Test: lvs_init_opts_success ...passed 00:03:43.369 Test: lvs_unload_lvs_is_null_fail ...passed 00:03:43.369 Test: lvs_names ...[2024-05-13 05:55:51.398476] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:03:43.369 [2024-05-13 05:55:51.398506] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:03:43.369 [2024-05-13 05:55:51.398527] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:03:43.369 passed 00:03:43.369 Test: lvol_create_destroy_success ...passed 00:03:43.369 Test: lvol_create_fail ...[2024-05-13 05:55:51.398563] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:03:43.369 [2024-05-13 05:55:51.398660] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:03:43.369 [2024-05-13 05:55:51.398686] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:03:43.369 passed 00:03:43.369 Test: lvol_destroy_fail ...passed 00:03:43.369 Test: lvol_close ...[2024-05-13 05:55:51.398739] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:03:43.369 [2024-05-13 05:55:51.398778] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:03:43.369 [2024-05-13 05:55:51.398798] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:03:43.369 passed 00:03:43.369 Test: lvol_resize ...passed 00:03:43.369 Test: lvol_set_read_only ...passed 00:03:43.369 Test: test_lvs_load ...[2024-05-13 05:55:51.398898] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:03:43.369 passed 00:03:43.369 Test: lvols_load ...[2024-05-13 05:55:51.399035] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:03:43.369 [2024-05-13 05:55:51.399097] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:43.369 [2024-05-13 05:55:51.399153] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:43.369 passed 00:03:43.369 Test: lvol_open ...passed 00:03:43.369 Test: lvol_snapshot ...passed 
00:03:43.369 Test: lvol_snapshot_fail ...passed 00:03:43.369 Test: lvol_clone ...[2024-05-13 05:55:51.399294] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:03:43.369 passed 00:03:43.369 Test: lvol_clone_fail ...passed 00:03:43.369 Test: lvol_iter_clones ...[2024-05-13 05:55:51.399468] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:03:43.369 passed 00:03:43.369 Test: lvol_refcnt ...[2024-05-13 05:55:51.399575] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 742a45a0-10ed-11ef-ba60-3508ead7bdda because it is still open 00:03:43.369 passed 00:03:43.369 Test: lvol_names ...[2024-05-13 05:55:51.399619] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:03:43.369 [2024-05-13 05:55:51.399647] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:43.369 passed 00:03:43.369 Test: lvol_create_thin_provisioned ...[2024-05-13 05:55:51.399684] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:03:43.369 passed 00:03:43.369 Test: lvol_rename ...[2024-05-13 05:55:51.399766] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:43.369 passed 00:03:43.369 Test: lvs_rename ...[2024-05-13 05:55:51.399821] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:03:43.369 [2024-05-13 05:55:51.399871] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:03:43.369 passed 00:03:43.369 Test: lvol_inflate ...[2024-05-13 05:55:51.400018] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:43.370 passed 00:03:43.370 Test: lvol_decouple_parent ...passed 00:03:43.370 Test: lvol_get_xattr ...[2024-05-13 05:55:51.400061] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:43.370 passed 00:03:43.370 Test: lvol_esnap_reload ...passed 00:03:43.370 Test: lvol_esnap_create_bad_args ...[2024-05-13 05:55:51.400138] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:03:43.370 [2024-05-13 05:55:51.400160] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:03:43.370 [2024-05-13 05:55:51.400181] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:03:43.370 [2024-05-13 05:55:51.400207] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:43.370 [2024-05-13 05:55:51.400249] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:03:43.370 passed 00:03:43.370 Test: lvol_esnap_create_delete ...passed 00:03:43.370 Test: lvol_esnap_load_esnaps ...[2024-05-13 05:55:51.400307] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:03:43.370 passed 00:03:43.370 Test: lvol_esnap_missing ...[2024-05-13 05:55:51.400449] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:43.370 [2024-05-13 05:55:51.400470] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:43.370 passed 00:03:43.370 Test: lvol_esnap_hotplug ... 00:03:43.370 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:03:43.370 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:03:43.370 [2024-05-13 05:55:51.400593] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 742a6d51-10ed-11ef-ba60-3508ead7bdda: failed to create esnap bs_dev: error -12 00:03:43.370 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:03:43.370 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:03:43.370 [2024-05-13 05:55:51.400681] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 742a707f-10ed-11ef-ba60-3508ead7bdda: failed to create esnap bs_dev: error -12 00:03:43.370 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:03:43.370 [2024-05-13 05:55:51.400731] /usr/home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 742a72b9-10ed-11ef-ba60-3508ead7bdda: failed to create esnap bs_dev: error -12 00:03:43.370 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:03:43.370 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:03:43.370 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:03:43.370 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:03:43.370 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:03:43.370 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:03:43.370 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:03:43.370 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:03:43.370 passed 00:03:43.370 Test: lvol_get_by ...passed 00:03:43.370 00:03:43.370 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.370 suites 1 1 n/a 0 0 00:03:43.370 tests 34 34 34 0 0 00:03:43.370 asserts 1439 1439 1439 0 n/a 00:03:43.370 00:03:43.370 Elapsed time = 0.000 seconds 00:03:43.370 00:03:43.370 real 0m0.016s 00:03:43.370 user 0m0.007s 00:03:43.370 sys 0m0.008s 
00:03:43.370 05:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.370 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.370 ************************************ 00:03:43.370 END TEST unittest_lvol 00:03:43.370 ************************************ 00:03:43.370 05:55:51 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:43.370 05:55:51 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:43.370 05:55:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:43.370 05:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:43.370 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.370 ************************************ 00:03:43.370 START TEST unittest_nvme_rdma 00:03:43.370 ************************************ 00:03:43.370 05:55:51 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:43.370 00:03:43.370 00:03:43.370 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.370 http://cunit.sourceforge.net/ 00:03:43.370 00:03:43.370 00:03:43.370 Suite: nvme_rdma 00:03:43.370 Test: test_nvme_rdma_build_sgl_request ...[2024-05-13 05:55:51.463492] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:03:43.370 [2024-05-13 05:55:51.463874] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1629:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:43.370 [2024-05-13 05:55:51.463919] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1685:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:03:43.370 passed 00:03:43.370 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:03:43.370 Test: test_nvme_rdma_build_contig_request ...passed 00:03:43.370 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:03:43.370 Test: test_nvme_rdma_create_reqs ...[2024-05-13 05:55:51.463969] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1566:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:43.370 [2024-05-13 05:55:51.464004] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:03:43.370 passed 00:03:43.370 Test: test_nvme_rdma_create_rsps ...passed 00:03:43.370 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-05-13 05:55:51.464058] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:03:43.370 passed 00:03:43.370 Test: test_nvme_rdma_poller_create ...[2024-05-13 05:55:51.464096] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1823:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:03:43.370 [2024-05-13 05:55:51.464117] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1823:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:03:43.370 passed 00:03:43.370 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-05-13 05:55:51.464174] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:03:43.370 passed 00:03:43.370 Test: test_nvme_rdma_ctrlr_construct ...passed 00:03:43.370 Test: test_nvme_rdma_req_put_and_get ...passed 00:03:43.370 Test: test_nvme_rdma_req_init ...passed 00:03:43.370 Test: test_nvme_rdma_validate_cm_event ...[2024-05-13 05:55:51.464296] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 620:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:03:43.370 [2024-05-13 05:55:51.464320] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 620:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:03:43.370 passed 00:03:43.370 Test: test_nvme_rdma_qpair_init ...passed 00:03:43.370 Test: test_nvme_rdma_qpair_submit_request ...passed 00:03:43.370 Test: test_nvme_rdma_memory_domain ...passed 00:03:43.370 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:03:43.370 Test: test_rdma_get_memory_translation ...[2024-05-13 05:55:51.464393] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:03:43.370 [2024-05-13 05:55:51.464441] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:03:43.370 passed 00:03:43.370 Test: test_get_rdma_qpair_from_wc ...passed 00:03:43.370 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:03:43.370 Test: test_nvme_rdma_poll_group_get_stats ...[2024-05-13 05:55:51.464469] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:03:43.370 [2024-05-13 05:55:51.464515] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:43.370 [2024-05-13 05:55:51.464535] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:43.370 passed 00:03:43.370 Test: test_nvme_rdma_qpair_set_poller ...[2024-05-13 05:55:51.464581] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:03:43.370 [2024-05-13 05:55:51.464602] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:03:43.370 [2024-05-13 05:55:51.464622] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x8208fbe80 on poll group 0x82e7f0000 00:03:43.370 [2024-05-13 05:55:51.464642] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
00:03:43.370 [2024-05-13 05:55:51.464661] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:03:43.370 [2024-05-13 05:55:51.464680] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x8208fbe80 on poll group 0x82e7f0000 00:03:43.370 passed 00:03:43.370 00:03:43.370 [2024-05-13 05:55:51.464761] /usr/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:43.370 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.370 suites 1 1 n/a 0 0 00:03:43.370 tests 22 22 22 0 0 00:03:43.370 asserts 412 412 412 0 n/a 00:03:43.370 00:03:43.370 Elapsed time = 0.000 seconds 00:03:43.370 00:03:43.370 real 0m0.011s 00:03:43.370 user 0m0.010s 00:03:43.370 sys 0m0.001s 00:03:43.370 05:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.370 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.370 ************************************ 00:03:43.370 END TEST unittest_nvme_rdma 00:03:43.370 ************************************ 00:03:43.370 05:55:51 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:43.370 05:55:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:43.370 05:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:43.370 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.370 ************************************ 00:03:43.370 START TEST unittest_nvmf_transport 00:03:43.370 ************************************ 00:03:43.370 05:55:51 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:43.370 00:03:43.370 00:03:43.371 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.371 http://cunit.sourceforge.net/ 00:03:43.371 00:03:43.371 00:03:43.371 Suite: nvmf 00:03:43.371 Test: test_spdk_nvmf_transport_create ...[2024-05-13 05:55:51.515688] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:03:43.371 [2024-05-13 05:55:51.516076] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:03:43.371 [2024-05-13 05:55:51.516127] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 272:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:03:43.371 [2024-05-13 05:55:51.516188] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 255:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:03:43.371 passed 00:03:43.371 Test: test_nvmf_transport_poll_group_create ...passed 00:03:43.371 Test: test_spdk_nvmf_transport_opts_init ...[2024-05-13 05:55:51.516258] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:03:43.371 [2024-05-13 05:55:51.516281] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:03:43.371 [2024-05-13 05:55:51.516302] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:03:43.371 passed 00:03:43.371 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:03:43.371 00:03:43.371 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.371 suites 1 1 n/a 0 0 00:03:43.371 tests 4 4 4 0 0 00:03:43.371 asserts 49 49 49 0 n/a 00:03:43.371 00:03:43.371 Elapsed time = 0.000 seconds 00:03:43.371 00:03:43.371 real 0m0.009s 00:03:43.371 user 0m0.000s 00:03:43.371 sys 0m0.008s 00:03:43.371 05:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.371 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.371 ************************************ 00:03:43.371 END TEST unittest_nvmf_transport 00:03:43.371 ************************************ 00:03:43.371 05:55:51 -- unit/unittest.sh@252 -- # run_test unittest_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:43.371 05:55:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:43.371 05:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:43.371 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.371 ************************************ 00:03:43.371 START TEST unittest_rdma 00:03:43.371 ************************************ 00:03:43.371 05:55:51 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:43.371 00:03:43.371 00:03:43.371 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.371 http://cunit.sourceforge.net/ 00:03:43.371 00:03:43.371 00:03:43.371 Suite: rdma_common 00:03:43.371 Test: test_spdk_rdma_pd ...[2024-05-13 05:55:51.566070] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:03:43.371 [2024-05-13 05:55:51.566383] /usr/home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:03:43.371 passed 00:03:43.371 00:03:43.371 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.371 suites 1 1 n/a 0 0 00:03:43.371 tests 1 1 1 0 0 00:03:43.371 asserts 31 31 31 0 n/a 00:03:43.371 00:03:43.371 Elapsed time = 0.000 seconds 00:03:43.371 00:03:43.371 real 0m0.007s 00:03:43.371 user 0m0.000s 00:03:43.371 sys 0m0.008s 00:03:43.371 05:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.371 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.371 ************************************ 00:03:43.371 END TEST unittest_rdma 00:03:43.371 ************************************ 00:03:43.371 05:55:51 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:43.371 05:55:51 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:03:43.371 05:55:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:43.371 05:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:43.371 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.371 ************************************ 00:03:43.371 START TEST unittest_nvmf 00:03:43.371 ************************************ 00:03:43.371 05:55:51 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:03:43.371 05:55:51 -- unit/unittest.sh@106 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:03:43.371 00:03:43.371 00:03:43.371 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.371 http://cunit.sourceforge.net/ 00:03:43.371 00:03:43.371 00:03:43.371 Suite: nvmf 00:03:43.371 Test: test_get_log_page ...[2024-05-13 05:55:51.630258] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:03:43.371 passed 00:03:43.371 Test: test_process_fabrics_cmd ...passed 00:03:43.371 Test: test_connect ...[2024-05-13 05:55:51.630815] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:03:43.371 [2024-05-13 05:55:51.630877] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:03:43.371 [2024-05-13 05:55:51.630903] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:03:43.371 [2024-05-13 05:55:51.630926] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:03:43.371 [2024-05-13 05:55:51.630948] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:03:43.371 [2024-05-13 05:55:51.630971] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 787:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:03:43.371 [2024-05-13 05:55:51.630992] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 793:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:03:43.371 [2024-05-13 05:55:51.631013] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
00:03:43.371 [2024-05-13 05:55:51.631041] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:03:43.371 [2024-05-13 05:55:51.631067] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:03:43.371 [2024-05-13 05:55:51.631109] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:03:43.371 [2024-05-13 05:55:51.631134] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 600:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:03:43.371 [2024-05-13 05:55:51.631158] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 607:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:03:43.371 [2024-05-13 05:55:51.631198] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 624:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:03:43.371 [2024-05-13 05:55:51.631227] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:03:43.371 passed 00:03:43.371 Test: test_get_ns_id_desc_list ...[2024-05-13 05:55:51.631259] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group 0x0) 00:03:43.371 passed 00:03:43.371 Test: test_identify_ns ...[2024-05-13 05:55:51.631373] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:43.371 [2024-05-13 05:55:51.631459] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:03:43.371 [2024-05-13 05:55:51.631515] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:03:43.371 passed 00:03:43.371 Test: test_identify_ns_iocs_specific ...[2024-05-13 05:55:51.631567] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:43.371 [2024-05-13 05:55:51.631665] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:43.371 passed 00:03:43.371 Test: test_reservation_write_exclusive ...passed 00:03:43.371 Test: test_reservation_exclusive_access ...passed 00:03:43.371 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:03:43.371 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:03:43.371 Test: test_reservation_notification_log_page ...passed 00:03:43.371 Test: test_get_dif_ctx ...passed 00:03:43.371 Test: test_set_get_features ...[2024-05-13 05:55:51.631864] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:43.371 [2024-05-13 05:55:51.631888] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:43.371 [2024-05-13 05:55:51.631908] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:03:43.371 [2024-05-13 05:55:51.631927] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:03:43.371 passed 00:03:43.371 Test: test_identify_ctrlr ...passed 
00:03:43.371 Test: test_identify_ctrlr_iocs_specific ...passed 00:03:43.371 Test: test_custom_admin_cmd ...passed 00:03:43.371 Test: test_fused_compare_and_write ...[2024-05-13 05:55:51.632104] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:03:43.371 passed 00:03:43.371 Test: test_multi_async_event_reqs ...[2024-05-13 05:55:51.632126] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:43.371 [2024-05-13 05:55:51.632147] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:43.371 passed 00:03:43.371 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:03:43.371 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:03:43.371 Test: test_multi_async_events ...passed 00:03:43.371 Test: test_rae ...passed 00:03:43.371 Test: test_nvmf_ctrlr_create_destruct ...passed 00:03:43.371 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:03:43.371 Test: test_spdk_nvmf_request_zcopy_start ...[2024-05-13 05:55:51.632305] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:03:43.371 passed 00:03:43.371 Test: test_zcopy_read ...passed 00:03:43.371 Test: test_zcopy_write ...passed 00:03:43.371 Test: test_nvmf_property_set ...passed 00:03:43.371 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-05-13 05:55:51.632374] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:43.371 passed 00:03:43.371 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-05-13 05:55:51.632395] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:43.372 [2024-05-13 05:55:51.632419] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:03:43.372 passed 00:03:43.372 [2024-05-13 05:55:51.632439] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:03:43.372 [2024-05-13 05:55:51.632459] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:03:43.372 00:03:43.372 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.372 suites 1 1 n/a 0 0 00:03:43.372 tests 30 30 30 0 0 00:03:43.372 asserts 885 885 885 0 n/a 00:03:43.372 00:03:43.372 Elapsed time = 0.008 seconds 00:03:43.372 05:55:51 -- unit/unittest.sh@107 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:03:43.372 00:03:43.372 00:03:43.372 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.372 http://cunit.sourceforge.net/ 00:03:43.372 00:03:43.372 00:03:43.372 Suite: nvmf 00:03:43.372 Test: test_get_rw_params ...passed 00:03:43.372 Test: test_lba_in_range ...passed 00:03:43.372 Test: test_get_dif_ctx ...passed 00:03:43.372 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:03:43.372 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-05-13 05:55:51.641457] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 
435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:03:43.372 passed 00:03:43.372 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-05-13 05:55:51.641696] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:03:43.372 [2024-05-13 05:55:51.641747] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 451:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:03:43.372 [2024-05-13 05:55:51.641770] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:03:43.372 [2024-05-13 05:55:51.641789] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 954:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:03:43.372 passed 00:03:43.372 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-05-13 05:55:51.641808] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:03:43.372 [2024-05-13 05:55:51.641823] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 397:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:03:43.372 [2024-05-13 05:55:51.641839] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:03:43.372 passed 00:03:43.372 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:03:43.372 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:03:43.372 00:03:43.372 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.372 suites 1 1 n/a 0 0 00:03:43.372 tests 9 9 9 0 0 00:03:43.372 asserts 157 157 157 0 n/a 00:03:43.372 00:03:43.372 Elapsed time = 0.000 seconds 00:03:43.372 [2024-05-13 05:55:51.641854] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:03:43.372 05:55:51 -- unit/unittest.sh@108 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:03:43.372 00:03:43.372 00:03:43.372 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.372 http://cunit.sourceforge.net/ 00:03:43.372 00:03:43.372 00:03:43.372 Suite: nvmf 00:03:43.372 Test: test_discovery_log ...passed 00:03:43.372 Test: test_discovery_log_with_filters ...passed 00:03:43.372 00:03:43.372 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.372 suites 1 1 n/a 0 0 00:03:43.372 tests 2 2 2 0 0 00:03:43.372 asserts 238 238 238 0 n/a 00:03:43.372 00:03:43.372 Elapsed time = 0.000 seconds 00:03:43.372 05:55:51 -- unit/unittest.sh@109 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:03:43.372 00:03:43.372 00:03:43.372 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.372 http://cunit.sourceforge.net/ 00:03:43.372 00:03:43.372 00:03:43.372 Suite: nvmf 00:03:43.372 Test: nvmf_test_create_subsystem ...[2024-05-13 05:55:51.658905] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:03:43.372 [2024-05-13 05:55:51.659267] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 
00:03:43.372 [2024-05-13 05:55:51.659315] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:03:43.372 [2024-05-13 05:55:51.659339] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:03:43.372 [2024-05-13 05:55:51.659360] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:03:43.372 [2024-05-13 05:55:51.659381] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:03:43.372 [2024-05-13 05:55:51.659426] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:03:43.372 [2024-05-13 05:55:51.659487] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:03:43.372 [2024-05-13 05:55:51.659516] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:03:43.372 [2024-05-13 05:55:51.659538] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:43.372 [2024-05-13 05:55:51.659559] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:43.372 passed 00:03:43.372 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-05-13 05:55:51.659662] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:03:43.372 [2024-05-13 05:55:51.659687] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1734:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:03:43.372 passed 00:03:43.372 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:03:43.372 Test: test_reservation_register ...[2024-05-13 05:55:51.659759] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:43.372 [2024-05-13 05:55:51.659792] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2841:nvmf_ns_reservation_register: *ERROR*: No registrant 00:03:43.372 passed 00:03:43.372 Test: test_reservation_register_with_ptpl ...passed 00:03:43.372 Test: test_reservation_acquire_preempt_1 ...[2024-05-13 05:55:51.660164] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:43.372 passed 00:03:43.372 Test: test_reservation_acquire_release_with_ptpl ...passed 
00:03:43.372 Test: test_reservation_release ...[2024-05-13 05:55:51.660472] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:43.372 passed 00:03:43.372 Test: test_reservation_unregister_notification ...passed 00:03:43.372 Test: test_reservation_release_notification ...[2024-05-13 05:55:51.660515] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:43.372 [2024-05-13 05:55:51.660549] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:43.372 passed 00:03:43.372 Test: test_reservation_release_notification_write_exclusive ...passed 00:03:43.372 Test: test_reservation_clear_notification ...[2024-05-13 05:55:51.660592] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:43.372 passed 00:03:43.372 Test: test_reservation_preempt_notification ...[2024-05-13 05:55:51.660623] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:43.372 passed 00:03:43.372 Test: test_spdk_nvmf_ns_event ...[2024-05-13 05:55:51.660656] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2785:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:43.372 passed 00:03:43.372 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:03:43.372 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:03:43.372 Test: test_spdk_nvmf_subsystem_add_host ...[2024-05-13 05:55:51.660805] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 261:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:03:43.372 [2024-05-13 05:55:51.660844] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:03:43.372 passed 00:03:43.372 Test: test_nvmf_ns_reservation_report ...passed 00:03:43.372 Test: test_nvmf_nqn_is_valid ...[2024-05-13 05:55:51.660879] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3147:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:03:43.372 [2024-05-13 05:55:51.660928] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:03:43.372 [2024-05-13 05:55:51.660950] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:74522651-10ed-11ef-ba60-3508ead7bdd": uuid is not the correct length 00:03:43.372 [2024-05-13 05:55:51.660972] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 
00:03:43.372 passed 00:03:43.372 Test: test_nvmf_ns_reservation_restore ...[2024-05-13 05:55:51.661031] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2340:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:03:43.372 passed 00:03:43.372 Test: test_nvmf_subsystem_state_change ...passed 00:03:43.372 Test: test_nvmf_reservation_custom_ops ...passed 00:03:43.372 00:03:43.372 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.372 suites 1 1 n/a 0 0 00:03:43.372 tests 22 22 22 0 0 00:03:43.372 asserts 405 405 405 0 n/a 00:03:43.372 00:03:43.372 Elapsed time = 0.008 seconds 00:03:43.372 05:55:51 -- unit/unittest.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:03:43.372 00:03:43.372 00:03:43.372 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.372 http://cunit.sourceforge.net/ 00:03:43.372 00:03:43.373 00:03:43.373 Suite: nvmf 00:03:43.373 Test: test_nvmf_tcp_create ...[2024-05-13 05:55:51.677492] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:03:43.373 passed 00:03:43.634 Test: test_nvmf_tcp_destroy ...passed 00:03:43.634 Test: test_nvmf_tcp_poll_group_create ...passed 00:03:43.634 Test: test_nvmf_tcp_send_c2h_data ...passed 00:03:43.634 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:03:43.634 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:03:43.634 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:03:43.634 Test: test_nvmf_tcp_send_c2h_term_req ...passed 00:03:43.634 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:03:43.634 Test: test_nvmf_tcp_icreq_handle ...passed 00:03:43.634 Test: test_nvmf_tcp_check_xfer_type ...passed 00:03:43.634 Test: test_nvmf_tcp_invalid_sgl ...passed 00:03:43.634 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-05-13 05:55:51.689981] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.634 [2024-05-13 05:55:51.690003] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb4d0 is same with the state(5) to be set 00:03:43.634 [2024-05-13 05:55:51.690014] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb4d0 is same with the state(5) to be set 00:03:43.634 [2024-05-13 05:55:51.690023] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.634 [2024-05-13 05:55:51.690031] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb4d0 is same with the state(5) to be set 00:03:43.634 [2024-05-13 05:55:51.690050] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:43.634 [2024-05-13 05:55:51.690058] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.634 [2024-05-13 05:55:51.690066] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb3e8 is same with the state(5) to be set 00:03:43.634 [2024-05-13 05:55:51.690074] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:43.634 [2024-05-13 05:55:51.690082] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb3e8 is same with the state(5) to be set 00:03:43.634 [2024-05-13 05:55:51.690090] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.634 [2024-05-13 05:55:51.690097] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb3e8 is same with the state(5) to be set 00:03:43.634 [2024-05-13 05:55:51.690106] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:03:43.634 [2024-05-13 05:55:51.690113] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb3e8 is same with the state(5) to be set 00:03:43.634 [2024-05-13 05:55:51.690134] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2485:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:03:43.634 [2024-05-13 05:55:51.690142] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.634 [2024-05-13 05:55:51.690150] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb3e8 is same with the state(5) to be set 00:03:43.634 [2024-05-13 05:55:51.690160] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x8205cac60 00:03:43.634 [2024-05-13 05:55:51.690168] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.634 [2024-05-13 05:55:51.690175] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb4d0 is same with the state(5) to be set 00:03:43.634 [2024-05-13 05:55:51.690185] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2275:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x8205cb4d0 00:03:43.634 [2024-05-13 05:55:51.690192] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.634 [2024-05-13 05:55:51.690200] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb4d0 is same with the state(5) to be set 00:03:43.634 [2024-05-13 05:55:51.690208] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:03:43.634 [2024-05-13 05:55:51.690215] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.634 [2024-05-13 05:55:51.690223] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb4d0 is same with the state(5) to be set 00:03:43.634 [2024-05-13 05:55:51.690231] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:03:43.634 [2024-05-13 05:55:51.690239] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.635 [2024-05-13 05:55:51.690247] 
/usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb4d0 is same with the state(5) to be set 00:03:43.635 passed 00:03:43.635 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-05-13 05:55:51.690260] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.635 [2024-05-13 05:55:51.690268] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb4d0 is same with the state(5) to be set 00:03:43.635 [2024-05-13 05:55:51.690276] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.635 [2024-05-13 05:55:51.690284] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb4d0 is same with the state(5) to be set 00:03:43.635 [2024-05-13 05:55:51.690292] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.635 [2024-05-13 05:55:51.690300] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb4d0 is same with the state(5) to be set 00:03:43.635 [2024-05-13 05:55:51.690308] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.635 [2024-05-13 05:55:51.690326] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb4d0 is same with the state(5) to be set 00:03:43.635 [2024-05-13 05:55:51.690335] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.635 [2024-05-13 05:55:51.690342] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb4d0 is same with the state(5) to be set 00:03:43.635 [2024-05-13 05:55:51.690351] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:43.635 [2024-05-13 05:55:51.690358] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1575:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8205cb4d0 is same with the state(5) to be set 00:03:43.635 passed 00:03:43.635 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:03:43.635 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-05-13 05:55:51.695919] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:03:43.635 [2024-05-13 05:55:51.695941] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:03:43.635 passed 00:03:43.635 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-05-13 05:55:51.696065] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:03:43.635 [2024-05-13 05:55:51.696078] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 
00:03:43.635 [2024-05-13 05:55:51.696144] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:03:43.635 passed 00:03:43.635 00:03:43.635 [2024-05-13 05:55:51.696154] /usr/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:03:43.635 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.635 suites 1 1 n/a 0 0 00:03:43.635 tests 17 17 17 0 0 00:03:43.635 asserts 222 222 222 0 n/a 00:03:43.635 00:03:43.635 Elapsed time = 0.016 seconds 00:03:43.635 05:55:51 -- unit/unittest.sh@111 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:03:43.635 00:03:43.635 00:03:43.635 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.635 http://cunit.sourceforge.net/ 00:03:43.635 00:03:43.635 00:03:43.635 Suite: nvmf 00:03:43.635 Test: test_nvmf_tgt_create_poll_group ...passed 00:03:43.635 00:03:43.635 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.635 suites 1 1 n/a 0 0 00:03:43.635 tests 1 1 1 0 0 00:03:43.635 asserts 17 17 17 0 n/a 00:03:43.635 00:03:43.635 Elapsed time = 0.000 seconds 00:03:43.635 00:03:43.635 real 0m0.090s 00:03:43.635 user 0m0.053s 00:03:43.635 sys 0m0.031s 00:03:43.635 05:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.635 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.635 ************************************ 00:03:43.635 END TEST unittest_nvmf 00:03:43.635 ************************************ 00:03:43.635 05:55:51 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:43.635 05:55:51 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:43.635 05:55:51 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:43.635 05:55:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:43.635 05:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:43.635 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.635 ************************************ 00:03:43.635 START TEST unittest_nvmf_rdma 00:03:43.635 ************************************ 00:03:43.635 05:55:51 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:43.635 00:03:43.635 00:03:43.635 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.635 http://cunit.sourceforge.net/ 00:03:43.635 00:03:43.635 00:03:43.635 Suite: nvmf 00:03:43.635 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-05-13 05:55:51.761291] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1917:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:03:43.635 [2024-05-13 05:55:51.761584] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1967:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:03:43.635 [2024-05-13 05:55:51.761623] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1967:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:03:43.635 passed 00:03:43.635 Test: test_spdk_nvmf_rdma_request_process ...passed 00:03:43.635 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:03:43.635 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:03:43.635 Test: 
test_nvmf_rdma_opts_init ...passed 00:03:43.635 Test: test_nvmf_rdma_request_free_data ...passed 00:03:43.635 Test: test_nvmf_rdma_update_ibv_state ...[2024-05-13 05:55:51.761859] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:03:43.635 passed 00:03:43.635 Test: test_nvmf_rdma_resources_create ...[2024-05-13 05:55:51.761883] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:03:43.635 passed 00:03:43.635 Test: test_nvmf_rdma_qpair_compare ...passed 00:03:43.635 Test: test_nvmf_rdma_resize_cq ...[2024-05-13 05:55:51.762764] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1007:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:03:43.635 Using CQ of insufficient size may lead to CQ overrun 00:03:43.635 [2024-05-13 05:55:51.762785] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1012:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:03:43.635 passed 00:03:43.635 00:03:43.635 [2024-05-13 05:55:51.762844] /usr/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:43.635 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.635 suites 1 1 n/a 0 0 00:03:43.635 tests 10 10 10 0 0 00:03:43.635 asserts 584 584 584 0 n/a 00:03:43.635 00:03:43.635 Elapsed time = 0.000 seconds 00:03:43.635 00:03:43.635 real 0m0.008s 00:03:43.635 user 0m0.006s 00:03:43.635 sys 0m0.005s 00:03:43.635 05:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.635 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.635 ************************************ 00:03:43.635 END TEST unittest_nvmf_rdma 00:03:43.635 ************************************ 00:03:43.635 05:55:51 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:43.635 05:55:51 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:03:43.635 05:55:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:43.635 05:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:43.635 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.635 ************************************ 00:03:43.635 START TEST unittest_scsi 00:03:43.635 ************************************ 00:03:43.635 05:55:51 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:03:43.635 05:55:51 -- unit/unittest.sh@115 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:03:43.635 00:03:43.635 00:03:43.635 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.635 http://cunit.sourceforge.net/ 00:03:43.635 00:03:43.635 00:03:43.635 Suite: dev_suite 00:03:43.635 Test: dev_destruct_null_dev ...passed 00:03:43.635 Test: dev_destruct_zero_luns ...passed 00:03:43.635 Test: dev_destruct_null_lun ...passed 00:03:43.635 Test: dev_destruct_success ...passed 00:03:43.635 Test: dev_construct_num_luns_zero ...passed 00:03:43.635 Test: dev_construct_no_lun_zero ...passed 00:03:43.635 Test: dev_construct_null_lun ...passed 00:03:43.635 Test: dev_construct_name_too_long ...[2024-05-13 05:55:51.807871] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:03:43.635 [2024-05-13 05:55:51.808082] 
/usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:03:43.635 [2024-05-13 05:55:51.808100] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:03:43.635 [2024-05-13 05:55:51.808115] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:03:43.635 passed 00:03:43.635 Test: dev_construct_success ...passed 00:03:43.635 Test: dev_construct_success_lun_zero_not_first ...passed 00:03:43.635 Test: dev_queue_mgmt_task_success ...passed 00:03:43.635 Test: dev_queue_task_success ...passed 00:03:43.635 Test: dev_stop_success ...passed 00:03:43.635 Test: dev_add_port_max_ports ...passed[2024-05-13 05:55:51.808187] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:03:43.635 00:03:43.635 Test: dev_add_port_construct_failure1 ...passed 00:03:43.635 Test: dev_add_port_construct_failure2 ...passed 00:03:43.636 Test: dev_add_port_success1 ...passed 00:03:43.636 Test: dev_add_port_success2 ...passed 00:03:43.636 Test: dev_add_port_success3 ...passed 00:03:43.636 Test: dev_find_port_by_id_num_ports_zero ...passed 00:03:43.636 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:03:43.636 Test: dev_find_port_by_id_success ...passed 00:03:43.636 Test: dev_add_lun_bdev_not_found ...passed 00:03:43.636 Test: dev_add_lun_no_free_lun_id ...[2024-05-13 05:55:51.808218] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:03:43.636 [2024-05-13 05:55:51.808232] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:03:43.636 passed 00:03:43.636 Test: dev_add_lun_success1 ...passed 00:03:43.636 Test: dev_add_lun_success2 ...passed 00:03:43.636 Test: dev_check_pending_tasks ...passed 00:03:43.636 Test: dev_iterate_luns ...passed 00:03:43.636 Test: dev_find_free_lun ...[2024-05-13 05:55:51.808455] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:03:43.636 passed 00:03:43.636 00:03:43.636 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.636 suites 1 1 n/a 0 0 00:03:43.636 tests 29 29 29 0 0 00:03:43.636 asserts 97 97 97 0 n/a 00:03:43.636 00:03:43.636 Elapsed time = 0.000 seconds 00:03:43.636 05:55:51 -- unit/unittest.sh@116 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:03:43.636 00:03:43.636 00:03:43.636 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.636 http://cunit.sourceforge.net/ 00:03:43.636 00:03:43.636 00:03:43.636 Suite: lun_suite 00:03:43.636 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-05-13 05:55:51.817746] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:03:43.636 passed 00:03:43.636 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-05-13 05:55:51.818210] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:03:43.636 passed 00:03:43.636 Test: 
lun_task_mgmt_execute_lun_reset ...passed 00:03:43.636 Test: lun_task_mgmt_execute_target_reset ...passed 00:03:43.636 Test: lun_task_mgmt_execute_invalid_case ...passed 00:03:43.636 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:03:43.636 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:03:43.636 Test: lun_append_task_null_lun_not_supported ...passed 00:03:43.636 Test: lun_execute_scsi_task_pending ...passed 00:03:43.636 Test: lun_execute_scsi_task_complete ...passed 00:03:43.636 Test: lun_execute_scsi_task_resize ...passed 00:03:43.636 Test: lun_destruct_success ...[2024-05-13 05:55:51.818327] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:03:43.636 passed 00:03:43.636 Test: lun_construct_null_ctx ...passed 00:03:43.636 Test: lun_construct_success ...[2024-05-13 05:55:51.818437] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:03:43.636 passed 00:03:43.636 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:03:43.636 Test: lun_reset_task_suspend_scsi_task ...passed 00:03:43.636 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:03:43.636 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:03:43.636 00:03:43.636 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.636 suites 1 1 n/a 0 0 00:03:43.636 tests 18 18 18 0 0 00:03:43.636 asserts 153 153 153 0 n/a 00:03:43.636 00:03:43.636 Elapsed time = 0.000 seconds 00:03:43.636 05:55:51 -- unit/unittest.sh@117 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:03:43.636 00:03:43.636 00:03:43.636 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.636 http://cunit.sourceforge.net/ 00:03:43.636 00:03:43.636 00:03:43.636 Suite: scsi_suite 00:03:43.636 Test: scsi_init ...passed 00:03:43.636 00:03:43.636 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.636 suites 1 1 n/a 0 0 00:03:43.636 tests 1 1 1 0 0 00:03:43.636 asserts 1 1 1 0 n/a 00:03:43.636 00:03:43.636 Elapsed time = 0.000 seconds 00:03:43.636 05:55:51 -- unit/unittest.sh@118 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:03:43.636 00:03:43.636 00:03:43.636 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.636 http://cunit.sourceforge.net/ 00:03:43.636 00:03:43.636 00:03:43.636 Suite: translation_suite 00:03:43.636 Test: mode_select_6_test ...passed 00:03:43.636 Test: mode_select_6_test2 ...passed 00:03:43.636 Test: mode_sense_6_test ...passed 00:03:43.636 Test: mode_sense_10_test ...passed 00:03:43.636 Test: inquiry_evpd_test ...passed 00:03:43.636 Test: inquiry_standard_test ...passed 00:03:43.636 Test: inquiry_overflow_test ...passed 00:03:43.636 Test: task_complete_test ...passed 00:03:43.636 Test: lba_range_test ...passed 00:03:43.636 Test: xfer_len_test ...[2024-05-13 05:55:51.836345] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:03:43.636 passed 00:03:43.636 Test: xfer_test ...passed 00:03:43.636 Test: scsi_name_padding_test ...passed 00:03:43.636 Test: get_dif_ctx_test ...passed 00:03:43.636 Test: unmap_split_test ...passed 00:03:43.636 00:03:43.636 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.636 suites 1 1 n/a 0 0 00:03:43.636 tests 14 14 14 0 0 00:03:43.636 asserts 1200 1200 1200 0 n/a 00:03:43.636 00:03:43.636 Elapsed time = 0.000 seconds 00:03:43.636 05:55:51 -- 
unit/unittest.sh@119 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:03:43.636 00:03:43.636 00:03:43.636 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.636 http://cunit.sourceforge.net/ 00:03:43.636 00:03:43.636 00:03:43.636 Suite: reservation_suite 00:03:43.636 Test: test_reservation_register ...[2024-05-13 05:55:51.841803] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.636 passed 00:03:43.636 Test: test_reservation_reserve ...passed 00:03:43.636 Test: test_reservation_preempt_non_all_regs ...[2024-05-13 05:55:51.841965] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.636 [2024-05-13 05:55:51.841982] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:03:43.636 [2024-05-13 05:55:51.841994] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:03:43.636 [2024-05-13 05:55:51.842011] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.636 passed 00:03:43.636 Test: test_reservation_preempt_all_regs ...passed 00:03:43.636 Test: test_reservation_cmds_conflict ...[2024-05-13 05:55:51.842023] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:03:43.636 [2024-05-13 05:55:51.842041] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.636 [2024-05-13 05:55:51.842059] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.636 [2024-05-13 05:55:51.842071] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:03:43.636 [2024-05-13 05:55:51.842083] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:43.636 [2024-05-13 05:55:51.842093] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:43.636 [2024-05-13 05:55:51.842103] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:43.636 passed 00:03:43.636 Test: test_scsi2_reserve_release ...passed 00:03:43.636 Test: test_pr_with_scsi2_reserve_release ...passed 00:03:43.636 00:03:43.636 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.636 suites 1 1 n/a 0 0 00:03:43.636 tests 7 7 7 0 0 00:03:43.636 asserts 257 257 257 0 n/a 00:03:43.636 00:03:43.636 Elapsed time = 0.000 seconds 00:03:43.636 [2024-05-13 05:55:51.842113] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 846:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:43.636 [2024-05-13 05:55:51.842134] /usr/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 273:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:43.636 00:03:43.636 real 0m0.039s 00:03:43.636 user 0m0.014s 00:03:43.636 sys 0m0.026s 00:03:43.636 
05:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.636 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.636 ************************************ 00:03:43.636 END TEST unittest_scsi 00:03:43.636 ************************************ 00:03:43.636 05:55:51 -- unit/unittest.sh@276 -- # uname -s 00:03:43.636 05:55:51 -- unit/unittest.sh@276 -- # '[' FreeBSD = Linux ']' 00:03:43.636 05:55:51 -- unit/unittest.sh@279 -- # run_test unittest_thread /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:43.636 05:55:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:43.636 05:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:43.636 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.636 ************************************ 00:03:43.636 START TEST unittest_thread 00:03:43.636 ************************************ 00:03:43.636 05:55:51 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:43.636 00:03:43.636 00:03:43.636 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.636 http://cunit.sourceforge.net/ 00:03:43.636 00:03:43.636 00:03:43.636 Suite: io_channel 00:03:43.636 Test: thread_alloc ...passed 00:03:43.636 Test: thread_send_msg ...passed 00:03:43.636 Test: thread_poller ...passed 00:03:43.636 Test: poller_pause ...passed 00:03:43.636 Test: thread_for_each ...passed 00:03:43.636 Test: for_each_channel_remove ...passed 00:03:43.636 Test: for_each_channel_unreg ...[2024-05-13 05:55:51.896238] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2164:spdk_io_device_register: *ERROR*: io_device 0x820ea09c4 already registered (old:0x82c6aa000 new:0x82c6aa180) 00:03:43.636 passed 00:03:43.636 Test: thread_name ...passed 00:03:43.637 Test: channel ...[2024-05-13 05:55:51.897026] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x226918 00:03:43.637 passed 00:03:43.637 Test: channel_destroy_races ...passed 00:03:43.637 Test: thread_exit_test ...[2024-05-13 05:55:51.897822] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 630:thread_exit: *ERROR*: thread 0x82c66fa80 got timeout, and move it to the exited state forcefully 00:03:43.637 passed 00:03:43.637 Test: thread_update_stats_test ...passed 00:03:43.637 Test: nested_channel ...passed 00:03:43.637 Test: device_unregister_and_thread_exit_race ...passed 00:03:43.637 Test: cache_closest_timed_poller ...passed 00:03:43.637 Test: multi_timed_pollers_have_same_expiration ...passed 00:03:43.637 Test: io_device_lookup ...passed 00:03:43.637 Test: spdk_spin ...[2024-05-13 05:55:51.899543] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:43.637 [2024-05-13 05:55:51.899565] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820ea09c0 00:03:43.637 [2024-05-13 05:55:51.899628] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:43.637 [2024-05-13 05:55:51.899819] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:03:43.637 [2024-05-13 05:55:51.899845] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 
0x820ea09c0 00:03:43.637 [2024-05-13 05:55:51.899857] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:43.637 [2024-05-13 05:55:51.899869] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820ea09c0 00:03:43.637 [2024-05-13 05:55:51.899881] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:43.637 [2024-05-13 05:55:51.899892] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820ea09c0 00:03:43.637 [2024-05-13 05:55:51.899905] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:03:43.637 [2024-05-13 05:55:51.899916] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x820ea09c0 00:03:43.637 passed 00:03:43.637 Test: for_each_channel_and_thread_exit_race ...passed 00:03:43.637 Test: for_each_thread_and_thread_exit_race ...passed 00:03:43.637 00:03:43.637 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.637 suites 1 1 n/a 0 0 00:03:43.637 tests 20 20 20 0 0 00:03:43.637 asserts 409 409 409 0 n/a 00:03:43.637 00:03:43.637 Elapsed time = 0.008 seconds 00:03:43.637 00:03:43.637 real 0m0.014s 00:03:43.637 user 0m0.013s 00:03:43.637 sys 0m0.004s 00:03:43.637 05:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.637 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.637 ************************************ 00:03:43.637 END TEST unittest_thread 00:03:43.637 ************************************ 00:03:43.900 05:55:51 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:43.900 05:55:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:43.900 05:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:43.900 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.900 ************************************ 00:03:43.900 START TEST unittest_iobuf 00:03:43.900 ************************************ 00:03:43.900 05:55:51 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:43.900 00:03:43.900 00:03:43.900 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.900 http://cunit.sourceforge.net/ 00:03:43.900 00:03:43.900 00:03:43.900 Suite: io_channel 00:03:43.900 Test: iobuf ...passed 00:03:43.900 Test: iobuf_cache ...[2024-05-13 05:55:51.959726] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 304:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:43.900 [2024-05-13 05:55:51.960011] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 306:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:43.900 [2024-05-13 05:55:51.960057] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 316:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. 
You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:03:43.900 [2024-05-13 05:55:51.960075] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 318:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:43.900 [2024-05-13 05:55:51.960097] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 304:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:43.900 [2024-05-13 05:55:51.960113] /usr/home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 306:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:43.900 passed 00:03:43.900 00:03:43.900 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.900 suites 1 1 n/a 0 0 00:03:43.900 tests 2 2 2 0 0 00:03:43.900 asserts 107 107 107 0 n/a 00:03:43.900 00:03:43.900 Elapsed time = 0.000 seconds 00:03:43.900 00:03:43.900 real 0m0.008s 00:03:43.900 user 0m0.000s 00:03:43.900 sys 0m0.008s 00:03:43.900 05:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.900 05:55:51 -- common/autotest_common.sh@10 -- # set +x 00:03:43.900 ************************************ 00:03:43.900 END TEST unittest_iobuf 00:03:43.900 ************************************ 00:03:43.900 05:55:52 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:03:43.900 05:55:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:43.900 05:55:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:43.900 05:55:52 -- common/autotest_common.sh@10 -- # set +x 00:03:43.900 ************************************ 00:03:43.900 START TEST unittest_util 00:03:43.900 ************************************ 00:03:43.900 05:55:52 -- common/autotest_common.sh@1104 -- # unittest_util 00:03:43.900 05:55:52 -- unit/unittest.sh@132 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:03:43.900 00:03:43.900 00:03:43.900 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.900 http://cunit.sourceforge.net/ 00:03:43.900 00:03:43.900 00:03:43.900 Suite: base64 00:03:43.900 Test: test_base64_get_encoded_strlen ...passed 00:03:43.900 Test: test_base64_get_decoded_len ...passed 00:03:43.900 Test: test_base64_encode ...passed 00:03:43.900 Test: test_base64_decode ...passed 00:03:43.900 Test: test_base64_urlsafe_encode ...passed 00:03:43.900 Test: test_base64_urlsafe_decode ...passed 00:03:43.900 00:03:43.900 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.900 suites 1 1 n/a 0 0 00:03:43.900 tests 6 6 6 0 0 00:03:43.900 asserts 112 112 112 0 n/a 00:03:43.900 00:03:43.900 Elapsed time = 0.000 seconds 00:03:43.900 05:55:52 -- unit/unittest.sh@133 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:03:43.900 00:03:43.900 00:03:43.900 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.900 http://cunit.sourceforge.net/ 00:03:43.900 00:03:43.900 00:03:43.900 Suite: bit_array 00:03:43.900 Test: test_1bit ...passed 00:03:43.900 Test: test_64bit ...passed 00:03:43.900 Test: test_find ...passed 00:03:43.900 Test: test_resize ...passed 00:03:43.900 Test: test_errors ...passed 00:03:43.900 Test: test_count ...passed 00:03:43.900 Test: test_mask_store_load ...passed 00:03:43.900 Test: test_mask_clear ...passed 00:03:43.900 00:03:43.900 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.900 suites 1 1 n/a 0 0 00:03:43.900 tests 8 8 8 0 0 00:03:43.900 asserts 5075 5075 5075 0 
n/a 00:03:43.900 00:03:43.900 Elapsed time = 0.000 seconds 00:03:43.900 05:55:52 -- unit/unittest.sh@134 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:03:43.900 00:03:43.900 00:03:43.900 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.900 http://cunit.sourceforge.net/ 00:03:43.900 00:03:43.900 00:03:43.900 Suite: cpuset 00:03:43.900 Test: test_cpuset ...passed 00:03:43.900 Test: test_cpuset_parse ...[2024-05-13 05:55:52.039042] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:03:43.900 [2024-05-13 05:55:52.039404] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:03:43.900 [2024-05-13 05:55:52.039449] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:03:43.900 [2024-05-13 05:55:52.039472] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:03:43.900 [2024-05-13 05:55:52.039494] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:03:43.900 [2024-05-13 05:55:52.039514] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:03:43.900 [2024-05-13 05:55:52.039536] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:03:43.900 [2024-05-13 05:55:52.039557] /usr/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:03:43.900 passed 00:03:43.900 Test: test_cpuset_fmt ...passed 00:03:43.900 00:03:43.901 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.901 suites 1 1 n/a 0 0 00:03:43.901 tests 3 3 3 0 0 00:03:43.901 asserts 65 65 65 0 n/a 00:03:43.901 00:03:43.901 Elapsed time = 0.000 seconds 00:03:43.901 05:55:52 -- unit/unittest.sh@135 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:03:43.901 00:03:43.901 00:03:43.901 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.901 http://cunit.sourceforge.net/ 00:03:43.901 00:03:43.901 00:03:43.901 Suite: crc16 00:03:43.901 Test: test_crc16_t10dif ...passed 00:03:43.901 Test: test_crc16_t10dif_seed ...passed 00:03:43.901 Test: test_crc16_t10dif_copy ...passed 00:03:43.901 00:03:43.901 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.901 suites 1 1 n/a 0 0 00:03:43.901 tests 3 3 3 0 0 00:03:43.901 asserts 5 5 5 0 n/a 00:03:43.901 00:03:43.901 Elapsed time = 0.000 seconds 00:03:43.901 05:55:52 -- unit/unittest.sh@136 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:03:43.901 00:03:43.901 00:03:43.901 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.901 http://cunit.sourceforge.net/ 00:03:43.901 00:03:43.901 00:03:43.901 Suite: crc32_ieee 00:03:43.901 Test: test_crc32_ieee ...passed 00:03:43.901 00:03:43.901 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.901 suites 1 1 n/a 0 0 00:03:43.901 tests 1 1 1 0 0 00:03:43.901 asserts 1 1 1 0 n/a 00:03:43.901 00:03:43.901 Elapsed time = 0.000 seconds 00:03:43.901 05:55:52 -- unit/unittest.sh@137 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:03:43.901 00:03:43.901 00:03:43.901 CUnit - A unit testing framework for C - 
Version 2.1-3 00:03:43.901 http://cunit.sourceforge.net/ 00:03:43.901 00:03:43.901 00:03:43.901 Suite: crc32c 00:03:43.901 Test: test_crc32c ...passed 00:03:43.901 Test: test_crc32c_nvme ...passed 00:03:43.901 00:03:43.901 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.901 suites 1 1 n/a 0 0 00:03:43.901 tests 2 2 2 0 0 00:03:43.901 asserts 16 16 16 0 n/a 00:03:43.901 00:03:43.901 Elapsed time = 0.000 seconds 00:03:43.901 05:55:52 -- unit/unittest.sh@138 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:03:43.901 00:03:43.901 00:03:43.901 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.901 http://cunit.sourceforge.net/ 00:03:43.901 00:03:43.901 00:03:43.901 Suite: crc64 00:03:43.901 Test: test_crc64_nvme ...passed 00:03:43.901 00:03:43.901 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.901 suites 1 1 n/a 0 0 00:03:43.901 tests 1 1 1 0 0 00:03:43.901 asserts 4 4 4 0 n/a 00:03:43.901 00:03:43.901 Elapsed time = 0.000 seconds 00:03:43.901 05:55:52 -- unit/unittest.sh@139 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:03:43.901 00:03:43.901 00:03:43.901 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.901 http://cunit.sourceforge.net/ 00:03:43.901 00:03:43.901 00:03:43.901 Suite: string 00:03:43.901 Test: test_parse_ip_addr ...passed 00:03:43.901 Test: test_str_chomp ...passed 00:03:43.901 Test: test_parse_capacity ...passed 00:03:43.901 Test: test_sprintf_append_realloc ...passed 00:03:43.901 Test: test_strtol ...passed 00:03:43.901 Test: test_strtoll ...passed 00:03:43.901 Test: test_strarray ...passed 00:03:43.901 Test: test_strcpy_replace ...passed 00:03:43.901 00:03:43.901 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.901 suites 1 1 n/a 0 0 00:03:43.901 tests 8 8 8 0 0 00:03:43.901 asserts 161 161 161 0 n/a 00:03:43.901 00:03:43.901 Elapsed time = 0.000 seconds 00:03:43.901 05:55:52 -- unit/unittest.sh@140 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:03:43.901 00:03:43.901 00:03:43.901 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.901 http://cunit.sourceforge.net/ 00:03:43.901 00:03:43.901 00:03:43.901 Suite: dif 00:03:43.901 Test: dif_generate_and_verify_test ...[2024-05-13 05:55:52.087857] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:43.901 [2024-05-13 05:55:52.088378] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:43.901 [2024-05-13 05:55:52.088503] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:43.901 [2024-05-13 05:55:52.088622] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:43.901 [2024-05-13 05:55:52.088730] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:43.901 [2024-05-13 05:55:52.088847] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:43.901 passed 00:03:43.901 Test: dif_disable_check_test ...[2024-05-13 05:55:52.089234] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 
00:03:43.901 [2024-05-13 05:55:52.089359] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:43.901 [2024-05-13 05:55:52.089472] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:43.901 passed 00:03:43.901 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-05-13 05:55:52.089842] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:03:43.901 [2024-05-13 05:55:52.089962] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:03:43.901 [2024-05-13 05:55:52.090098] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:03:43.901 [2024-05-13 05:55:52.090218] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:03:43.901 [2024-05-13 05:55:52.090352] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:43.901 [2024-05-13 05:55:52.090494] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:43.901 [2024-05-13 05:55:52.090606] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:43.901 [2024-05-13 05:55:52.090713] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:43.901 [2024-05-13 05:55:52.090821] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:43.901 [2024-05-13 05:55:52.090928] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:43.901 [2024-05-13 05:55:52.091067] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:43.901 passed 00:03:43.901 Test: dif_apptag_mask_test ...[2024-05-13 05:55:52.091195] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:43.901 [2024-05-13 05:55:52.091292] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:43.901 passed 00:03:43.901 Test: dif_sec_512_md_0_error_test ...[2024-05-13 05:55:52.091368] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:43.901 passed 00:03:43.901 Test: dif_sec_4096_md_0_error_test ...[2024-05-13 05:55:52.091394] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:43.901 [2024-05-13 05:55:52.091414] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:03:43.901 passed 00:03:43.901 Test: dif_sec_4100_md_128_error_test ...[2024-05-13 05:55:52.091437] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:43.901 [2024-05-13 05:55:52.091456] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:43.901 passed 00:03:43.901 Test: dif_guard_seed_test ...passed 00:03:43.901 Test: dif_guard_value_test ...passed 00:03:43.901 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:03:43.901 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:03:43.901 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:43.901 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:43.901 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:43.901 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:03:43.901 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:43.901 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:43.901 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:03:43.901 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:43.901 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:03:43.901 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:03:43.901 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:43.901 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:43.901 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:43.901 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:43.901 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:43.901 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:43.901 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-13 05:55:52.101118] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=ff4c, Actual=fd4c 00:03:43.901 [2024-05-13 05:55:52.101616] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fc21, Actual=fe21 00:03:43.901 [2024-05-13 05:55:52.102103] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.102602] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.103090] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=200005c 00:03:43.902 [2024-05-13 05:55:52.103575] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=200005c 00:03:43.902 [2024-05-13 05:55:52.104059] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=25a7 00:03:43.902 [2024-05-13 05:55:52.104421] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fe21, Actual=af58 00:03:43.902 [2024-05-13 05:55:52.104781] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=18b753ed, Actual=1ab753ed 00:03:43.902 [2024-05-13 05:55:52.105188] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=3a574660, Actual=38574660 00:03:43.902 [2024-05-13 05:55:52.105504] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.105821] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.106136] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=20000000000005c 00:03:43.902 [2024-05-13 05:55:52.106461] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=20000000000005c 00:03:43.902 [2024-05-13 05:55:52.106780] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=fe79ea3d 00:03:43.902 [2024-05-13 05:55:52.107016] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=38574660, Actual=fcb74c3d 00:03:43.902 [2024-05-13 05:55:52.107252] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:43.902 [2024-05-13 05:55:52.107568] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:03:43.902 [2024-05-13 05:55:52.107892] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.108208] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.108532] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=25c 00:03:43.902 [2024-05-13 05:55:52.108848] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=25c 00:03:43.902 [2024-05-13 05:55:52.109164] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=4e41cad872ddf061 00:03:43.902 [2024-05-13 05:55:52.109400] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=88010a2d4837a266, Actual=f6a564e94e73984d 00:03:43.902 passed 00:03:43.902 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-05-13 05:55:52.109506] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:03:43.902 [2024-05-13 05:55:52.109549] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:03:43.902 [2024-05-13 05:55:52.109591] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, 
Actual=288 00:03:43.902 [2024-05-13 05:55:52.109632] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.109674] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:43.902 [2024-05-13 05:55:52.109718] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:43.902 [2024-05-13 05:55:52.109759] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=25a7 00:03:43.902 [2024-05-13 05:55:52.109794] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=af58 00:03:43.902 [2024-05-13 05:55:52.109829] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:43.902 [2024-05-13 05:55:52.109870] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:03:43.902 [2024-05-13 05:55:52.109911] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.109953] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.109995] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.902 [2024-05-13 05:55:52.110037] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.902 [2024-05-13 05:55:52.110078] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=fe79ea3d 00:03:43.902 [2024-05-13 05:55:52.110113] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=fcb74c3d 00:03:43.902 [2024-05-13 05:55:52.110147] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:43.902 [2024-05-13 05:55:52.110188] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:03:43.902 [2024-05-13 05:55:52.110229] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.110270] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.110320] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.902 [2024-05-13 05:55:52.110362] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.902 [2024-05-13 05:55:52.110423] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4e41cad872ddf061 00:03:43.902 [2024-05-13 05:55:52.110459] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f6a564e94e73984d 00:03:43.902 passed 00:03:43.902 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-05-13 05:55:52.110497] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:03:43.902 [2024-05-13 05:55:52.110537] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:03:43.902 [2024-05-13 05:55:52.110579] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.110620] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.110661] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:43.902 [2024-05-13 05:55:52.110702] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:43.902 [2024-05-13 05:55:52.110744] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=25a7 00:03:43.902 [2024-05-13 05:55:52.110778] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=af58 00:03:43.902 [2024-05-13 05:55:52.110812] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:43.902 [2024-05-13 05:55:52.110853] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:03:43.902 [2024-05-13 05:55:52.110894] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.110936] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.110977] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.902 [2024-05-13 05:55:52.111018] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.902 [2024-05-13 05:55:52.111059] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=fe79ea3d 00:03:43.902 [2024-05-13 05:55:52.111094] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=fcb74c3d 00:03:43.902 [2024-05-13 05:55:52.111128] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:43.902 
[2024-05-13 05:55:52.111170] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:03:43.902 [2024-05-13 05:55:52.111211] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.111252] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.111293] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.902 [2024-05-13 05:55:52.111334] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.902 [2024-05-13 05:55:52.111376] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4e41cad872ddf061 00:03:43.902 [2024-05-13 05:55:52.111410] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f6a564e94e73984d 00:03:43.902 passed 00:03:43.902 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-05-13 05:55:52.111448] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:03:43.902 [2024-05-13 05:55:52.111489] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:03:43.902 [2024-05-13 05:55:52.111531] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.111573] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.902 [2024-05-13 05:55:52.111614] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:43.902 [2024-05-13 05:55:52.111656] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:43.902 [2024-05-13 05:55:52.111698] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=25a7 00:03:43.902 [2024-05-13 05:55:52.111732] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=af58 00:03:43.903 [2024-05-13 05:55:52.111767] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:43.903 [2024-05-13 05:55:52.111809] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:03:43.903 [2024-05-13 05:55:52.111850] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.111891] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 
05:55:52.111932] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.903 [2024-05-13 05:55:52.111973] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.903 [2024-05-13 05:55:52.112014] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=fe79ea3d 00:03:43.903 [2024-05-13 05:55:52.112049] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=fcb74c3d 00:03:43.903 [2024-05-13 05:55:52.112083] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:43.903 [2024-05-13 05:55:52.112124] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:03:43.903 [2024-05-13 05:55:52.112166] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.112207] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.112249] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.903 [2024-05-13 05:55:52.112293] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.903 [2024-05-13 05:55:52.112336] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4e41cad872ddf061 00:03:43.903 [2024-05-13 05:55:52.112370] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f6a564e94e73984d 00:03:43.903 passed 00:03:43.903 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-05-13 05:55:52.112408] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:03:43.903 [2024-05-13 05:55:52.112449] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:03:43.903 [2024-05-13 05:55:52.112490] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.112531] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.112572] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:43.903 [2024-05-13 05:55:52.112613] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:43.903 [2024-05-13 05:55:52.112654] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=fd4c, Actual=25a7 00:03:43.903 passed 00:03:43.903 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-05-13 05:55:52.112688] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=af58 00:03:43.903 [2024-05-13 05:55:52.112725] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:43.903 [2024-05-13 05:55:52.112766] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:03:43.903 [2024-05-13 05:55:52.112807] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.112849] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.112890] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.903 [2024-05-13 05:55:52.112931] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.903 [2024-05-13 05:55:52.112972] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=fe79ea3d 00:03:43.903 [2024-05-13 05:55:52.113006] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=fcb74c3d 00:03:43.903 [2024-05-13 05:55:52.113040] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:43.903 [2024-05-13 05:55:52.113082] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:03:43.903 [2024-05-13 05:55:52.113123] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.113164] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.113206] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.903 [2024-05-13 05:55:52.113247] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.903 [2024-05-13 05:55:52.113287] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4e41cad872ddf061 00:03:43.903 passed 00:03:43.903 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-05-13 05:55:52.113322] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f6a564e94e73984d 00:03:43.903 [2024-05-13 05:55:52.113359] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 
00:03:43.903 [2024-05-13 05:55:52.113400] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fc21, Actual=fe21 00:03:43.903 [2024-05-13 05:55:52.113443] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.113485] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.113527] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:43.903 [2024-05-13 05:55:52.113569] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:43.903 [2024-05-13 05:55:52.113613] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=25a7 00:03:43.903 [2024-05-13 05:55:52.113648] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=af58 00:03:43.903 passed 00:03:43.903 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-05-13 05:55:52.113685] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:43.903 [2024-05-13 05:55:52.113727] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3a574660, Actual=38574660 00:03:43.903 [2024-05-13 05:55:52.113769] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.113810] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.113852] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.903 [2024-05-13 05:55:52.113893] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.903 [2024-05-13 05:55:52.113935] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=fe79ea3d 00:03:43.903 [2024-05-13 05:55:52.113969] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=fcb74c3d 00:03:43.903 [2024-05-13 05:55:52.114004] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:43.903 [2024-05-13 05:55:52.114046] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=8a010a2d4837a266, Actual=88010a2d4837a266 00:03:43.903 [2024-05-13 05:55:52.114088] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.114129] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, 
Actual=288 00:03:43.903 [2024-05-13 05:55:52.114171] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.903 [2024-05-13 05:55:52.114212] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.903 [2024-05-13 05:55:52.114254] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4e41cad872ddf061 00:03:43.903 [2024-05-13 05:55:52.114301] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f6a564e94e73984d 00:03:43.903 passed 00:03:43.903 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:03:43.903 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:43.903 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:43.903 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:43.903 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:43.903 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:43.903 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:43.903 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:43.903 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:43.903 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-13 05:55:52.118273] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=ff4c, Actual=fd4c 00:03:43.903 [2024-05-13 05:55:52.118399] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fb45, Actual=f945 00:03:43.903 [2024-05-13 05:55:52.118515] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.118635] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.903 [2024-05-13 05:55:52.118757] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=200005c 00:03:43.903 [2024-05-13 05:55:52.118871] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=200005c 00:03:43.903 [2024-05-13 05:55:52.118986] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=25a7 00:03:43.904 [2024-05-13 05:55:52.119101] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=4e97, Actual=1fee 00:03:43.904 [2024-05-13 05:55:52.119216] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=18b753ed, Actual=1ab753ed 00:03:43.904 [2024-05-13 05:55:52.119383] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=7c944d39, Actual=7e944d39 00:03:43.904 [2024-05-13 05:55:52.119498] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 
00:03:43.904 [2024-05-13 05:55:52.119611] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.904 [2024-05-13 05:55:52.119730] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=20000000000005c 00:03:43.904 [2024-05-13 05:55:52.119844] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=20000000000005c 00:03:43.904 [2024-05-13 05:55:52.119958] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=fe79ea3d 00:03:43.904 [2024-05-13 05:55:52.120073] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=2b267559, Actual=efc67f04 00:03:43.904 [2024-05-13 05:55:52.120188] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:43.904 [2024-05-13 05:55:52.120302] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=5a5ab969b38e44ac, Actual=585ab969b38e44ac 00:03:43.904 [2024-05-13 05:55:52.120416] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.904 [2024-05-13 05:55:52.120531] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.904 [2024-05-13 05:55:52.120645] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=25c 00:03:43.904 [2024-05-13 05:55:52.120760] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=25c 00:03:43.904 [2024-05-13 05:55:52.120873] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=4e41cad872ddf061 00:03:43.904 [2024-05-13 05:55:52.120988] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=cd8425887035b4fd, Actual=b3204b4c76718ed6 00:03:43.904 passed 00:03:43.904 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-13 05:55:52.121026] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:03:43.904 [2024-05-13 05:55:52.121055] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=98c4, Actual=9ac4 00:03:43.904 [2024-05-13 05:55:52.121083] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.904 [2024-05-13 05:55:52.121112] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.904 [2024-05-13 05:55:52.121141] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:43.904 [2024-05-13 05:55:52.121169] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: 
LBA=88, Expected=58, Actual=2000058 00:03:43.904 [2024-05-13 05:55:52.121197] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=25a7 00:03:43.904 [2024-05-13 05:55:52.121226] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=7c6f 00:03:43.904 [2024-05-13 05:55:52.121254] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:43.904 [2024-05-13 05:55:52.121282] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd1478cc, Actual=bf1478cc 00:03:43.904 [2024-05-13 05:55:52.121310] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.904 [2024-05-13 05:55:52.121339] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.904 [2024-05-13 05:55:52.121367] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.904 [2024-05-13 05:55:52.121395] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.904 [2024-05-13 05:55:52.121423] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=fe79ea3d 00:03:43.904 [2024-05-13 05:55:52.121451] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=2e464af1 00:03:43.904 [2024-05-13 05:55:52.121479] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:43.904 [2024-05-13 05:55:52.121508] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=afb8b6fa9561c0f3, Actual=adb8b6fa9561c0f3 00:03:43.904 [2024-05-13 05:55:52.121536] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.904 [2024-05-13 05:55:52.121565] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.904 [2024-05-13 05:55:52.121594] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.904 [2024-05-13 05:55:52.121622] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.904 [2024-05-13 05:55:52.121650] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4e41cad872ddf061 00:03:43.904 [2024-05-13 05:55:52.121679] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=46c244df509e0a89 00:03:43.904 passed 00:03:43.904 Test: dix_sec_512_md_0_error ...passed 00:03:43.904 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-05-13 05:55:52.121687] 
/usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:43.904 passed 00:03:43.904 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:43.904 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:43.904 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:43.904 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:43.904 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:43.904 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:43.904 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:43.904 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:43.904 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-05-13 05:55:52.125270] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=ff4c, Actual=fd4c 00:03:43.904 [2024-05-13 05:55:52.125380] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fb45, Actual=f945 00:03:43.904 [2024-05-13 05:55:52.125482] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.904 [2024-05-13 05:55:52.125583] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.904 [2024-05-13 05:55:52.125685] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=200005c 00:03:43.904 [2024-05-13 05:55:52.125795] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=200005c 00:03:43.904 [2024-05-13 05:55:52.125922] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=25a7 00:03:43.904 [2024-05-13 05:55:52.126038] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=4e97, Actual=1fee 00:03:43.904 [2024-05-13 05:55:52.126151] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=18b753ed, Actual=1ab753ed 00:03:43.904 [2024-05-13 05:55:52.126272] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=7c944d39, Actual=7e944d39 00:03:43.904 [2024-05-13 05:55:52.126395] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.904 [2024-05-13 05:55:52.126507] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.904 [2024-05-13 05:55:52.126620] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=20000000000005c 00:03:43.904 [2024-05-13 05:55:52.126732] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=20000000000005c 00:03:43.904 [2024-05-13 05:55:52.126844] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=fe79ea3d 00:03:43.905 [2024-05-13 
05:55:52.126957] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=2b267559, Actual=efc67f04 00:03:43.905 [2024-05-13 05:55:52.127070] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:43.905 [2024-05-13 05:55:52.127205] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=5a5ab969b38e44ac, Actual=585ab969b38e44ac 00:03:43.905 [2024-05-13 05:55:52.127320] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.905 [2024-05-13 05:55:52.127434] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=288 00:03:43.905 [2024-05-13 05:55:52.127548] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=25c 00:03:43.905 [2024-05-13 05:55:52.127654] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=25c 00:03:43.905 [2024-05-13 05:55:52.127755] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=4e41cad872ddf061 00:03:43.905 passed 00:03:43.905 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-05-13 05:55:52.127856] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=cd8425887035b4fd, Actual=b3204b4c76718ed6 00:03:43.905 [2024-05-13 05:55:52.127889] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ff4c, Actual=fd4c 00:03:43.905 [2024-05-13 05:55:52.127915] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=98c4, Actual=9ac4 00:03:43.905 [2024-05-13 05:55:52.127941] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.905 [2024-05-13 05:55:52.127966] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.905 [2024-05-13 05:55:52.127992] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:43.905 [2024-05-13 05:55:52.128017] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2000058 00:03:43.905 [2024-05-13 05:55:52.128060] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=25a7 00:03:43.905 [2024-05-13 05:55:52.128089] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=7c6f 00:03:43.905 [2024-05-13 05:55:52.128118] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=18b753ed, Actual=1ab753ed 00:03:43.905 [2024-05-13 05:55:52.128145] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd1478cc, Actual=bf1478cc 
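The Guard, App Tag, and Ref Tag mismatches streaming past here are the point of these cases: each dif_*_inject_* test corrupts one field of the per-block protection information and asserts that verification reports exactly that field. As a minimal sketch of what is being compared, assuming the base 8-byte T10 DIF tuple (16-bit guard, 16-bit app tag, 32-bit ref tag) — this is illustrative, not SPDK's _dif_verify, and the wider guard/ref values in some cases above come from extended PI formats this sketch does not model:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Classic 8-byte T10 protection-information tuple stored with each block. */
struct t10_pi_tuple {
	uint16_t guard;   /* CRC computed over the data block */
	uint16_t app_tag; /* application-defined tag */
	uint32_t ref_tag; /* typically derived from the LBA */
};

/* Compare the expected tuple against the one read back, reporting
 * mismatches in the same shape as the _dif_verify errors above. */
static int
pi_tuple_verify(const struct t10_pi_tuple *exp, const struct t10_pi_tuple *act,
		uint64_t lba)
{
	int rc = 0;

	if (exp->guard != act->guard) {
		fprintf(stderr, "Failed to compare Guard: LBA=%" PRIu64
			", Expected=%x, Actual=%x\n", lba, exp->guard, act->guard);
		rc = -1;
	}
	if (exp->app_tag != act->app_tag) {
		fprintf(stderr, "Failed to compare App Tag: LBA=%" PRIu64
			", Expected=%x, Actual=%x\n", lba, exp->app_tag, act->app_tag);
		rc = -1;
	}
	if (exp->ref_tag != act->ref_tag) {
		fprintf(stderr, "Failed to compare Ref Tag: LBA=%" PRIu64
			", Expected=%x, Actual=%x\n", lba, exp->ref_tag, act->ref_tag);
		rc = -1;
	}
	return rc;
}

int
main(void)
{
	/* Inject an error the way the inject_1_2_4_8 cases do: corrupt the
	 * app tag and confirm the verify step catches it. */
	struct t10_pi_tuple exp = { 0xfd4c, 0x88, 0x58 };
	struct t10_pi_tuple act = { 0xfd4c, 0x288, 0x58 };

	return pi_tuple_verify(&exp, &act, 88) == -1 ? 0 : 1;
}

So a clean run of these suites is expected to be full of *ERROR* lines; only a missing error would fail the test.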
00:03:43.905 [2024-05-13 05:55:52.128173] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.905 [2024-05-13 05:55:52.128200] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.905 [2024-05-13 05:55:52.128227] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.905 [2024-05-13 05:55:52.128269] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000000058 00:03:43.905 [2024-05-13 05:55:52.128297] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=fe79ea3d 00:03:43.905 [2024-05-13 05:55:52.128324] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=2e464af1 00:03:43.905 [2024-05-13 05:55:52.128352] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a776a7728ecc20d3, Actual=a576a7728ecc20d3 00:03:43.905 [2024-05-13 05:55:52.128381] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=afb8b6fa9561c0f3, Actual=adb8b6fa9561c0f3 00:03:43.905 [2024-05-13 05:55:52.128409] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.905 [2024-05-13 05:55:52.128437] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=288 00:03:43.905 [2024-05-13 05:55:52.128466] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.905 [2024-05-13 05:55:52.128494] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=258 00:03:43.905 [2024-05-13 05:55:52.128522] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4e41cad872ddf061 00:03:43.905 passed 00:03:43.905 Test: set_md_interleave_iovs_test ...[2024-05-13 05:55:52.128550] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=46c244df509e0a89 00:03:43.905 passed 00:03:43.905 Test: set_md_interleave_iovs_split_test ...passed 00:03:43.905 Test: dif_generate_stream_pi_16_test ...passed 00:03:43.905 Test: dif_generate_stream_test ...passed 00:03:43.905 Test: set_md_interleave_iovs_alignment_test ...passed 00:03:43.905 Test: dif_generate_split_test ...[2024-05-13 05:55:52.129167] /usr/home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
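The final error above, from spdk_dif_set_md_interleave_iovs ("Buffer overflow will occur."), is likewise provoked deliberately, by describing more interleaved blocks than the supplied buffer can hold. A rough sketch of the layout and the bounds check involved, with a hypothetical helper name and signature rather than SPDK's actual API:

#include <stdint.h>
#include <stdio.h>
#include <sys/uio.h>

/* In an interleaved format every block carries its data followed by its
 * metadata; iovecs exposing only the data portions must skip the
 * per-block metadata region. Hypothetical helper, not SPDK's API. */
static int
md_interleave_iovs(struct iovec *iovs, uint32_t max_iovs, uint8_t *buf,
		   uint64_t buf_len, uint32_t block_size, uint32_t md_size,
		   uint32_t num_blocks)
{
	uint32_t i;

	if ((uint64_t)block_size * num_blocks > buf_len) {
		fprintf(stderr, "Buffer overflow will occur.\n");
		return -1;
	}
	if (num_blocks > max_iovs) {
		return -1;
	}
	for (i = 0; i < num_blocks; i++) {
		iovs[i].iov_base = buf + (uint64_t)i * block_size;
		iovs[i].iov_len = block_size - md_size;
	}
	return (int)num_blocks;
}

int
main(void)
{
	static uint8_t buf[4096];
	struct iovec iovs[16];

	/* 512-byte blocks, 8 bytes of interleaved metadata each: eight
	 * blocks fit exactly, so asking for nine must be rejected. */
	return md_interleave_iovs(iovs, 16, buf, sizeof(buf), 512, 8, 9) == -1 ? 0 : 1;
}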
00:03:43.905 passed 00:03:43.905 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:03:43.905 Test: dif_verify_split_test ...passed 00:03:43.905 Test: dif_verify_stream_multi_segments_test ...passed 00:03:43.905 Test: update_crc32c_pi_16_test ...passed 00:03:43.905 Test: update_crc32c_test ...passed 00:03:43.905 Test: dif_update_crc32c_split_test ...passed 00:03:43.905 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:03:43.905 Test: get_range_with_md_test ...passed 00:03:43.905 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:03:43.905 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:03:43.905 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:43.905 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:03:43.905 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:03:43.905 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:43.905 Test: dif_generate_and_verify_unmap_test ...passed 00:03:43.905 00:03:43.905 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.905 suites 1 1 n/a 0 0 00:03:43.905 tests 79 79 79 0 0 00:03:43.905 asserts 3584 3584 3584 0 n/a 00:03:43.905 00:03:43.905 Elapsed time = 0.047 seconds 00:03:43.905 05:55:52 -- unit/unittest.sh@141 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:03:43.905 00:03:43.905 00:03:43.905 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.905 http://cunit.sourceforge.net/ 00:03:43.905 00:03:43.905 00:03:43.905 Suite: iov 00:03:43.905 Test: test_single_iov ...passed 00:03:43.905 Test: test_simple_iov ...passed 00:03:43.905 Test: test_complex_iov ...passed 00:03:43.905 Test: test_iovs_to_buf ...passed 00:03:43.905 Test: test_buf_to_iovs ...passed 00:03:43.905 Test: test_memset ...passed 00:03:43.905 Test: test_iov_one ...passed 00:03:43.905 Test: test_iov_xfer ...passed 00:03:43.905 00:03:43.905 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.905 suites 1 1 n/a 0 0 00:03:43.905 tests 8 8 8 0 0 00:03:43.905 asserts 156 156 156 0 n/a 00:03:43.905 00:03:43.905 Elapsed time = 0.000 seconds 00:03:43.905 05:55:52 -- unit/unittest.sh@142 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:03:43.905 00:03:43.905 00:03:43.905 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.905 http://cunit.sourceforge.net/ 00:03:43.905 00:03:43.905 00:03:43.905 Suite: math 00:03:43.905 Test: test_serial_number_arithmetic ...passed 00:03:43.905 Suite: erase 00:03:43.905 Test: test_memset_s ...passed 00:03:43.905 00:03:43.905 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.905 suites 2 2 n/a 0 0 00:03:43.905 tests 2 2 2 0 0 00:03:43.905 asserts 18 18 18 0 n/a 00:03:43.905 00:03:43.905 Elapsed time = 0.000 seconds 00:03:43.905 05:55:52 -- unit/unittest.sh@143 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:03:43.905 00:03:43.905 00:03:43.905 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.905 http://cunit.sourceforge.net/ 00:03:43.905 00:03:43.905 00:03:43.905 Suite: pipe 00:03:43.905 Test: test_create_destroy ...passed 00:03:43.905 Test: test_write_get_buffer ...passed 00:03:43.905 Test: test_write_advance ...passed 00:03:43.905 Test: test_read_get_buffer ...passed 00:03:43.905 Test: test_read_advance ...passed 00:03:43.905 Test: test_data ...passed 00:03:43.905 00:03:43.905 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.905 
suites 1 1 n/a 0 0 00:03:43.905 tests 6 6 6 0 0 00:03:43.905 asserts 250 250 250 0 n/a 00:03:43.905 00:03:43.905 Elapsed time = 0.000 seconds 00:03:43.905 05:55:52 -- unit/unittest.sh@144 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:03:43.905 00:03:43.905 00:03:43.905 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.905 http://cunit.sourceforge.net/ 00:03:43.905 00:03:43.905 00:03:43.905 Suite: xor 00:03:43.905 Test: test_xor_gen ...passed 00:03:43.905 00:03:43.905 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.905 suites 1 1 n/a 0 0 00:03:43.905 tests 1 1 1 0 0 00:03:43.905 asserts 17 17 17 0 n/a 00:03:43.905 00:03:43.905 Elapsed time = 0.000 seconds 00:03:43.905 00:03:43.905 real 0m0.145s 00:03:43.905 user 0m0.119s 00:03:43.905 sys 0m0.028s 00:03:43.905 05:55:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.905 05:55:52 -- common/autotest_common.sh@10 -- # set +x 00:03:43.905 ************************************ 00:03:43.905 END TEST unittest_util 00:03:43.905 ************************************ 00:03:43.905 05:55:52 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /usr/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:43.905 05:55:52 -- unit/unittest.sh@285 -- # run_test unittest_dma /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:43.905 05:55:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:43.905 05:55:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:43.905 05:55:52 -- common/autotest_common.sh@10 -- # set +x 00:03:43.906 ************************************ 00:03:43.906 START TEST unittest_dma 00:03:43.906 ************************************ 00:03:43.906 05:55:52 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:43.906 00:03:43.906 00:03:43.906 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.906 http://cunit.sourceforge.net/ 00:03:43.906 00:03:43.906 00:03:43.906 Suite: dma_suite 00:03:43.906 Test: test_dma ...[2024-05-13 05:55:52.204896] /usr/home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:03:43.906 passed 00:03:43.906 00:03:43.906 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.906 suites 1 1 n/a 0 0 00:03:43.906 tests 1 1 1 0 0 00:03:43.906 asserts 50 50 50 0 n/a 00:03:43.906 00:03:43.906 Elapsed time = 0.000 seconds 00:03:43.906 00:03:43.906 real 0m0.006s 00:03:43.906 user 0m0.000s 00:03:43.906 sys 0m0.008s 00:03:43.906 05:55:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:43.906 05:55:52 -- common/autotest_common.sh@10 -- # set +x 00:03:43.906 ************************************ 00:03:43.906 END TEST unittest_dma 00:03:43.906 ************************************ 00:03:44.165 05:55:52 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:03:44.165 05:55:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:44.165 05:55:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:44.165 05:55:52 -- common/autotest_common.sh@10 -- # set +x 00:03:44.165 ************************************ 00:03:44.165 START TEST unittest_init 00:03:44.165 ************************************ 00:03:44.165 05:55:52 -- common/autotest_common.sh@1104 -- # unittest_init 00:03:44.165 05:55:52 -- unit/unittest.sh@148 -- # /usr/home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:03:44.165 00:03:44.165 00:03:44.165 CUnit - A unit testing framework 
for C - Version 2.1-3 00:03:44.165 http://cunit.sourceforge.net/ 00:03:44.165 00:03:44.165 00:03:44.165 Suite: subsystem_suite 00:03:44.165 Test: subsystem_sort_test_depends_on_single ...passed 00:03:44.165 Test: subsystem_sort_test_depends_on_multiple ...passed 00:03:44.165 Test: subsystem_sort_test_missing_dependency ...[2024-05-13 05:55:52.254027] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:03:44.165 passed 00:03:44.165 00:03:44.165 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.165 suites 1 1 n/a 0 0 00:03:44.165 tests 3 3 3 0 0 00:03:44.165 asserts 20 20 20 0 n/a 00:03:44.165 00:03:44.165 Elapsed time = 0.000 seconds[2024-05-13 05:55:52.254450] /usr/home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:03:44.165 00:03:44.165 00:03:44.165 real 0m0.009s 00:03:44.165 user 0m0.008s 00:03:44.165 sys 0m0.001s 00:03:44.165 05:55:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.165 05:55:52 -- common/autotest_common.sh@10 -- # set +x 00:03:44.165 ************************************ 00:03:44.165 END TEST unittest_init 00:03:44.165 ************************************ 00:03:44.166 05:55:52 -- unit/unittest.sh@289 -- # '[' no = yes ']' 00:03:44.166 05:55:52 -- unit/unittest.sh@302 -- # set +x 00:03:44.166 00:03:44.166 00:03:44.166 ===================== 00:03:44.166 All unit tests passed 00:03:44.166 ===================== 00:03:44.166 WARN: lcov not installed or SPDK built without coverage! 00:03:44.166 WARN: neither valgrind nor ASAN is enabled! 00:03:44.166 00:03:44.166 00:03:44.166 00:03:44.166 real 0m13.653s 00:03:44.166 user 0m10.781s 00:03:44.166 sys 0m1.710s 00:03:44.166 05:55:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.166 05:55:52 -- common/autotest_common.sh@10 -- # set +x 00:03:44.166 ************************************ 00:03:44.166 END TEST unittest 00:03:44.166 ************************************ 00:03:44.166 05:55:52 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:03:44.166 05:55:52 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:44.166 05:55:52 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:44.166 05:55:52 -- spdk/autotest.sh@173 -- # timing_enter lib 00:03:44.166 05:55:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:44.166 05:55:52 -- common/autotest_common.sh@10 -- # set +x 00:03:44.166 05:55:52 -- spdk/autotest.sh@175 -- # run_test env /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:44.166 05:55:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:44.166 05:55:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:44.166 05:55:52 -- common/autotest_common.sh@10 -- # set +x 00:03:44.166 ************************************ 00:03:44.166 START TEST env 00:03:44.166 ************************************ 00:03:44.166 05:55:52 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:44.426 * Looking for test storage... 
00:03:44.426 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/env 00:03:44.426 05:55:52 -- env/env.sh@10 -- # run_test env_memory /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:44.426 05:55:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:44.426 05:55:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:44.426 05:55:52 -- common/autotest_common.sh@10 -- # set +x 00:03:44.426 ************************************ 00:03:44.426 START TEST env_memory 00:03:44.426 ************************************ 00:03:44.426 05:55:52 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:44.426 00:03:44.426 00:03:44.426 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.426 http://cunit.sourceforge.net/ 00:03:44.426 00:03:44.426 00:03:44.426 Suite: memory 00:03:44.426 Test: alloc and free memory map ...[2024-05-13 05:55:52.557132] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:44.426 passed 00:03:44.426 Test: mem map translation ...[2024-05-13 05:55:52.565951] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:44.426 [2024-05-13 05:55:52.565996] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:44.426 [2024-05-13 05:55:52.566013] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:44.426 [2024-05-13 05:55:52.566023] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:44.426 passed 00:03:44.426 Test: mem map registration ...[2024-05-13 05:55:52.574292] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:44.426 [2024-05-13 05:55:52.574322] /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:44.426 passed 00:03:44.426 Test: mem map adjacent registrations ...passed 00:03:44.426 00:03:44.426 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.426 suites 1 1 n/a 0 0 00:03:44.426 tests 4 4 4 0 0 00:03:44.426 asserts 152 152 152 0 n/a 00:03:44.426 00:03:44.426 Elapsed time = 0.031 seconds 00:03:44.426 00:03:44.426 real 0m0.047s 00:03:44.426 user 0m0.040s 00:03:44.426 sys 0m0.008s 00:03:44.426 05:55:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.426 05:55:52 -- common/autotest_common.sh@10 -- # set +x 00:03:44.426 ************************************ 00:03:44.426 END TEST env_memory 00:03:44.426 ************************************ 00:03:44.426 05:55:52 -- env/env.sh@11 -- # run_test env_vtophys /usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:44.426 05:55:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:44.426 05:55:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:44.426 05:55:52 -- common/autotest_common.sh@10 -- # set +x 00:03:44.426 ************************************ 00:03:44.426 START TEST env_vtophys 00:03:44.426 ************************************ 00:03:44.426 05:55:52 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:44.426 EAL: lib.eal log level changed from notice to debug 00:03:44.426 EAL: Sysctl reports 10 cpus 00:03:44.426 EAL: Detected lcore 0 as core 0 on socket 0 00:03:44.426 EAL: Detected lcore 1 as core 0 on socket 0 00:03:44.426 EAL: Detected lcore 2 as core 0 on socket 0 00:03:44.426 EAL: Detected lcore 3 as core 0 on socket 0 00:03:44.426 EAL: Detected lcore 4 as core 0 on socket 0 00:03:44.426 EAL: Detected lcore 5 as core 0 on socket 0 00:03:44.426 EAL: Detected lcore 6 as core 0 on socket 0 00:03:44.426 EAL: Detected lcore 7 as core 0 on socket 0 00:03:44.426 EAL: Detected lcore 8 as core 0 on socket 0 00:03:44.426 EAL: Detected lcore 9 as core 0 on socket 0 00:03:44.426 EAL: Maximum logical cores by configuration: 128 00:03:44.426 EAL: Detected CPU lcores: 10 00:03:44.426 EAL: Detected NUMA nodes: 1 00:03:44.426 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:44.426 EAL: Checking presence of .so 'librte_eal.so.24' 00:03:44.426 EAL: Checking presence of .so 'librte_eal.so' 00:03:44.426 EAL: Detected static linkage of DPDK 00:03:44.426 EAL: No shared files mode enabled, IPC will be disabled 00:03:44.426 EAL: PCI scan found 10 devices 00:03:44.426 EAL: Specific IOVA mode is not requested, autodetecting 00:03:44.426 EAL: Selecting IOVA mode according to bus requests 00:03:44.426 EAL: Bus pci wants IOVA as 'PA' 00:03:44.426 EAL: Selected IOVA mode 'PA' 00:03:44.426 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:44.426 EAL: Ask a virtual area of 0x2e000 bytes 00:03:44.426 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x10009a8000) not respected! 00:03:44.426 EAL: This may cause issues with mapping memory into secondary processes 00:03:44.426 EAL: Virtual area found at 0x10009a8000 (size = 0x2e000) 00:03:44.426 EAL: Setting up physically contiguous memory... 00:03:44.426 EAL: Ask a virtual area of 0x1000 bytes 00:03:44.426 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x1000a4a000) not respected! 00:03:44.426 EAL: This may cause issues with mapping memory into secondary processes 00:03:44.426 EAL: Virtual area found at 0x1000a4a000 (size = 0x1000) 00:03:44.426 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:03:44.426 EAL: Ask a virtual area of 0xf0000000 bytes 00:03:44.426 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:03:44.426 EAL: This may cause issues with mapping memory into secondary processes 00:03:44.426 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:03:44.426 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:03:44.426 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x1b0000000, len 268435456 00:03:44.686 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x1c0000000, len 268435456 00:03:44.686 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x1d0000000, len 268435456 00:03:44.686 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x1e0000000, len 268435456 00:03:44.686 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x1f0000000, len 268435456 00:03:44.686 EAL: Mapped memory segment 5 @ 0x10c0000000: physaddr:0x210000000, len 268435456 00:03:44.945 EAL: Mapped memory segment 6 @ 0x10b0000000: physaddr:0x220000000, len 268435456 00:03:44.945 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x230000000, len 268435456 00:03:44.945 EAL: No shared files mode enabled, IPC is disabled 00:03:44.945 EAL: Added 2048M to heap on socket 0 00:03:44.945 EAL: TSC is not safe to use in SMP mode 00:03:44.945 EAL: TSC is not invariant 00:03:44.945 EAL: TSC frequency is ~2294600 KHz 00:03:44.945 EAL: Main lcore 0 is ready (tid=82d412000;cpuset=[0]) 00:03:44.945 EAL: PCI scan found 10 devices 00:03:44.945 EAL: Registering mem event callbacks not supported 00:03:44.945 00:03:44.945 00:03:44.945 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.945 http://cunit.sourceforge.net/ 00:03:44.945 00:03:44.945 00:03:44.945 Suite: components_suite 00:03:44.945 Test: vtophys_malloc_test ...passed 00:03:45.205 Test: vtophys_spdk_malloc_test ...passed 00:03:45.205 00:03:45.205 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.205 suites 1 1 n/a 0 0 00:03:45.205 tests 2 2 2 0 0 00:03:45.205 asserts 539 539 539 0 n/a 00:03:45.205 00:03:45.205 Elapsed time = 0.297 seconds 00:03:45.205 00:03:45.205 real 0m0.779s 00:03:45.205 user 0m0.315s 00:03:45.205 sys 0m0.462s 00:03:45.205 05:55:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.205 05:55:53 -- common/autotest_common.sh@10 -- # set +x 00:03:45.205 ************************************ 00:03:45.205 END TEST env_vtophys 00:03:45.205 ************************************ 00:03:45.205 05:55:53 -- env/env.sh@12 -- # run_test env_pci /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:45.205 05:55:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.205 05:55:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.205 05:55:53 -- common/autotest_common.sh@10 -- # set +x 00:03:45.205 ************************************ 00:03:45.205 START TEST env_pci 00:03:45.205 ************************************ 00:03:45.205 05:55:53 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:45.205 00:03:45.205 00:03:45.205 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.205 http://cunit.sourceforge.net/ 00:03:45.205 00:03:45.205 00:03:45.205 Suite: pci 00:03:45.205 Test: pci_hook ...passed 00:03:45.205 00:03:45.205 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.205 suites 1 1 n/a 0 0 00:03:45.205 tests 1 1 1 0 0 00:03:45.205 asserts 25 25 25 0 n/a 00:03:45.205 00:03:45.205 Elapsed time = 0.008 seconds 00:03:45.205 EAL: Cannot find device (10000:00:01.0) 00:03:45.205 EAL: Failed to attach device on primary process 00:03:45.205 00:03:45.205 real 0m0.012s 00:03:45.205 user 0m0.001s 00:03:45.205 sys 
0m0.013s 00:03:45.205 05:55:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.205 05:55:53 -- common/autotest_common.sh@10 -- # set +x 00:03:45.205 ************************************ 00:03:45.205 END TEST env_pci 00:03:45.205 ************************************ 00:03:45.205 05:55:53 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:45.205 05:55:53 -- env/env.sh@15 -- # uname 00:03:45.465 05:55:53 -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:03:45.465 05:55:53 -- env/env.sh@24 -- # run_test env_dpdk_post_init /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:45.465 05:55:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:03:45.465 05:55:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.465 05:55:53 -- common/autotest_common.sh@10 -- # set +x 00:03:45.465 ************************************ 00:03:45.465 START TEST env_dpdk_post_init 00:03:45.465 ************************************ 00:03:45.465 05:55:53 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:45.465 EAL: Sysctl reports 10 cpus 00:03:45.465 EAL: Detected CPU lcores: 10 00:03:45.465 EAL: Detected NUMA nodes: 1 00:03:45.465 EAL: Detected static linkage of DPDK 00:03:45.465 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:45.465 EAL: Selected IOVA mode 'PA' 00:03:45.465 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:45.465 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x1b0000000, len 268435456 00:03:45.465 EAL: Mapped memory segment 1 @ 0x1070000000: physaddr:0x1c0000000, len 268435456 00:03:45.465 EAL: Mapped memory segment 2 @ 0x1080000000: physaddr:0x1d0000000, len 268435456 00:03:45.465 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x1e0000000, len 268435456 00:03:45.724 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x1f0000000, len 268435456 00:03:45.724 EAL: Mapped memory segment 5 @ 0x10c0000000: physaddr:0x210000000, len 268435456 00:03:45.724 EAL: Mapped memory segment 6 @ 0x10b0000000: physaddr:0x220000000, len 268435456 00:03:45.724 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x230000000, len 268435456 00:03:45.724 EAL: TSC is not safe to use in SMP mode 00:03:45.724 EAL: TSC is not invariant 00:03:45.724 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:45.724 [2024-05-13 05:55:53.972728] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:45.724 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:03:45.724 Starting DPDK initialization... 00:03:45.724 Starting SPDK post initialization... 00:03:45.724 SPDK NVMe probe 00:03:45.724 Attaching to 0000:00:06.0 00:03:45.724 Attached to 0000:00:06.0 00:03:45.724 Cleaning up... 
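What env_dpdk_post_init drives is roughly the standard SPDK startup sequence visible in this output: initialize the environment (producing the EAL lines above), then probe and attach local NVMe controllers such as 0000:00:06.0. A condensed sketch using the public spdk_env_init/spdk_nvme_probe entry points; the callback signatures follow the SPDK headers as best recalled for this vintage, so treat the details as approximate:

#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	(void)cb_ctx; (void)opts;
	printf("Attaching to %s\n", trid->traddr);
	return true; /* accept every controller found */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	(void)cb_ctx; (void)opts;
	printf("Attached to %s\n", trid->traddr);
	spdk_nvme_detach(ctrlr);
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts); /* env init triggers the EAL setup logged above */
	opts.name = "env_dpdk_post_init_sketch"; /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* Enumerate local PCIe NVMe devices, e.g. 0000:00:06.0 above. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	printf("Cleaning up...\n");
	return 0;
}

On FreeBSD the contigmem and nic_uio drivers stand in for Linux hugepages and VFIO, which is why the EAL lines above differ from a Linux run of the same test.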
00:03:45.724 00:03:45.724 real 0m0.493s 00:03:45.724 user 0m0.012s 00:03:45.724 sys 0m0.476s 00:03:45.724 05:55:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.724 05:55:54 -- common/autotest_common.sh@10 -- # set +x 00:03:45.724 ************************************ 00:03:45.724 END TEST env_dpdk_post_init 00:03:45.724 ************************************ 00:03:45.984 05:55:54 -- env/env.sh@26 -- # uname 00:03:45.984 05:55:54 -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:03:45.984 00:03:45.984 real 0m1.734s 00:03:45.984 user 0m0.577s 00:03:45.984 sys 0m1.197s 00:03:45.984 05:55:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.984 05:55:54 -- common/autotest_common.sh@10 -- # set +x 00:03:45.984 ************************************ 00:03:45.984 END TEST env 00:03:45.984 ************************************ 00:03:45.984 05:55:54 -- spdk/autotest.sh@176 -- # run_test rpc /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:45.984 05:55:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.984 05:55:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.984 05:55:54 -- common/autotest_common.sh@10 -- # set +x 00:03:45.984 ************************************ 00:03:45.984 START TEST rpc 00:03:45.984 ************************************ 00:03:45.984 05:55:54 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:46.244 * Looking for test storage... 00:03:46.244 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc 00:03:46.244 05:55:54 -- rpc/rpc.sh@64 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:46.244 05:55:54 -- rpc/rpc.sh@65 -- # spdk_pid=45204 00:03:46.244 05:55:54 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:46.244 05:55:54 -- rpc/rpc.sh@67 -- # waitforlisten 45204 00:03:46.244 05:55:54 -- common/autotest_common.sh@819 -- # '[' -z 45204 ']' 00:03:46.244 05:55:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:46.244 05:55:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:46.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:46.244 05:55:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:46.244 05:55:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:46.244 05:55:54 -- common/autotest_common.sh@10 -- # set +x 00:03:46.244 [2024-05-13 05:55:54.318500] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:03:46.244 [2024-05-13 05:55:54.318734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:46.503 EAL: TSC is not safe to use in SMP mode 00:03:46.503 EAL: TSC is not invariant 00:03:46.503 [2024-05-13 05:55:54.752571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.763 [2024-05-13 05:55:54.841776] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:46.763 [2024-05-13 05:55:54.841857] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:46.763 [2024-05-13 05:55:54.841864] app.c: 492:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45204' to capture a snapshot of events at runtime. 
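Once waitforlisten sees /var/tmp/spdk.sock, every rpc_cmd below is a JSON-RPC 2.0 exchange over that UNIX socket. A bare-bones client sketch using only POSIX calls; the request body mirrors how rpc.py would plausibly encode the bdev_malloc_create call issued below (8 MiB of 512-byte blocks, hence num_blocks=16384) — the exact parameter encoding is an assumption, not a capture from this run:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int
main(void)
{
	const char *req =
		"{\"jsonrpc\":\"2.0\",\"id\":1,"
		"\"method\":\"bdev_malloc_create\","
		"\"params\":{\"num_blocks\":16384,\"block_size\":512}}";
	struct sockaddr_un addr = { 0 };
	char resp[4096];
	ssize_t n;
	int fd;

	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0) {
		return 1;
	}
	addr.sun_family = AF_UNIX;
	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return 1;
	}
	(void)write(fd, req, strlen(req));
	n = read(fd, resp, sizeof(resp) - 1); /* response carries the new bdev name */
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp);
	}
	close(fd);
	return 0;
}

rpc.py does the same thing with more plumbing; the raw socket exchange is all there is to it, which is why the tests can pipe rpc_cmd output straight into jq as seen below.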
00:03:46.763 [2024-05-13 05:55:54.841881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.023 05:55:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:47.023 05:55:55 -- common/autotest_common.sh@852 -- # return 0 00:03:47.023 05:55:55 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:03:47.023 05:55:55 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/test/rpc 00:03:47.023 05:55:55 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:47.023 05:55:55 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:47.023 05:55:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:47.023 05:55:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:47.023 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.023 ************************************ 00:03:47.023 START TEST rpc_integrity 00:03:47.023 ************************************ 00:03:47.023 05:55:55 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:03:47.023 05:55:55 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.023 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.023 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.023 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.023 05:55:55 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.023 05:55:55 -- rpc/rpc.sh@13 -- # jq length 00:03:47.023 05:55:55 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.023 05:55:55 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.023 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.023 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.023 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.023 05:55:55 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:47.023 05:55:55 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:47.023 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.023 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.023 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.023 05:55:55 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.023 { 00:03:47.023 "name": "Malloc0", 00:03:47.023 "aliases": [ 00:03:47.023 "76798996-10ed-11ef-ba60-3508ead7bdda" 00:03:47.023 ], 00:03:47.023 "product_name": "Malloc disk", 00:03:47.023 "block_size": 512, 00:03:47.023 "num_blocks": 16384, 00:03:47.023 "uuid": "76798996-10ed-11ef-ba60-3508ead7bdda", 00:03:47.023 "assigned_rate_limits": { 00:03:47.023 "rw_ios_per_sec": 0, 00:03:47.023 "rw_mbytes_per_sec": 0, 00:03:47.023 "r_mbytes_per_sec": 0, 00:03:47.023 "w_mbytes_per_sec": 0 00:03:47.023 }, 00:03:47.023 "claimed": false, 00:03:47.023 "zoned": false, 00:03:47.023 "supported_io_types": { 00:03:47.023 "read": true, 00:03:47.023 "write": true, 00:03:47.023 "unmap": true, 00:03:47.023 "write_zeroes": true, 00:03:47.023 "flush": true, 00:03:47.023 "reset": true, 00:03:47.023 "compare": false, 00:03:47.023 "compare_and_write": false, 00:03:47.023 "abort": true, 00:03:47.023 "nvme_admin": false, 00:03:47.023 "nvme_io": false 00:03:47.023 }, 00:03:47.023 "memory_domains": [ 00:03:47.023 { 00:03:47.023 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:03:47.023 "dma_device_type": 2 00:03:47.023 } 00:03:47.023 ], 00:03:47.023 "driver_specific": {} 00:03:47.023 } 00:03:47.023 ]' 00:03:47.023 05:55:55 -- rpc/rpc.sh@17 -- # jq length 00:03:47.023 05:55:55 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:47.023 05:55:55 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:47.023 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.023 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.023 [2024-05-13 05:55:55.318462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:47.023 [2024-05-13 05:55:55.318501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:47.023 [2024-05-13 05:55:55.318999] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bb0f780 00:03:47.023 [2024-05-13 05:55:55.319021] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:47.023 [2024-05-13 05:55:55.319579] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:47.023 [2024-05-13 05:55:55.319607] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:47.023 Passthru0 00:03:47.023 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.023 05:55:55 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:47.023 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.023 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.282 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.282 05:55:55 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:47.282 { 00:03:47.282 "name": "Malloc0", 00:03:47.282 "aliases": [ 00:03:47.282 "76798996-10ed-11ef-ba60-3508ead7bdda" 00:03:47.282 ], 00:03:47.282 "product_name": "Malloc disk", 00:03:47.282 "block_size": 512, 00:03:47.282 "num_blocks": 16384, 00:03:47.282 "uuid": "76798996-10ed-11ef-ba60-3508ead7bdda", 00:03:47.282 "assigned_rate_limits": { 00:03:47.282 "rw_ios_per_sec": 0, 00:03:47.282 "rw_mbytes_per_sec": 0, 00:03:47.282 "r_mbytes_per_sec": 0, 00:03:47.282 "w_mbytes_per_sec": 0 00:03:47.282 }, 00:03:47.282 "claimed": true, 00:03:47.282 "claim_type": "exclusive_write", 00:03:47.282 "zoned": false, 00:03:47.282 "supported_io_types": { 00:03:47.282 "read": true, 00:03:47.282 "write": true, 00:03:47.282 "unmap": true, 00:03:47.282 "write_zeroes": true, 00:03:47.282 "flush": true, 00:03:47.282 "reset": true, 00:03:47.282 "compare": false, 00:03:47.282 "compare_and_write": false, 00:03:47.282 "abort": true, 00:03:47.282 "nvme_admin": false, 00:03:47.282 "nvme_io": false 00:03:47.282 }, 00:03:47.282 "memory_domains": [ 00:03:47.282 { 00:03:47.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.282 "dma_device_type": 2 00:03:47.282 } 00:03:47.282 ], 00:03:47.282 "driver_specific": {} 00:03:47.282 }, 00:03:47.282 { 00:03:47.282 "name": "Passthru0", 00:03:47.282 "aliases": [ 00:03:47.282 "321c9a5f-8a30-d755-a2ab-fbe741e2c61d" 00:03:47.282 ], 00:03:47.282 "product_name": "passthru", 00:03:47.282 "block_size": 512, 00:03:47.282 "num_blocks": 16384, 00:03:47.282 "uuid": "321c9a5f-8a30-d755-a2ab-fbe741e2c61d", 00:03:47.282 "assigned_rate_limits": { 00:03:47.282 "rw_ios_per_sec": 0, 00:03:47.282 "rw_mbytes_per_sec": 0, 00:03:47.282 "r_mbytes_per_sec": 0, 00:03:47.282 "w_mbytes_per_sec": 0 00:03:47.282 }, 00:03:47.282 "claimed": false, 00:03:47.282 "zoned": false, 00:03:47.282 "supported_io_types": { 00:03:47.282 "read": true, 00:03:47.282 "write": true, 
00:03:47.282 "unmap": true, 00:03:47.282 "write_zeroes": true, 00:03:47.282 "flush": true, 00:03:47.282 "reset": true, 00:03:47.282 "compare": false, 00:03:47.282 "compare_and_write": false, 00:03:47.282 "abort": true, 00:03:47.282 "nvme_admin": false, 00:03:47.282 "nvme_io": false 00:03:47.282 }, 00:03:47.282 "memory_domains": [ 00:03:47.282 { 00:03:47.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.282 "dma_device_type": 2 00:03:47.282 } 00:03:47.282 ], 00:03:47.282 "driver_specific": { 00:03:47.282 "passthru": { 00:03:47.282 "name": "Passthru0", 00:03:47.282 "base_bdev_name": "Malloc0" 00:03:47.282 } 00:03:47.282 } 00:03:47.282 } 00:03:47.282 ]' 00:03:47.282 05:55:55 -- rpc/rpc.sh@21 -- # jq length 00:03:47.282 05:55:55 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.282 05:55:55 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.282 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.282 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.282 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.282 05:55:55 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:47.282 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.282 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.282 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.282 05:55:55 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:47.282 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.282 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.282 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.282 05:55:55 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.282 05:55:55 -- rpc/rpc.sh@26 -- # jq length 00:03:47.282 05:55:55 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.282 00:03:47.282 real 0m0.168s 00:03:47.282 user 0m0.040s 00:03:47.282 sys 0m0.059s 00:03:47.282 05:55:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.282 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.282 ************************************ 00:03:47.282 END TEST rpc_integrity 00:03:47.282 ************************************ 00:03:47.282 05:55:55 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:47.282 05:55:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:47.282 05:55:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:47.282 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.282 ************************************ 00:03:47.282 START TEST rpc_plugins 00:03:47.282 ************************************ 00:03:47.282 05:55:55 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:03:47.282 05:55:55 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:47.282 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.282 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.282 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.282 05:55:55 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:47.282 05:55:55 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:47.282 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.282 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.282 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.282 05:55:55 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:47.282 { 00:03:47.282 "name": "Malloc1", 00:03:47.282 "aliases": [ 00:03:47.282 "7696d4e5-10ed-11ef-ba60-3508ead7bdda" 00:03:47.282 ], 00:03:47.282 "product_name": 
"Malloc disk", 00:03:47.282 "block_size": 4096, 00:03:47.282 "num_blocks": 256, 00:03:47.282 "uuid": "7696d4e5-10ed-11ef-ba60-3508ead7bdda", 00:03:47.282 "assigned_rate_limits": { 00:03:47.282 "rw_ios_per_sec": 0, 00:03:47.282 "rw_mbytes_per_sec": 0, 00:03:47.282 "r_mbytes_per_sec": 0, 00:03:47.282 "w_mbytes_per_sec": 0 00:03:47.282 }, 00:03:47.282 "claimed": false, 00:03:47.282 "zoned": false, 00:03:47.282 "supported_io_types": { 00:03:47.282 "read": true, 00:03:47.282 "write": true, 00:03:47.282 "unmap": true, 00:03:47.282 "write_zeroes": true, 00:03:47.282 "flush": true, 00:03:47.282 "reset": true, 00:03:47.282 "compare": false, 00:03:47.282 "compare_and_write": false, 00:03:47.282 "abort": true, 00:03:47.282 "nvme_admin": false, 00:03:47.282 "nvme_io": false 00:03:47.282 }, 00:03:47.282 "memory_domains": [ 00:03:47.282 { 00:03:47.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.282 "dma_device_type": 2 00:03:47.282 } 00:03:47.282 ], 00:03:47.282 "driver_specific": {} 00:03:47.282 } 00:03:47.282 ]' 00:03:47.282 05:55:55 -- rpc/rpc.sh@32 -- # jq length 00:03:47.282 05:55:55 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:47.282 05:55:55 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:47.282 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.282 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.282 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.282 05:55:55 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:47.282 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.282 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.282 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.282 05:55:55 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:47.282 05:55:55 -- rpc/rpc.sh@36 -- # jq length 00:03:47.282 05:55:55 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:47.282 00:03:47.282 real 0m0.087s 00:03:47.282 user 0m0.023s 00:03:47.282 sys 0m0.026s 00:03:47.282 05:55:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.282 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.282 ************************************ 00:03:47.282 END TEST rpc_plugins 00:03:47.282 ************************************ 00:03:47.282 05:55:55 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:47.282 05:55:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:47.282 05:55:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:47.282 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.282 ************************************ 00:03:47.282 START TEST rpc_trace_cmd_test 00:03:47.282 ************************************ 00:03:47.541 05:55:55 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:03:47.541 05:55:55 -- rpc/rpc.sh@40 -- # local info 00:03:47.541 05:55:55 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:47.541 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.541 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.541 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.541 05:55:55 -- rpc/rpc.sh@42 -- # info='{ 00:03:47.541 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45204", 00:03:47.541 "tpoint_group_mask": "0x8", 00:03:47.541 "iscsi_conn": { 00:03:47.541 "mask": "0x2", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "scsi": { 00:03:47.541 "mask": "0x4", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "bdev": { 00:03:47.541 "mask": "0x8", 
00:03:47.541 "tpoint_mask": "0xffffffffffffffff" 00:03:47.541 }, 00:03:47.541 "nvmf_rdma": { 00:03:47.541 "mask": "0x10", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "nvmf_tcp": { 00:03:47.541 "mask": "0x20", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "blobfs": { 00:03:47.541 "mask": "0x80", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "dsa": { 00:03:47.541 "mask": "0x200", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "thread": { 00:03:47.541 "mask": "0x400", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "nvme_pcie": { 00:03:47.541 "mask": "0x800", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "iaa": { 00:03:47.541 "mask": "0x1000", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "nvme_tcp": { 00:03:47.541 "mask": "0x2000", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 }, 00:03:47.541 "bdev_nvme": { 00:03:47.541 "mask": "0x4000", 00:03:47.541 "tpoint_mask": "0x0" 00:03:47.541 } 00:03:47.541 }' 00:03:47.541 05:55:55 -- rpc/rpc.sh@43 -- # jq length 00:03:47.541 05:55:55 -- rpc/rpc.sh@43 -- # '[' 14 -gt 2 ']' 00:03:47.541 05:55:55 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:47.541 05:55:55 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:47.541 05:55:55 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:47.541 05:55:55 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:47.541 05:55:55 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:47.542 05:55:55 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:47.542 05:55:55 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:47.542 05:55:55 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:47.542 00:03:47.542 real 0m0.079s 00:03:47.542 user 0m0.041s 00:03:47.542 sys 0m0.034s 00:03:47.542 05:55:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.542 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.542 ************************************ 00:03:47.542 END TEST rpc_trace_cmd_test 00:03:47.542 ************************************ 00:03:47.542 05:55:55 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:47.542 05:55:55 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:47.542 05:55:55 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:47.542 05:55:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:47.542 05:55:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:47.542 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.542 ************************************ 00:03:47.542 START TEST rpc_daemon_integrity 00:03:47.542 ************************************ 00:03:47.542 05:55:55 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:03:47.542 05:55:55 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.542 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.542 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.542 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.542 05:55:55 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.542 05:55:55 -- rpc/rpc.sh@13 -- # jq length 00:03:47.542 05:55:55 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.542 05:55:55 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.542 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.542 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.542 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.542 05:55:55 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:47.542 05:55:55 -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:03:47.542 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.542 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.542 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.542 05:55:55 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.542 { 00:03:47.542 "name": "Malloc2", 00:03:47.542 "aliases": [ 00:03:47.542 "76c2c73c-10ed-11ef-ba60-3508ead7bdda" 00:03:47.542 ], 00:03:47.542 "product_name": "Malloc disk", 00:03:47.542 "block_size": 512, 00:03:47.542 "num_blocks": 16384, 00:03:47.542 "uuid": "76c2c73c-10ed-11ef-ba60-3508ead7bdda", 00:03:47.542 "assigned_rate_limits": { 00:03:47.542 "rw_ios_per_sec": 0, 00:03:47.542 "rw_mbytes_per_sec": 0, 00:03:47.542 "r_mbytes_per_sec": 0, 00:03:47.542 "w_mbytes_per_sec": 0 00:03:47.542 }, 00:03:47.542 "claimed": false, 00:03:47.542 "zoned": false, 00:03:47.542 "supported_io_types": { 00:03:47.542 "read": true, 00:03:47.542 "write": true, 00:03:47.542 "unmap": true, 00:03:47.542 "write_zeroes": true, 00:03:47.542 "flush": true, 00:03:47.542 "reset": true, 00:03:47.542 "compare": false, 00:03:47.542 "compare_and_write": false, 00:03:47.542 "abort": true, 00:03:47.542 "nvme_admin": false, 00:03:47.542 "nvme_io": false 00:03:47.542 }, 00:03:47.542 "memory_domains": [ 00:03:47.542 { 00:03:47.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.542 "dma_device_type": 2 00:03:47.542 } 00:03:47.542 ], 00:03:47.542 "driver_specific": {} 00:03:47.542 } 00:03:47.542 ]' 00:03:47.542 05:55:55 -- rpc/rpc.sh@17 -- # jq length 00:03:47.542 05:55:55 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:47.542 05:55:55 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:47.542 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.542 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.542 [2024-05-13 05:55:55.798478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:47.542 [2024-05-13 05:55:55.798534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:47.542 [2024-05-13 05:55:55.798557] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bb0f780 00:03:47.542 [2024-05-13 05:55:55.798564] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:47.542 [2024-05-13 05:55:55.798898] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:47.542 [2024-05-13 05:55:55.798932] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:47.542 Passthru0 00:03:47.542 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.542 05:55:55 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:47.542 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.542 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.542 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.542 05:55:55 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:47.542 { 00:03:47.542 "name": "Malloc2", 00:03:47.542 "aliases": [ 00:03:47.542 "76c2c73c-10ed-11ef-ba60-3508ead7bdda" 00:03:47.542 ], 00:03:47.542 "product_name": "Malloc disk", 00:03:47.542 "block_size": 512, 00:03:47.542 "num_blocks": 16384, 00:03:47.542 "uuid": "76c2c73c-10ed-11ef-ba60-3508ead7bdda", 00:03:47.542 "assigned_rate_limits": { 00:03:47.542 "rw_ios_per_sec": 0, 00:03:47.542 "rw_mbytes_per_sec": 0, 00:03:47.542 "r_mbytes_per_sec": 0, 00:03:47.542 "w_mbytes_per_sec": 0 00:03:47.542 }, 00:03:47.542 "claimed": true, 00:03:47.542 
"claim_type": "exclusive_write", 00:03:47.542 "zoned": false, 00:03:47.542 "supported_io_types": { 00:03:47.542 "read": true, 00:03:47.542 "write": true, 00:03:47.542 "unmap": true, 00:03:47.542 "write_zeroes": true, 00:03:47.542 "flush": true, 00:03:47.542 "reset": true, 00:03:47.542 "compare": false, 00:03:47.542 "compare_and_write": false, 00:03:47.542 "abort": true, 00:03:47.542 "nvme_admin": false, 00:03:47.542 "nvme_io": false 00:03:47.542 }, 00:03:47.542 "memory_domains": [ 00:03:47.542 { 00:03:47.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.542 "dma_device_type": 2 00:03:47.542 } 00:03:47.542 ], 00:03:47.542 "driver_specific": {} 00:03:47.542 }, 00:03:47.542 { 00:03:47.542 "name": "Passthru0", 00:03:47.542 "aliases": [ 00:03:47.542 "03a988d9-d180-fc57-848e-86f0cfa4f31f" 00:03:47.542 ], 00:03:47.542 "product_name": "passthru", 00:03:47.542 "block_size": 512, 00:03:47.542 "num_blocks": 16384, 00:03:47.542 "uuid": "03a988d9-d180-fc57-848e-86f0cfa4f31f", 00:03:47.542 "assigned_rate_limits": { 00:03:47.542 "rw_ios_per_sec": 0, 00:03:47.542 "rw_mbytes_per_sec": 0, 00:03:47.542 "r_mbytes_per_sec": 0, 00:03:47.542 "w_mbytes_per_sec": 0 00:03:47.542 }, 00:03:47.542 "claimed": false, 00:03:47.542 "zoned": false, 00:03:47.542 "supported_io_types": { 00:03:47.542 "read": true, 00:03:47.542 "write": true, 00:03:47.542 "unmap": true, 00:03:47.542 "write_zeroes": true, 00:03:47.542 "flush": true, 00:03:47.542 "reset": true, 00:03:47.542 "compare": false, 00:03:47.542 "compare_and_write": false, 00:03:47.542 "abort": true, 00:03:47.542 "nvme_admin": false, 00:03:47.542 "nvme_io": false 00:03:47.542 }, 00:03:47.542 "memory_domains": [ 00:03:47.542 { 00:03:47.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.542 "dma_device_type": 2 00:03:47.542 } 00:03:47.542 ], 00:03:47.542 "driver_specific": { 00:03:47.542 "passthru": { 00:03:47.542 "name": "Passthru0", 00:03:47.542 "base_bdev_name": "Malloc2" 00:03:47.542 } 00:03:47.542 } 00:03:47.542 } 00:03:47.542 ]' 00:03:47.542 05:55:55 -- rpc/rpc.sh@21 -- # jq length 00:03:47.542 05:55:55 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.542 05:55:55 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.542 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.542 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.542 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.542 05:55:55 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:47.542 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.808 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.808 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.808 05:55:55 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:47.808 05:55:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:47.808 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.808 05:55:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:47.808 05:55:55 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.808 05:55:55 -- rpc/rpc.sh@26 -- # jq length 00:03:47.808 05:55:55 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.808 00:03:47.808 real 0m0.166s 00:03:47.808 user 0m0.066s 00:03:47.808 sys 0m0.034s 00:03:47.808 05:55:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.808 05:55:55 -- common/autotest_common.sh@10 -- # set +x 00:03:47.808 ************************************ 00:03:47.808 END TEST rpc_daemon_integrity 00:03:47.808 ************************************ 00:03:47.808 05:55:55 -- 
rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:47.808 05:55:55 -- rpc/rpc.sh@84 -- # killprocess 45204 00:03:47.808 05:55:55 -- common/autotest_common.sh@926 -- # '[' -z 45204 ']' 00:03:47.808 05:55:55 -- common/autotest_common.sh@930 -- # kill -0 45204 00:03:47.808 05:55:55 -- common/autotest_common.sh@931 -- # uname 00:03:47.808 05:55:55 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:03:47.808 05:55:55 -- common/autotest_common.sh@934 -- # tail -1 00:03:47.808 05:55:55 -- common/autotest_common.sh@934 -- # ps -c -o command 45204 00:03:47.808 05:55:55 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:03:47.808 05:55:55 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:03:47.808 killing process with pid 45204 00:03:47.808 05:55:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45204' 00:03:47.808 05:55:55 -- common/autotest_common.sh@945 -- # kill 45204 00:03:47.808 05:55:55 -- common/autotest_common.sh@950 -- # wait 45204 00:03:48.076 00:03:48.076 real 0m2.027s 00:03:48.076 user 0m2.079s 00:03:48.076 sys 0m0.923s 00:03:48.076 05:55:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.076 05:55:56 -- common/autotest_common.sh@10 -- # set +x 00:03:48.076 ************************************ 00:03:48.076 END TEST rpc 00:03:48.076 ************************************ 00:03:48.076 05:55:56 -- spdk/autotest.sh@177 -- # run_test rpc_client /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:48.076 05:55:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:48.076 05:55:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:48.076 05:55:56 -- common/autotest_common.sh@10 -- # set +x 00:03:48.076 ************************************ 00:03:48.076 START TEST rpc_client 00:03:48.076 ************************************ 00:03:48.076 05:55:56 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:03:48.076 * Looking for test storage... 
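The teardown just above is worth pulling out: killprocess cannot read a process name the Linux way on this FreeBSD host, so it shells out to ps. A minimal sketch of that helper, reconstructed from the xtrace lines above (only the FreeBSD branch is exercised in this run; the Linux branch is omitted):

    killprocess() {
        pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1     # bail out if already gone
        # FreeBSD ps: -c prints the bare executable name, not the full argv;
        # tail -1 drops the COMMAND header line
        process_name=$(ps -c -o command "$pid" | tail -1)
        [ "$process_name" = "sudo" ] && return 1   # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap and propagate exit status
    }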
00:03:48.076 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/rpc_client 00:03:48.077 05:55:56 -- rpc_client/rpc_client.sh@10 -- # /usr/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:03:48.336 OK 00:03:48.337 05:55:56 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:48.337 00:03:48.337 real 0m0.195s 00:03:48.337 user 0m0.132s 00:03:48.337 sys 0m0.129s 00:03:48.337 05:55:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.337 05:55:56 -- common/autotest_common.sh@10 -- # set +x 00:03:48.337 ************************************ 00:03:48.337 END TEST rpc_client 00:03:48.337 ************************************ 00:03:48.337 05:55:56 -- spdk/autotest.sh@178 -- # run_test json_config /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:48.337 05:55:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:48.337 05:55:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:48.337 05:55:56 -- common/autotest_common.sh@10 -- # set +x 00:03:48.337 ************************************ 00:03:48.337 START TEST json_config 00:03:48.337 ************************************ 00:03:48.337 05:55:56 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:03:48.337 05:55:56 -- json_config/json_config.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:48.337 05:55:56 -- nvmf/common.sh@7 -- # uname -s 00:03:48.337 05:55:56 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:48.337 05:55:56 -- nvmf/common.sh@7 -- # return 0 00:03:48.337 05:55:56 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:03:48.337 05:55:56 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:03:48.337 05:55:56 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:03:48.337 05:55:56 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:48.337 05:55:56 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:03:48.337 05:55:56 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:03:48.337 05:55:56 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:48.337 05:55:56 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:03:48.337 05:55:56 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:48.337 05:55:56 -- json_config/json_config.sh@32 -- # declare -A app_params 00:03:48.337 05:55:56 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:03:48.337 05:55:56 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:03:48.337 05:55:56 -- json_config/json_config.sh@43 -- # last_event_id=0 00:03:48.337 05:55:56 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:48.337 INFO: JSON configuration test init 00:03:48.337 05:55:56 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:03:48.337 05:55:56 -- json_config/json_config.sh@420 -- # json_config_test_init 00:03:48.337 05:55:56 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:03:48.337 05:55:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:48.337 
05:55:56 -- common/autotest_common.sh@10 -- # set +x 00:03:48.337 05:55:56 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:03:48.337 05:55:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:48.337 05:55:56 -- common/autotest_common.sh@10 -- # set +x 00:03:48.337 05:55:56 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:03:48.337 05:55:56 -- json_config/json_config.sh@98 -- # local app=target 00:03:48.337 05:55:56 -- json_config/json_config.sh@99 -- # shift 00:03:48.337 05:55:56 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:48.337 05:55:56 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:48.337 05:55:56 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:48.337 05:55:56 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:48.337 05:55:56 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:48.337 05:55:56 -- json_config/json_config.sh@111 -- # app_pid[$app]=45411 00:03:48.337 Waiting for target to run... 00:03:48.337 05:55:56 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:48.337 05:55:56 -- json_config/json_config.sh@114 -- # waitforlisten 45411 /var/tmp/spdk_tgt.sock 00:03:48.337 05:55:56 -- common/autotest_common.sh@819 -- # '[' -z 45411 ']' 00:03:48.337 05:55:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:48.337 05:55:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:48.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:48.337 05:55:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:48.337 05:55:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:48.337 05:55:56 -- json_config/json_config.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:48.337 05:55:56 -- common/autotest_common.sh@10 -- # set +x 00:03:48.337 [2024-05-13 05:55:56.620756] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
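For reference, the launch sequence above condenses to: start spdk_tgt against a private RPC socket with --wait-for-rpc, poll until the socket answers, then pipe the generated configuration in. A rough standalone sketch; the liveness probe is an assumption, since waitforlisten's internals are not shown in this log:

    SPDK=/usr/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    # --wait-for-rpc holds the app before subsystem init so config loads first
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
    pid=$!
    for i in $(seq 1 100); do
        # probe choice (rpc_get_methods) is an assumption, not from this log
        "$SPDK/scripts/rpc.py" -s "$SOCK" -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done
    # feed the NVMe-derived config to the waiting target, as the trace shows
    "$SPDK/scripts/gen_nvme.sh" --json-with-subsystems | \
        "$SPDK/scripts/rpc.py" -s "$SOCK" load_config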
00:03:48.337 [2024-05-13 05:55:56.621044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:48.597 EAL: TSC is not safe to use in SMP mode 00:03:48.597 EAL: TSC is not invariant 00:03:48.597 [2024-05-13 05:55:56.838427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.856 [2024-05-13 05:55:56.930099] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:48.856 [2024-05-13 05:55:56.930187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.426 05:55:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:49.427 05:55:57 -- common/autotest_common.sh@852 -- # return 0 00:03:49.427 00:03:49.427 05:55:57 -- json_config/json_config.sh@115 -- # echo '' 00:03:49.427 05:55:57 -- json_config/json_config.sh@322 -- # create_accel_config 00:03:49.427 05:55:57 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:03:49.427 05:55:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:49.427 05:55:57 -- common/autotest_common.sh@10 -- # set +x 00:03:49.427 05:55:57 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:03:49.427 05:55:57 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:03:49.427 05:55:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:49.427 05:55:57 -- common/autotest_common.sh@10 -- # set +x 00:03:49.427 05:55:57 -- json_config/json_config.sh@326 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:49.427 05:55:57 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:03:49.427 05:55:57 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:49.687 [2024-05-13 05:55:57.838982] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:49.687 05:55:57 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:03:49.687 05:55:57 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:03:49.687 05:55:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:49.687 05:55:57 -- common/autotest_common.sh@10 -- # set +x 00:03:49.687 05:55:57 -- json_config/json_config.sh@48 -- # local ret=0 00:03:49.687 05:55:57 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:49.687 05:55:57 -- json_config/json_config.sh@49 -- # local enabled_types 00:03:49.687 05:55:57 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:49.687 05:55:57 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:49.687 05:55:57 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:49.947 05:55:58 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:49.947 05:55:58 -- json_config/json_config.sh@51 -- # local get_types 00:03:49.947 05:55:58 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:49.947 05:55:58 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:03:49.947 05:55:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:49.947 05:55:58 -- common/autotest_common.sh@10 -- # set +x 00:03:49.947 05:55:58 -- json_config/json_config.sh@58 -- # return 0 
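The notification-type check that just returned 0 is a single RPC plus a whole-array compare; in isolation it is roughly the following (SPDK and SOCK as in the sketch above):

    SPDK=/usr/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    enabled_types=(bdev_register bdev_unregister)
    get_types=($("$SPDK/scripts/rpc.py" -s "$SOCK" notify_get_types | jq -r '.[]'))
    # the test treats any difference, ordering included, as a failure
    if [[ "${get_types[*]}" != "${enabled_types[*]}" ]]; then
        echo "ERROR: unexpected notification types: ${get_types[*]}" >&2
        return 1
    fi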
00:03:49.947 05:55:58 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:03:49.947 05:55:58 -- json_config/json_config.sh@332 -- # create_bdev_subsystem_config 00:03:49.947 05:55:58 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:03:49.947 05:55:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:49.947 05:55:58 -- common/autotest_common.sh@10 -- # set +x 00:03:49.947 05:55:58 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:03:49.947 05:55:58 -- json_config/json_config.sh@160 -- # local expected_notifications 00:03:49.947 05:55:58 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:03:49.947 05:55:58 -- json_config/json_config.sh@164 -- # get_notifications 00:03:49.947 05:55:58 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:03:49.947 05:55:58 -- json_config/json_config.sh@64 -- # IFS=: 00:03:49.947 05:55:58 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:49.947 05:55:58 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:03:49.947 05:55:58 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:03:49.947 05:55:58 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:03:50.206 05:55:58 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:03:50.206 05:55:58 -- json_config/json_config.sh@64 -- # IFS=: 00:03:50.206 05:55:58 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:50.206 05:55:58 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:03:50.206 05:55:58 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:03:50.206 05:55:58 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:03:50.206 05:55:58 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:03:50.206 Nvme0n1p0 Nvme0n1p1 00:03:50.206 05:55:58 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:03:50.206 05:55:58 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:03:50.466 [2024-05-13 05:55:58.640607] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:03:50.466 [2024-05-13 05:55:58.640651] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:03:50.466 00:03:50.466 05:55:58 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:03:50.466 05:55:58 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:03:50.726 Malloc3 00:03:50.726 05:55:58 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:03:50.726 05:55:58 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:03:50.726 [2024-05-13 05:55:58.992618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:03:50.726 [2024-05-13 05:55:58.992675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:50.726 [2024-05-13 05:55:58.992695] vbdev_passthru.c: 676:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x82c421f00 00:03:50.726 [2024-05-13 05:55:58.992701] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:50.726 [2024-05-13 05:55:58.993123] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:50.726 [2024-05-13 05:55:58.993149] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:03:50.726 PTBdevFromMalloc3 00:03:50.726 05:55:59 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:03:50.726 05:55:59 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:03:50.986 Null0 00:03:50.986 05:55:59 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:03:50.986 05:55:59 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:03:51.245 Malloc0 00:03:51.245 05:55:59 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:03:51.245 05:55:59 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:03:51.245 Malloc1 00:03:51.246 05:55:59 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:03:51.246 05:55:59 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:03:51.814 102400+0 records in 00:03:51.814 102400+0 records out 00:03:51.814 104857600 bytes transferred in 0.465005 secs (225497912 bytes/sec) 00:03:51.814 05:55:59 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:03:51.814 05:55:59 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:03:52.074 aio_disk 00:03:52.074 05:56:00 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:03:52.074 05:56:00 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:03:52.074 05:56:00 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:03:52.333 7986831c-10ed-11ef-ba60-3508ead7bdda 00:03:52.333 05:56:00 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:03:52.333 05:56:00 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:03:52.333 05:56:00 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:03:52.333 05:56:00 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:03:52.333 05:56:00 -- json_config/json_config.sh@36 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:03:52.593 05:56:00 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:03:52.593 05:56:00 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:03:52.852 05:56:00 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:03:52.852 05:56:00 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:03:52.852 05:56:01 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:03:52.852 05:56:01 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:03:52.852 05:56:01 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:79a296af-10ed-11ef-ba60-3508ead7bdda bdev_register:79bf46b1-10ed-11ef-ba60-3508ead7bdda bdev_register:79db5a61-10ed-11ef-ba60-3508ead7bdda bdev_register:79f8a6d0-10ed-11ef-ba60-3508ead7bdda 00:03:52.852 05:56:01 -- json_config/json_config.sh@70 -- # local events_to_check 00:03:52.852 05:56:01 -- json_config/json_config.sh@71 -- # local recorded_events 00:03:52.852 05:56:01 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:03:52.852 05:56:01 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:79a296af-10ed-11ef-ba60-3508ead7bdda bdev_register:79bf46b1-10ed-11ef-ba60-3508ead7bdda bdev_register:79db5a61-10ed-11ef-ba60-3508ead7bdda bdev_register:79f8a6d0-10ed-11ef-ba60-3508ead7bdda 00:03:52.852 05:56:01 -- json_config/json_config.sh@74 -- # sort 00:03:53.112 05:56:01 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:03:53.112 05:56:01 -- json_config/json_config.sh@75 -- # sort 00:03:53.112 05:56:01 -- json_config/json_config.sh@75 -- # get_notifications 00:03:53.112 05:56:01 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:03:53.112 05:56:01 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:03:53.112 05:56:01 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:03:53.112 05:56:01 -- 
json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:79a296af-10ed-11ef-ba60-3508ead7bdda 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:79bf46b1-10ed-11ef-ba60-3508ead7bdda 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:79db5a61-10ed-11ef-ba60-3508ead7bdda 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@65 -- # echo bdev_register:79f8a6d0-10ed-11ef-ba60-3508ead7bdda 00:03:53.112 
05:56:01 -- json_config/json_config.sh@64 -- # IFS=: 00:03:53.112 05:56:01 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:03:53.112 05:56:01 -- json_config/json_config.sh@77 -- # [[ bdev_register:79a296af-10ed-11ef-ba60-3508ead7bdda bdev_register:79bf46b1-10ed-11ef-ba60-3508ead7bdda bdev_register:79db5a61-10ed-11ef-ba60-3508ead7bdda bdev_register:79f8a6d0-10ed-11ef-ba60-3508ead7bdda bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\9\a\2\9\6\a\f\-\1\0\e\d\-\1\1\e\f\-\b\a\6\0\-\3\5\0\8\e\a\d\7\b\d\d\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\9\b\f\4\6\b\1\-\1\0\e\d\-\1\1\e\f\-\b\a\6\0\-\3\5\0\8\e\a\d\7\b\d\d\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\9\d\b\5\a\6\1\-\1\0\e\d\-\1\1\e\f\-\b\a\6\0\-\3\5\0\8\e\a\d\7\b\d\d\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\9\f\8\a\6\d\0\-\1\0\e\d\-\1\1\e\f\-\b\a\6\0\-\3\5\0\8\e\a\d\7\b\d\d\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:03:53.112 05:56:01 -- json_config/json_config.sh@89 -- # cat 00:03:53.112 05:56:01 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:79a296af-10ed-11ef-ba60-3508ead7bdda bdev_register:79bf46b1-10ed-11ef-ba60-3508ead7bdda bdev_register:79db5a61-10ed-11ef-ba60-3508ead7bdda bdev_register:79f8a6d0-10ed-11ef-ba60-3508ead7bdda bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:03:53.112 Expected events matched: 00:03:53.112 bdev_register:79a296af-10ed-11ef-ba60-3508ead7bdda 00:03:53.112 bdev_register:79bf46b1-10ed-11ef-ba60-3508ead7bdda 00:03:53.112 bdev_register:79db5a61-10ed-11ef-ba60-3508ead7bdda 00:03:53.112 bdev_register:79f8a6d0-10ed-11ef-ba60-3508ead7bdda 00:03:53.112 bdev_register:Malloc0 00:03:53.112 bdev_register:Malloc0p0 00:03:53.112 bdev_register:Malloc0p1 00:03:53.112 bdev_register:Malloc0p2 00:03:53.112 bdev_register:Malloc1 00:03:53.112 bdev_register:Malloc3 00:03:53.112 bdev_register:Null0 00:03:53.112 bdev_register:Nvme0n1 00:03:53.112 bdev_register:Nvme0n1p0 00:03:53.112 bdev_register:Nvme0n1p1 00:03:53.112 bdev_register:PTBdevFromMalloc3 00:03:53.112 bdev_register:aio_disk 00:03:53.112 05:56:01 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:03:53.112 05:56:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:53.112 05:56:01 -- common/autotest_common.sh@10 -- # set +x 00:03:53.112 05:56:01 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:03:53.112 05:56:01 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:03:53.112 05:56:01 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:03:53.112 05:56:01 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:03:53.112 05:56:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:53.112 05:56:01 -- common/autotest_common.sh@10 -- # set +x 00:03:53.372 05:56:01 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:03:53.372 05:56:01 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:53.372 05:56:01 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:53.372 MallocBdevForConfigChangeCheck 00:03:53.372 05:56:01 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:03:53.372 05:56:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:53.372 05:56:01 -- common/autotest_common.sh@10 -- # set +x 00:03:53.372 05:56:01 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:03:53.372 05:56:01 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.940 INFO: shutting down applications... 00:03:53.940 05:56:01 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:03:53.940 05:56:01 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:03:53.940 05:56:01 -- json_config/json_config.sh@431 -- # json_config_clear target 00:03:53.940 05:56:01 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:03:53.940 05:56:01 -- json_config/json_config.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:53.940 [2024-05-13 05:56:02.076764] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:03:53.940 Calling clear_iscsi_subsystem 00:03:53.940 Calling clear_nvmf_subsystem 00:03:53.940 Calling clear_bdev_subsystem 00:03:53.940 Calling clear_accel_subsystem 00:03:53.940 Calling clear_sock_subsystem 00:03:53.940 Calling clear_scheduler_subsystem 00:03:53.940 Calling clear_iobuf_subsystem 00:03:53.940 Calling clear_vmd_subsystem 00:03:53.940 05:56:02 -- json_config/json_config.sh@390 -- # local config_filter=/usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:03:53.940 05:56:02 -- json_config/json_config.sh@396 -- # count=100 00:03:53.940 05:56:02 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:03:53.940 05:56:02 -- json_config/json_config.sh@398 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.940 05:56:02 -- json_config/json_config.sh@398 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:53.940 05:56:02 -- json_config/json_config.sh@398 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:03:54.515 05:56:02 -- json_config/json_config.sh@398 -- # break 00:03:54.515 05:56:02 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:03:54.515 05:56:02 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:03:54.515 05:56:02 -- json_config/json_config.sh@120 -- # local app=target 00:03:54.515 05:56:02 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:03:54.515 05:56:02 -- json_config/json_config.sh@124 -- # [[ -n 45411 ]] 00:03:54.515 05:56:02 -- json_config/json_config.sh@127 -- # kill -SIGINT 45411 00:03:54.515 05:56:02 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
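The clear-and-verify step traced above follows a simple pattern: wipe every subsystem through clear_config.py, then poll save_config until, with global parameters filtered out, nothing remains. A sketch with the retry bound taken from the trace; the sleep between retries is an assumption, since the first check already passes in this run:

    SPDK=/usr/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    "$SPDK/test/json_config/clear_config.py" -s "$SOCK" clear_config
    count=100
    while [ "$count" -gt 0 ]; do
        if "$SPDK/scripts/rpc.py" -s "$SOCK" save_config \
               | "$SPDK/test/json_config/config_filter.py" -method delete_global_parameters \
               | "$SPDK/test/json_config/config_filter.py" -method check_empty; then
            break                                  # live config is now empty
        fi
        count=$((count - 1))
        sleep 0.5                                  # assumed retry delay, not shown in the log
    done
    [ "$count" -eq 0 ] && echo 'ERROR: config not cleared' >&2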
00:03:54.515 05:56:02 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:54.515 05:56:02 -- json_config/json_config.sh@130 -- # kill -0 45411 00:03:54.515 05:56:02 -- json_config/json_config.sh@134 -- # sleep 0.5 00:03:54.774 05:56:03 -- json_config/json_config.sh@129 -- # (( i++ )) 00:03:54.774 05:56:03 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:54.774 05:56:03 -- json_config/json_config.sh@130 -- # kill -0 45411 00:03:54.774 05:56:03 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:03:54.774 05:56:03 -- json_config/json_config.sh@132 -- # break 00:03:54.774 05:56:03 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:03:54.774 SPDK target shutdown done 00:03:54.774 05:56:03 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:03:54.774 INFO: relaunching applications... 00:03:54.774 05:56:03 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:03:54.774 05:56:03 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:54.774 05:56:03 -- json_config/json_config.sh@98 -- # local app=target 00:03:54.774 05:56:03 -- json_config/json_config.sh@99 -- # shift 00:03:54.774 05:56:03 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:54.774 05:56:03 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:54.774 05:56:03 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:54.774 05:56:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:54.774 05:56:03 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:55.033 05:56:03 -- json_config/json_config.sh@111 -- # app_pid[$app]=45569 00:03:55.033 Waiting for target to run... 00:03:55.033 05:56:03 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:55.033 05:56:03 -- json_config/json_config.sh@110 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:55.033 05:56:03 -- json_config/json_config.sh@114 -- # waitforlisten 45569 /var/tmp/spdk_tgt.sock 00:03:55.033 05:56:03 -- common/autotest_common.sh@819 -- # '[' -z 45569 ']' 00:03:55.033 05:56:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:55.034 05:56:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:55.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:55.034 05:56:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:55.034 05:56:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:55.034 05:56:03 -- common/autotest_common.sh@10 -- # set +x 00:03:55.034 [2024-05-13 05:56:03.096287] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
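Shutting the first target down, above, is a SIGINT followed by a bounded liveness poll, roughly fifteen seconds in total before the test would give up. As traced:

    shutdown_app() {
        pid=$1                                     # 45411 in this run
        kill -SIGINT "$pid"
        i=0
        while [ "$i" -lt 30 ]; do
            if ! kill -0 "$pid" 2>/dev/null; then  # process has exited
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
            i=$((i + 1))
        done
        # the timeout branch (target still alive after 30 tries) is not hit here
        return 1
    }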
00:03:55.034 [2024-05-13 05:56:03.096643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:55.034 EAL: TSC is not safe to use in SMP mode 00:03:55.034 EAL: TSC is not invariant 00:03:55.034 [2024-05-13 05:56:03.312230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.292 [2024-05-13 05:56:03.398513] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:55.292 [2024-05-13 05:56:03.398592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.292 [2024-05-13 05:56:03.526697] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:03:55.292 [2024-05-13 05:56:03.526732] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:03:55.292 [2024-05-13 05:56:03.534695] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:03:55.292 [2024-05-13 05:56:03.534716] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:03:55.292 [2024-05-13 05:56:03.542704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:03:55.292 [2024-05-13 05:56:03.542725] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:03:55.292 [2024-05-13 05:56:03.542731] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:03:55.292 [2024-05-13 05:56:03.550701] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:55.551 [2024-05-13 05:56:03.619508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:03:55.551 [2024-05-13 05:56:03.619540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:55.551 [2024-05-13 05:56:03.619554] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82dbb1500 00:03:55.551 [2024-05-13 05:56:03.619559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:55.551 [2024-05-13 05:56:03.619619] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:55.551 [2024-05-13 05:56:03.619625] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:03:55.809 05:56:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:55.809 05:56:03 -- common/autotest_common.sh@852 -- # return 0 00:03:55.809 00:03:55.809 05:56:03 -- json_config/json_config.sh@115 -- # echo '' 00:03:55.809 05:56:03 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:03:55.809 INFO: Checking if target configuration is the same... 00:03:55.809 05:56:03 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:55.809 05:56:03 -- json_config/json_config.sh@441 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.oNBRBp /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:55.809 + '[' 2 -ne 2 ']' 00:03:55.809 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:55.809 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:03:55.809 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:03:55.809 +++ basename /tmp//sh-np.oNBRBp 00:03:55.809 ++ mktemp /tmp/sh-np.oNBRBp.XXX 00:03:55.809 + tmp_file_1=/tmp/sh-np.oNBRBp.VKT 00:03:55.809 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:55.809 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:55.809 + tmp_file_2=/tmp/spdk_tgt_config.json.DwF 00:03:55.809 + ret=0 00:03:55.809 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:55.809 05:56:04 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:03:55.809 05:56:04 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.068 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:56.068 + diff -u /tmp/sh-np.oNBRBp.VKT /tmp/spdk_tgt_config.json.DwF 00:03:56.068 INFO: JSON config files are the same 00:03:56.068 + echo 'INFO: JSON config files are the same' 00:03:56.068 + rm /tmp/sh-np.oNBRBp.VKT /tmp/spdk_tgt_config.json.DwF 00:03:56.068 + exit 0 00:03:56.068 05:56:04 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:03:56.068 INFO: changing configuration and checking if this can be detected... 00:03:56.068 05:56:04 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:56.068 05:56:04 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.068 05:56:04 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.327 05:56:04 -- json_config/json_config.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.yONKNW /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:56.327 + '[' 2 -ne 2 ']' 00:03:56.327 +++ dirname /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:03:56.327 ++ readlink -f /usr/home/vagrant/spdk_repo/spdk/test/json_config/../.. 
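Both configuration comparisons in this stretch use the same recipe: normalize each JSON document with config_filter.py -method sort, then diff -u the normalized copies; an empty diff means the live config still matches the one on disk. The /tmp//sh-np.* names in the trace are process-substitution FIFOs feeding save_config output in; a standalone approximation using plain temp files:

    SPDK=/usr/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    live=$(mktemp /tmp/live_config.json.XXX)
    disk=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    "$SPDK/scripts/rpc.py" -s "$SOCK" save_config \
        | "$SPDK/test/json_config/config_filter.py" -method sort > "$live"
    "$SPDK/test/json_config/config_filter.py" -method sort \
        < "$SPDK/spdk_tgt_config.json" > "$disk"
    if diff -u "$live" "$disk"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'   # any delta flips the verdict
    fi
    rm -f "$live" "$disk"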
00:03:56.327 + rootdir=/usr/home/vagrant/spdk_repo/spdk 00:03:56.327 +++ basename /tmp//sh-np.yONKNW 00:03:56.327 ++ mktemp /tmp/sh-np.yONKNW.XXX 00:03:56.327 + tmp_file_1=/tmp/sh-np.yONKNW.pKJ 00:03:56.327 +++ basename /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:56.327 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.327 + tmp_file_2=/tmp/spdk_tgt_config.json.WcD 00:03:56.327 + ret=0 00:03:56.327 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:56.327 05:56:04 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:03:56.327 05:56:04 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.587 + /usr/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:03:56.587 + diff -u /tmp/sh-np.yONKNW.pKJ /tmp/spdk_tgt_config.json.WcD 00:03:56.587 + ret=1 00:03:56.587 + echo '=== Start of file: /tmp/sh-np.yONKNW.pKJ ===' 00:03:56.587 + cat /tmp/sh-np.yONKNW.pKJ 00:03:56.587 + echo '=== End of file: /tmp/sh-np.yONKNW.pKJ ===' 00:03:56.587 + echo '' 00:03:56.587 + echo '=== Start of file: /tmp/spdk_tgt_config.json.WcD ===' 00:03:56.587 + cat /tmp/spdk_tgt_config.json.WcD 00:03:56.588 + echo '=== End of file: /tmp/spdk_tgt_config.json.WcD ===' 00:03:56.588 + echo '' 00:03:56.588 + rm /tmp/sh-np.yONKNW.pKJ /tmp/spdk_tgt_config.json.WcD 00:03:56.588 + exit 1 00:03:56.588 INFO: configuration change detected. 00:03:56.588 05:56:04 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:03:56.588 05:56:04 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:03:56.588 05:56:04 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:03:56.588 05:56:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:56.588 05:56:04 -- common/autotest_common.sh@10 -- # set +x 00:03:56.588 05:56:04 -- json_config/json_config.sh@360 -- # local ret=0 00:03:56.588 05:56:04 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:03:56.588 05:56:04 -- json_config/json_config.sh@370 -- # [[ -n 45569 ]] 00:03:56.588 05:56:04 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:03:56.588 05:56:04 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:03:56.588 05:56:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:56.588 05:56:04 -- common/autotest_common.sh@10 -- # set +x 00:03:56.588 05:56:04 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:03:56.588 05:56:04 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:03:56.588 05:56:04 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:03:56.847 05:56:05 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:03:56.847 05:56:05 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:03:57.107 05:56:05 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:03:57.107 05:56:05 -- json_config/json_config.sh@36 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:03:57.107 05:56:05 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:03:57.107 05:56:05 -- json_config/json_config.sh@36 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:03:57.367 05:56:05 -- json_config/json_config.sh@246 -- # uname -s 00:03:57.367 05:56:05 -- json_config/json_config.sh@246 -- # [[ FreeBSD = Linux ]] 00:03:57.367 05:56:05 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:03:57.367 05:56:05 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:03:57.367 05:56:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:57.367 05:56:05 -- common/autotest_common.sh@10 -- # set +x 00:03:57.367 05:56:05 -- json_config/json_config.sh@376 -- # killprocess 45569 00:03:57.367 05:56:05 -- common/autotest_common.sh@926 -- # '[' -z 45569 ']' 00:03:57.367 05:56:05 -- common/autotest_common.sh@930 -- # kill -0 45569 00:03:57.367 05:56:05 -- common/autotest_common.sh@931 -- # uname 00:03:57.367 05:56:05 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:03:57.367 05:56:05 -- common/autotest_common.sh@934 -- # ps -c -o command 45569 00:03:57.367 05:56:05 -- common/autotest_common.sh@934 -- # tail -1 00:03:57.367 05:56:05 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:03:57.367 05:56:05 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:03:57.367 killing process with pid 45569 00:03:57.367 05:56:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45569' 00:03:57.367 05:56:05 -- common/autotest_common.sh@945 -- # kill 45569 00:03:57.367 05:56:05 -- common/autotest_common.sh@950 -- # wait 45569 00:03:57.626 05:56:05 -- json_config/json_config.sh@379 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /usr/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:03:57.626 05:56:05 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:03:57.626 05:56:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:57.626 05:56:05 -- common/autotest_common.sh@10 -- # set +x 00:03:57.626 05:56:05 -- json_config/json_config.sh@381 -- # return 0 00:03:57.626 INFO: Success 00:03:57.626 05:56:05 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:03:57.626 00:03:57.626 real 0m9.441s 00:03:57.626 user 0m14.131s 00:03:57.626 sys 0m1.785s 00:03:57.626 05:56:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.626 05:56:05 -- common/autotest_common.sh@10 -- # set +x 00:03:57.626 ************************************ 00:03:57.626 END TEST json_config 00:03:57.626 ************************************ 00:03:57.895 05:56:05 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:57.895 05:56:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:57.895 05:56:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:57.895 05:56:05 -- common/autotest_common.sh@10 -- # set +x 00:03:57.895 ************************************ 00:03:57.895 START TEST json_config_extra_key 00:03:57.895 ************************************ 00:03:57.895 05:56:05 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:57.895 05:56:06 -- nvmf/common.sh@7 -- # uname -s 00:03:57.895 05:56:06 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:57.895 05:56:06 -- nvmf/common.sh@7 -- # return 0 00:03:57.895 05:56:06 -- 
json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:57.895 INFO: launching applications... 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@25 -- # shift 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /usr/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=45685 00:03:57.895 Waiting for target to run... 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:03:57.895 05:56:06 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 45685 /var/tmp/spdk_tgt.sock 00:03:57.895 05:56:06 -- common/autotest_common.sh@819 -- # '[' -z 45685 ']' 00:03:57.895 05:56:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:57.895 05:56:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:57.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:57.895 05:56:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:57.895 05:56:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:57.895 05:56:06 -- common/autotest_common.sh@10 -- # set +x 00:03:57.895 [2024-05-13 05:56:06.113856] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
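Target startup here follows the standard autotest pattern: launch spdk_tgt against the extra-key JSON, remember its pid, and block until the RPC socket answers. A condensed, hypothetical rendering of that sequence (the real waitforlisten in autotest_common.sh is more defensive; rootdir is assumed to be /usr/home/vagrant/spdk_repo/spdk as elsewhere in this log):

  "$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$rootdir/test/json_config/extra_key.json" &
  app_pid=$!                       # 45685 in this run
  for _ in $(seq 1 100); do        # poll until the socket answers
    "$rootdir/scripts/rpc.py" -t 1 -s /var/tmp/spdk_tgt.sock rpc_get_methods \
      >/dev/null 2>&1 && break
    sleep 0.1
  done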
00:03:57.895 [2024-05-13 05:56:06.114036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:03:58.168 EAL: TSC is not safe to use in SMP mode 00:03:58.168 EAL: TSC is not invariant 00:03:58.168 [2024-05-13 05:56:06.325428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.168 [2024-05-13 05:56:06.403464] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:58.168 [2024-05-13 05:56:06.403545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.737 05:56:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:58.737 05:56:07 -- common/autotest_common.sh@852 -- # return 0 00:03:58.737 00:03:58.737 05:56:07 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:03:58.737 INFO: shutting down applications... 00:03:58.737 05:56:07 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:03:58.737 05:56:07 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:03:58.737 05:56:07 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:03:58.737 05:56:07 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:03:58.737 05:56:07 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 45685 ]] 00:03:58.737 05:56:07 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 45685 00:03:58.737 05:56:07 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:03:58.737 05:56:07 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:03:58.737 05:56:07 -- json_config/json_config_extra_key.sh@50 -- # kill -0 45685 00:03:58.737 05:56:07 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:03:59.305 05:56:07 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:03:59.305 05:56:07 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:03:59.305 05:56:07 -- json_config/json_config_extra_key.sh@50 -- # kill -0 45685 00:03:59.305 05:56:07 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:03:59.305 05:56:07 -- json_config/json_config_extra_key.sh@52 -- # break 00:03:59.305 05:56:07 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:03:59.305 SPDK target shutdown done 00:03:59.305 05:56:07 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:03:59.305 Success 00:03:59.305 05:56:07 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:03:59.305 00:03:59.305 real 0m1.606s 00:03:59.305 user 0m1.290s 00:03:59.305 sys 0m0.400s 00:03:59.305 05:56:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.305 05:56:07 -- common/autotest_common.sh@10 -- # set +x 00:03:59.305 ************************************ 00:03:59.306 END TEST json_config_extra_key 00:03:59.306 ************************************ 00:03:59.306 05:56:07 -- spdk/autotest.sh@180 -- # run_test alias_rpc /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:59.306 05:56:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.306 05:56:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.306 05:56:07 -- common/autotest_common.sh@10 -- # set +x 00:03:59.306 ************************************ 00:03:59.306 START TEST alias_rpc 00:03:59.306 ************************************ 00:03:59.306 05:56:07 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:59.566 * Looking for test storage... 00:03:59.566 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:03:59.566 05:56:07 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:59.566 05:56:07 -- alias_rpc/alias_rpc.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:03:59.567 05:56:07 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=45734 00:03:59.567 05:56:07 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 45734 00:03:59.567 05:56:07 -- common/autotest_common.sh@819 -- # '[' -z 45734 ']' 00:03:59.567 05:56:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.567 05:56:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:59.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.567 05:56:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.567 05:56:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:59.567 05:56:07 -- common/autotest_common.sh@10 -- # set +x 00:03:59.567 [2024-05-13 05:56:07.803923] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:03:59.567 [2024-05-13 05:56:07.804141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:00.136 EAL: TSC is not safe to use in SMP mode 00:04:00.136 EAL: TSC is not invariant 00:04:00.136 [2024-05-13 05:56:08.230350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.136 [2024-05-13 05:56:08.319965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:00.136 [2024-05-13 05:56:08.320051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.705 05:56:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:00.705 05:56:08 -- common/autotest_common.sh@852 -- # return 0 00:04:00.705 05:56:08 -- alias_rpc/alias_rpc.sh@17 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:00.705 05:56:08 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 45734 00:04:00.705 05:56:08 -- common/autotest_common.sh@926 -- # '[' -z 45734 ']' 00:04:00.705 05:56:08 -- common/autotest_common.sh@930 -- # kill -0 45734 00:04:00.705 05:56:08 -- common/autotest_common.sh@931 -- # uname 00:04:00.705 05:56:08 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:04:00.705 05:56:08 -- common/autotest_common.sh@934 -- # ps -c -o command 45734 00:04:00.705 05:56:08 -- common/autotest_common.sh@934 -- # tail -1 00:04:00.705 05:56:08 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:04:00.705 05:56:08 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:04:00.705 killing process with pid 45734 00:04:00.705 05:56:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45734' 00:04:00.705 05:56:08 -- common/autotest_common.sh@945 -- # kill 45734 00:04:00.705 05:56:08 -- common/autotest_common.sh@950 -- # wait 45734 00:04:00.965 00:04:00.965 real 0m1.551s 00:04:00.965 user 0m1.528s 00:04:00.965 sys 0m0.666s 00:04:00.965 05:56:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.965 05:56:09 -- common/autotest_common.sh@10 -- # set +x 00:04:00.965 ************************************ 00:04:00.965 END TEST alias_rpc 00:04:00.965 
************************************ 00:04:00.965 05:56:09 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:00.965 05:56:09 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:00.965 05:56:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:00.965 05:56:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:00.965 05:56:09 -- common/autotest_common.sh@10 -- # set +x 00:04:00.965 ************************************ 00:04:00.965 START TEST spdkcli_tcp 00:04:00.965 ************************************ 00:04:00.965 05:56:09 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:01.225 * Looking for test storage... 00:04:01.225 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:01.225 05:56:09 -- spdkcli/tcp.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:01.225 05:56:09 -- spdkcli/common.sh@6 -- # spdkcli_job=/usr/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:01.225 05:56:09 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/usr/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:01.225 05:56:09 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:01.225 05:56:09 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:01.225 05:56:09 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:01.225 05:56:09 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:01.225 05:56:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:01.225 05:56:09 -- common/autotest_common.sh@10 -- # set +x 00:04:01.225 05:56:09 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=45790 00:04:01.225 05:56:09 -- spdkcli/tcp.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:01.225 05:56:09 -- spdkcli/tcp.sh@27 -- # waitforlisten 45790 00:04:01.225 05:56:09 -- common/autotest_common.sh@819 -- # '[' -z 45790 ']' 00:04:01.225 05:56:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.225 05:56:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:01.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.225 05:56:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.225 05:56:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:01.225 05:56:09 -- common/autotest_common.sh@10 -- # set +x 00:04:01.225 [2024-05-13 05:56:09.420943] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
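The subject under test in this section is the transport rather than any particular method: a socat process bridges TCP 127.0.0.1:9998 to the target's UNIX socket, and rpc.py is pointed at the TCP side. The bridge in isolation, with the exact addresses and retry/timeout flags this log uses:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!                     # 45794 in this run
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
    -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"

The long method list that follows is the reply to that single call, which doubles as a liveness check for the TCP path.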
00:04:01.225 [2024-05-13 05:56:09.421293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:01.795 EAL: TSC is not safe to use in SMP mode 00:04:01.795 EAL: TSC is not invariant 00:04:01.795 [2024-05-13 05:56:09.844969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:01.795 [2024-05-13 05:56:09.931161] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:01.795 [2024-05-13 05:56:09.931339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.795 [2024-05-13 05:56:09.931341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.054 05:56:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:02.054 05:56:10 -- common/autotest_common.sh@852 -- # return 0 00:04:02.054 05:56:10 -- spdkcli/tcp.sh@31 -- # socat_pid=45794 00:04:02.054 05:56:10 -- spdkcli/tcp.sh@33 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:02.054 05:56:10 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:02.314 [ 00:04:02.314 "spdk_get_version", 00:04:02.314 "rpc_get_methods", 00:04:02.314 "env_dpdk_get_mem_stats", 00:04:02.314 "trace_get_info", 00:04:02.314 "trace_get_tpoint_group_mask", 00:04:02.314 "trace_disable_tpoint_group", 00:04:02.314 "trace_enable_tpoint_group", 00:04:02.314 "trace_clear_tpoint_mask", 00:04:02.314 "trace_set_tpoint_mask", 00:04:02.314 "notify_get_notifications", 00:04:02.314 "notify_get_types", 00:04:02.314 "accel_get_stats", 00:04:02.314 "accel_set_options", 00:04:02.314 "accel_set_driver", 00:04:02.314 "accel_crypto_key_destroy", 00:04:02.314 "accel_crypto_keys_get", 00:04:02.314 "accel_crypto_key_create", 00:04:02.314 "accel_assign_opc", 00:04:02.314 "accel_get_module_info", 00:04:02.314 "accel_get_opc_assignments", 00:04:02.314 "bdev_get_histogram", 00:04:02.314 "bdev_enable_histogram", 00:04:02.314 "bdev_set_qos_limit", 00:04:02.314 "bdev_set_qd_sampling_period", 00:04:02.314 "bdev_get_bdevs", 00:04:02.314 "bdev_reset_iostat", 00:04:02.314 "bdev_get_iostat", 00:04:02.314 "bdev_examine", 00:04:02.314 "bdev_wait_for_examine", 00:04:02.314 "bdev_set_options", 00:04:02.314 "sock_set_default_impl", 00:04:02.314 "sock_impl_set_options", 00:04:02.314 "sock_impl_get_options", 00:04:02.314 "framework_get_pci_devices", 00:04:02.314 "framework_get_config", 00:04:02.314 "framework_get_subsystems", 00:04:02.314 "thread_set_cpumask", 00:04:02.314 "framework_get_scheduler", 00:04:02.314 "framework_set_scheduler", 00:04:02.314 "framework_get_reactors", 00:04:02.314 "thread_get_io_channels", 00:04:02.314 "thread_get_pollers", 00:04:02.314 "thread_get_stats", 00:04:02.314 "framework_monitor_context_switch", 00:04:02.314 "spdk_kill_instance", 00:04:02.314 "log_enable_timestamps", 00:04:02.314 "log_get_flags", 00:04:02.314 "log_clear_flag", 00:04:02.314 "log_set_flag", 00:04:02.314 "log_get_level", 00:04:02.314 "log_set_level", 00:04:02.314 "log_get_print_level", 00:04:02.314 "log_set_print_level", 00:04:02.314 "framework_enable_cpumask_locks", 00:04:02.314 "framework_disable_cpumask_locks", 00:04:02.314 "framework_wait_init", 00:04:02.314 "framework_start_init", 00:04:02.314 "iobuf_get_stats", 00:04:02.314 "iobuf_set_options", 00:04:02.314 "vmd_rescan", 00:04:02.314 "vmd_remove_device", 00:04:02.314 "vmd_enable", 00:04:02.314 "nvmf_subsystem_get_listeners", 00:04:02.314 "nvmf_subsystem_get_qpairs", 
00:04:02.314 "nvmf_subsystem_get_controllers", 00:04:02.314 "nvmf_get_stats", 00:04:02.314 "nvmf_get_transports", 00:04:02.314 "nvmf_create_transport", 00:04:02.314 "nvmf_get_targets", 00:04:02.314 "nvmf_delete_target", 00:04:02.314 "nvmf_create_target", 00:04:02.314 "nvmf_subsystem_allow_any_host", 00:04:02.314 "nvmf_subsystem_remove_host", 00:04:02.314 "nvmf_subsystem_add_host", 00:04:02.314 "nvmf_subsystem_remove_ns", 00:04:02.314 "nvmf_subsystem_add_ns", 00:04:02.314 "nvmf_subsystem_listener_set_ana_state", 00:04:02.314 "nvmf_discovery_get_referrals", 00:04:02.314 "nvmf_discovery_remove_referral", 00:04:02.315 "nvmf_discovery_add_referral", 00:04:02.315 "nvmf_subsystem_remove_listener", 00:04:02.315 "nvmf_subsystem_add_listener", 00:04:02.315 "nvmf_delete_subsystem", 00:04:02.315 "nvmf_create_subsystem", 00:04:02.315 "nvmf_get_subsystems", 00:04:02.315 "nvmf_set_crdt", 00:04:02.315 "nvmf_set_config", 00:04:02.315 "nvmf_set_max_subsystems", 00:04:02.315 "scsi_get_devices", 00:04:02.315 "iscsi_set_options", 00:04:02.315 "iscsi_get_auth_groups", 00:04:02.315 "iscsi_auth_group_remove_secret", 00:04:02.315 "iscsi_auth_group_add_secret", 00:04:02.315 "iscsi_delete_auth_group", 00:04:02.315 "iscsi_create_auth_group", 00:04:02.315 "iscsi_set_discovery_auth", 00:04:02.315 "iscsi_get_options", 00:04:02.315 "iscsi_target_node_request_logout", 00:04:02.315 "iscsi_target_node_set_redirect", 00:04:02.315 "iscsi_target_node_set_auth", 00:04:02.315 "iscsi_target_node_add_lun", 00:04:02.315 "iscsi_get_connections", 00:04:02.315 "iscsi_portal_group_set_auth", 00:04:02.315 "iscsi_start_portal_group", 00:04:02.315 "iscsi_delete_portal_group", 00:04:02.315 "iscsi_create_portal_group", 00:04:02.315 "iscsi_get_portal_groups", 00:04:02.315 "iscsi_delete_target_node", 00:04:02.315 "iscsi_target_node_remove_pg_ig_maps", 00:04:02.315 "iscsi_target_node_add_pg_ig_maps", 00:04:02.315 "iscsi_create_target_node", 00:04:02.315 "iscsi_get_target_nodes", 00:04:02.315 "iscsi_delete_initiator_group", 00:04:02.315 "iscsi_initiator_group_remove_initiators", 00:04:02.315 "iscsi_initiator_group_add_initiators", 00:04:02.315 "iscsi_create_initiator_group", 00:04:02.315 "iscsi_get_initiator_groups", 00:04:02.315 "iaa_scan_accel_module", 00:04:02.315 "dsa_scan_accel_module", 00:04:02.315 "ioat_scan_accel_module", 00:04:02.315 "accel_error_inject_error", 00:04:02.315 "bdev_aio_delete", 00:04:02.315 "bdev_aio_rescan", 00:04:02.315 "bdev_aio_create", 00:04:02.315 "blobfs_create", 00:04:02.315 "blobfs_detect", 00:04:02.315 "blobfs_set_cache_size", 00:04:02.315 "bdev_zone_block_delete", 00:04:02.315 "bdev_zone_block_create", 00:04:02.315 "bdev_delay_delete", 00:04:02.315 "bdev_delay_create", 00:04:02.315 "bdev_delay_update_latency", 00:04:02.315 "bdev_split_delete", 00:04:02.315 "bdev_split_create", 00:04:02.315 "bdev_error_inject_error", 00:04:02.315 "bdev_error_delete", 00:04:02.315 "bdev_error_create", 00:04:02.315 "bdev_raid_set_options", 00:04:02.315 "bdev_raid_remove_base_bdev", 00:04:02.315 "bdev_raid_add_base_bdev", 00:04:02.315 "bdev_raid_delete", 00:04:02.315 "bdev_raid_create", 00:04:02.315 "bdev_raid_get_bdevs", 00:04:02.315 "bdev_lvol_grow_lvstore", 00:04:02.315 "bdev_lvol_get_lvols", 00:04:02.315 "bdev_lvol_get_lvstores", 00:04:02.315 "bdev_lvol_delete", 00:04:02.315 "bdev_lvol_set_read_only", 00:04:02.315 "bdev_lvol_resize", 00:04:02.315 "bdev_lvol_decouple_parent", 00:04:02.315 "bdev_lvol_inflate", 00:04:02.315 "bdev_lvol_rename", 00:04:02.315 "bdev_lvol_clone_bdev", 00:04:02.315 "bdev_lvol_clone", 00:04:02.315 
"bdev_lvol_snapshot", 00:04:02.315 "bdev_lvol_create", 00:04:02.315 "bdev_lvol_delete_lvstore", 00:04:02.315 "bdev_lvol_rename_lvstore", 00:04:02.315 "bdev_lvol_create_lvstore", 00:04:02.315 "bdev_passthru_delete", 00:04:02.315 "bdev_passthru_create", 00:04:02.315 "bdev_nvme_send_cmd", 00:04:02.315 "bdev_nvme_get_path_iostat", 00:04:02.315 "bdev_nvme_get_mdns_discovery_info", 00:04:02.315 "bdev_nvme_stop_mdns_discovery", 00:04:02.315 "bdev_nvme_start_mdns_discovery", 00:04:02.315 "bdev_nvme_set_multipath_policy", 00:04:02.315 "bdev_nvme_set_preferred_path", 00:04:02.315 "bdev_nvme_get_io_paths", 00:04:02.315 "bdev_nvme_remove_error_injection", 00:04:02.315 "bdev_nvme_add_error_injection", 00:04:02.315 "bdev_nvme_get_discovery_info", 00:04:02.315 "bdev_nvme_stop_discovery", 00:04:02.315 "bdev_nvme_start_discovery", 00:04:02.315 "bdev_nvme_get_controller_health_info", 00:04:02.315 "bdev_nvme_disable_controller", 00:04:02.315 "bdev_nvme_enable_controller", 00:04:02.315 "bdev_nvme_reset_controller", 00:04:02.315 "bdev_nvme_get_transport_statistics", 00:04:02.315 "bdev_nvme_apply_firmware", 00:04:02.315 "bdev_nvme_detach_controller", 00:04:02.315 "bdev_nvme_get_controllers", 00:04:02.315 "bdev_nvme_attach_controller", 00:04:02.315 "bdev_nvme_set_hotplug", 00:04:02.315 "bdev_nvme_set_options", 00:04:02.315 "bdev_null_resize", 00:04:02.315 "bdev_null_delete", 00:04:02.315 "bdev_null_create", 00:04:02.315 "bdev_malloc_delete", 00:04:02.315 "bdev_malloc_create" 00:04:02.315 ] 00:04:02.315 05:56:10 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:02.315 05:56:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:02.315 05:56:10 -- common/autotest_common.sh@10 -- # set +x 00:04:02.315 05:56:10 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:02.315 05:56:10 -- spdkcli/tcp.sh@38 -- # killprocess 45790 00:04:02.315 05:56:10 -- common/autotest_common.sh@926 -- # '[' -z 45790 ']' 00:04:02.315 05:56:10 -- common/autotest_common.sh@930 -- # kill -0 45790 00:04:02.315 05:56:10 -- common/autotest_common.sh@931 -- # uname 00:04:02.315 05:56:10 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:04:02.315 05:56:10 -- common/autotest_common.sh@934 -- # ps -c -o command 45790 00:04:02.315 05:56:10 -- common/autotest_common.sh@934 -- # tail -1 00:04:02.315 05:56:10 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:04:02.315 05:56:10 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:04:02.315 killing process with pid 45790 00:04:02.315 05:56:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45790' 00:04:02.315 05:56:10 -- common/autotest_common.sh@945 -- # kill 45790 00:04:02.315 05:56:10 -- common/autotest_common.sh@950 -- # wait 45790 00:04:02.575 00:04:02.575 real 0m1.523s 00:04:02.575 user 0m2.232s 00:04:02.575 sys 0m0.688s 00:04:02.575 05:56:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.575 05:56:10 -- common/autotest_common.sh@10 -- # set +x 00:04:02.575 ************************************ 00:04:02.575 END TEST spdkcli_tcp 00:04:02.575 ************************************ 00:04:02.575 05:56:10 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.575 05:56:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:02.575 05:56:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:02.575 05:56:10 -- common/autotest_common.sh@10 -- # set +x 00:04:02.575 ************************************ 
00:04:02.575 START TEST dpdk_mem_utility 00:04:02.575 ************************************ 00:04:02.575 05:56:10 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:02.836 * Looking for test storage... 00:04:02.836 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:02.836 05:56:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:02.836 05:56:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=45860 00:04:02.836 05:56:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 45860 00:04:02.836 05:56:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:02.836 05:56:11 -- common/autotest_common.sh@819 -- # '[' -z 45860 ']' 00:04:02.836 05:56:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.836 05:56:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:02.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.836 05:56:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.836 05:56:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:02.836 05:56:11 -- common/autotest_common.sh@10 -- # set +x 00:04:02.836 [2024-05-13 05:56:11.013693] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:02.836 [2024-05-13 05:56:11.013924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:03.405 EAL: TSC is not safe to use in SMP mode 00:04:03.405 EAL: TSC is not invariant 00:04:03.406 [2024-05-13 05:56:11.453614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.406 [2024-05-13 05:56:11.541377] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:03.406 [2024-05-13 05:56:11.541463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.665 05:56:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:03.665 05:56:11 -- common/autotest_common.sh@852 -- # return 0 00:04:03.665 05:56:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:03.665 05:56:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:03.665 05:56:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:03.665 05:56:11 -- common/autotest_common.sh@10 -- # set +x 00:04:03.665 { 00:04:03.665 "filename": "/tmp/spdk_mem_dump.txt" 00:04:03.665 } 00:04:03.665 05:56:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:03.665 05:56:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:03.665 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:04:03.665 1 heaps totaling size 2048.000000 MiB 00:04:03.665 size: 2048.000000 MiB heap id: 0 00:04:03.665 end heaps---------- 00:04:03.665 8 mempools totaling size 592.563660 MiB 00:04:03.665 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:04:03.665 size: 153.489014 MiB name: PDU_data_out_Pool 00:04:03.665 size: 84.500549 MiB name: bdev_io_45860 00:04:03.665 size: 51.008362 MiB name: evtpool_45860 00:04:03.665 size: 50.000549 MiB name: msgpool_45860 
00:04:03.665 size: 21.758911 MiB name: PDU_Pool 00:04:03.665 size: 19.508911 MiB name: SCSI_TASK_Pool 00:04:03.665 size: 0.026123 MiB name: Session_Pool 00:04:03.665 end mempools------- 00:04:03.665 6 memzones totaling size 4.142822 MiB 00:04:03.665 size: 1.000366 MiB name: RG_ring_0_45860 00:04:03.665 size: 1.000366 MiB name: RG_ring_1_45860 00:04:03.665 size: 1.000366 MiB name: RG_ring_4_45860 00:04:03.665 size: 1.000366 MiB name: RG_ring_5_45860 00:04:03.665 size: 0.125366 MiB name: RG_ring_2_45860 00:04:03.665 size: 0.015991 MiB name: RG_ring_3_45860 00:04:03.665 end memzones------- 00:04:03.665 05:56:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:03.925 heap id: 0 total size: 2048.000000 MiB number of busy elements: 41 number of free elements: 3 00:04:03.925 list of free elements. size: 1254.071533 MiB 00:04:03.925 element at address: 0x1060000000 with size: 1254.001099 MiB 00:04:03.925 element at address: 0x10c8000000 with size: 0.070129 MiB 00:04:03.925 element at address: 0x10d98b6000 with size: 0.000305 MiB 00:04:03.925 list of standard malloc elements. size: 197.218323 MiB 00:04:03.925 element at address: 0x10cd4b0f80 with size: 132.000122 MiB 00:04:03.925 element at address: 0x10d58b5f80 with size: 64.000122 MiB 00:04:03.925 element at address: 0x10c7efff80 with size: 1.000122 MiB 00:04:03.925 element at address: 0x10dffd9f00 with size: 0.140747 MiB 00:04:03.925 element at address: 0x10c8020c80 with size: 0.062622 MiB 00:04:03.925 element at address: 0x10dfffdf80 with size: 0.007935 MiB 00:04:03.925 element at address: 0x10d58b1000 with size: 0.000305 MiB 00:04:03.925 element at address: 0x10d58b18c0 with size: 0.000305 MiB 00:04:03.925 element at address: 0x10d58b1140 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d58b1200 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d58b12c0 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d58b1380 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d58b1440 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d58b1500 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d58b15c0 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d58b1680 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d58b1740 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d58b1800 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d58b1a00 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d58b1ac0 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d58b1cc0 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d98b6140 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d98b6200 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d98b62c0 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d98b6380 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d98b6440 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d98b6500 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d98b6700 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d98b67c0 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d98b6880 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d98b6940 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d98d6c00 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d98d6cc0 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d99d6f80 with size: 
0.000183 MiB 00:04:03.925 element at address: 0x10d9ad7240 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10d9ad7300 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10dccd7640 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10dccd7840 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10dccd7900 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10dfed7c40 with size: 0.000183 MiB 00:04:03.925 element at address: 0x10dffd9e40 with size: 0.000183 MiB 00:04:03.925 list of memzone associated elements. size: 596.710144 MiB 00:04:03.925 element at address: 0x10b93f7f00 with size: 211.013000 MiB 00:04:03.925 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:04:03.925 element at address: 0x10afa82c80 with size: 152.449524 MiB 00:04:03.925 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:04:03.925 element at address: 0x10c8030d00 with size: 84.000122 MiB 00:04:03.925 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_45860_0 00:04:03.925 element at address: 0x10dccd79c0 with size: 48.000122 MiB 00:04:03.925 associated memzone info: size: 48.000000 MiB name: MP_evtpool_45860_0 00:04:03.925 element at address: 0x10d9ad73c0 with size: 48.000122 MiB 00:04:03.925 associated memzone info: size: 48.000000 MiB name: MP_msgpool_45860_0 00:04:03.925 element at address: 0x10c683d780 with size: 20.250671 MiB 00:04:03.925 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:04:03.925 element at address: 0x10ae700680 with size: 18.000671 MiB 00:04:03.925 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:04:03.925 element at address: 0x10dfcd7a40 with size: 2.000488 MiB 00:04:03.925 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_45860 00:04:03.925 element at address: 0x10dcad7440 with size: 2.000488 MiB 00:04:03.925 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_45860 00:04:03.925 element at address: 0x10dfed7d00 with size: 1.008118 MiB 00:04:03.925 associated memzone info: size: 1.007996 MiB name: MP_evtpool_45860 00:04:03.925 element at address: 0x10c7cfdc40 with size: 1.008118 MiB 00:04:03.925 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:03.925 element at address: 0x10c673b640 with size: 1.008118 MiB 00:04:03.925 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:03.925 element at address: 0x10b92f5dc0 with size: 1.008118 MiB 00:04:03.925 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:03.925 element at address: 0x10af980b40 with size: 1.008118 MiB 00:04:03.925 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:03.925 element at address: 0x10d99d7040 with size: 1.000488 MiB 00:04:03.925 associated memzone info: size: 1.000366 MiB name: RG_ring_0_45860 00:04:03.925 element at address: 0x10d98d6d80 with size: 1.000488 MiB 00:04:03.925 associated memzone info: size: 1.000366 MiB name: RG_ring_1_45860 00:04:03.925 element at address: 0x10c7dffd80 with size: 1.000488 MiB 00:04:03.925 associated memzone info: size: 1.000366 MiB name: RG_ring_4_45860 00:04:03.925 element at address: 0x10ae600480 with size: 1.000488 MiB 00:04:03.925 associated memzone info: size: 1.000366 MiB name: RG_ring_5_45860 00:04:03.925 element at address: 0x10cd430d80 with size: 0.500488 MiB 00:04:03.926 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_45860 00:04:03.926 element at address: 0x10c7c7da40 with size: 0.500488 MiB 
00:04:03.926 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:03.926 element at address: 0x10af900940 with size: 0.500488 MiB 00:04:03.926 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:03.926 element at address: 0x10c66fb440 with size: 0.250488 MiB 00:04:03.926 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:03.926 element at address: 0x10d98b6a00 with size: 0.125488 MiB 00:04:03.926 associated memzone info: size: 0.125366 MiB name: RG_ring_2_45860 00:04:03.926 element at address: 0x10c8018a80 with size: 0.031738 MiB 00:04:03.926 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:03.926 element at address: 0x10c8011f40 with size: 0.023743 MiB 00:04:03.926 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:03.926 element at address: 0x10d58b1d80 with size: 0.016113 MiB 00:04:03.926 associated memzone info: size: 0.015991 MiB name: RG_ring_3_45860 00:04:03.926 element at address: 0x10c8018080 with size: 0.002441 MiB 00:04:03.926 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:03.926 element at address: 0x10dccd7700 with size: 0.000305 MiB 00:04:03.926 associated memzone info: size: 0.000183 MiB name: MP_msgpool_45860 00:04:03.926 element at address: 0x10d58b1b80 with size: 0.000305 MiB 00:04:03.926 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_45860 00:04:03.926 element at address: 0x10d98b65c0 with size: 0.000305 MiB 00:04:03.926 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:03.926 05:56:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:03.926 05:56:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 45860 00:04:03.926 05:56:12 -- common/autotest_common.sh@926 -- # '[' -z 45860 ']' 00:04:03.926 05:56:12 -- common/autotest_common.sh@930 -- # kill -0 45860 00:04:03.926 05:56:12 -- common/autotest_common.sh@931 -- # uname 00:04:03.926 05:56:12 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:04:03.926 05:56:12 -- common/autotest_common.sh@934 -- # ps -c -o command 45860 00:04:03.926 05:56:12 -- common/autotest_common.sh@934 -- # tail -1 00:04:03.926 05:56:12 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:04:03.926 05:56:12 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:04:03.926 killing process with pid 45860 00:04:03.926 05:56:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45860' 00:04:03.926 05:56:12 -- common/autotest_common.sh@945 -- # kill 45860 00:04:03.926 05:56:12 -- common/autotest_common.sh@950 -- # wait 45860 00:04:04.186 00:04:04.186 real 0m1.466s 00:04:04.186 user 0m1.373s 00:04:04.186 sys 0m0.652s 00:04:04.186 05:56:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.186 05:56:12 -- common/autotest_common.sh@10 -- # set +x 00:04:04.186 ************************************ 00:04:04.186 END TEST dpdk_mem_utility 00:04:04.186 ************************************ 00:04:04.186 05:56:12 -- spdk/autotest.sh@187 -- # run_test event /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:04.186 05:56:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:04.186 05:56:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:04.186 05:56:12 -- common/autotest_common.sh@10 -- # set +x 00:04:04.186 ************************************ 00:04:04.186 START TEST event 00:04:04.186 ************************************ 00:04:04.186 
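The heap, mempool, and memzone report that closed the previous test is dpdk_mem_info.py reading the dump file named in the RPC reply. Reproducing it by hand would look roughly like this, assuming the default /tmp/spdk_mem_dump.txt path shown above and the default /var/tmp/spdk.sock target socket:

  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heap/mempool/memzone summary
  /usr/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # per-element dump seen above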
05:56:12 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:04.186 * Looking for test storage... 00:04:04.446 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/event 00:04:04.446 05:56:12 -- event/event.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:04.446 05:56:12 -- bdev/nbd_common.sh@6 -- # set -e 00:04:04.446 05:56:12 -- event/event.sh@45 -- # run_test event_perf /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:04.446 05:56:12 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:04.446 05:56:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:04.446 05:56:12 -- common/autotest_common.sh@10 -- # set +x 00:04:04.446 ************************************ 00:04:04.446 START TEST event_perf 00:04:04.446 ************************************ 00:04:04.446 05:56:12 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:04.446 Running I/O for 1 seconds...[2024-05-13 05:56:12.515782] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:04.446 [2024-05-13 05:56:12.516124] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:04.706 EAL: TSC is not safe to use in SMP mode 00:04:04.706 EAL: TSC is not invariant 00:04:04.706 [2024-05-13 05:56:12.943631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:04.966 [2024-05-13 05:56:13.022662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.966 [2024-05-13 05:56:13.022952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.966 [2024-05-13 05:56:13.022812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:04.966 [2024-05-13 05:56:13.022955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:05.903 Running I/O for 1 seconds... 00:04:05.903 lcore 0: 1852505 00:04:05.903 lcore 1: 1852505 00:04:05.903 lcore 2: 1852502 00:04:05.903 lcore 3: 1852505 00:04:05.903 done. 00:04:05.903 00:04:05.903 real 0m1.645s 00:04:05.903 user 0m4.150s 00:04:05.903 sys 0m0.490s 00:04:05.903 05:56:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.903 05:56:14 -- common/autotest_common.sh@10 -- # set +x 00:04:05.903 ************************************ 00:04:05.903 END TEST event_perf 00:04:05.903 ************************************ 00:04:05.903 05:56:14 -- event/event.sh@46 -- # run_test event_reactor /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:05.903 05:56:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:05.903 05:56:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:05.903 05:56:14 -- common/autotest_common.sh@10 -- # set +x 00:04:05.903 ************************************ 00:04:05.903 START TEST event_reactor 00:04:05.903 ************************************ 00:04:05.903 05:56:14 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:06.162 [2024-05-13 05:56:14.219261] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
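Since event_perf ran for one second, the four lcore counters just printed are already per-second rates, and the machine's aggregate is simply their sum:

  echo $(( 1852505 + 1852505 + 1852502 + 1852505 ))   # 7410017 events/sec total

The near-identical per-core numbers are the interesting part: with -m 0xF each reactor runs the same loop independently, so a large skew between cores would suggest scheduling interference.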
00:04:06.163 [2024-05-13 05:56:14.219617] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:06.422 EAL: TSC is not safe to use in SMP mode 00:04:06.422 EAL: TSC is not invariant 00:04:06.422 [2024-05-13 05:56:14.659989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.681 [2024-05-13 05:56:14.783526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.694 test_start 00:04:07.694 oneshot 00:04:07.694 tick 100 00:04:07.694 tick 100 00:04:07.694 tick 250 00:04:07.694 tick 100 00:04:07.694 tick 100 00:04:07.694 tick 100 00:04:07.694 tick 250 00:04:07.694 tick 500 00:04:07.694 tick 100 00:04:07.694 tick 100 00:04:07.694 tick 250 00:04:07.694 tick 100 00:04:07.694 tick 100 00:04:07.694 test_end 00:04:07.694 00:04:07.694 real 0m1.662s 00:04:07.694 user 0m1.194s 00:04:07.694 sys 0m0.466s 00:04:07.694 05:56:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.694 05:56:15 -- common/autotest_common.sh@10 -- # set +x 00:04:07.694 ************************************ 00:04:07.694 END TEST event_reactor 00:04:07.694 ************************************ 00:04:07.694 05:56:15 -- event/event.sh@47 -- # run_test event_reactor_perf /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:07.694 05:56:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:07.694 05:56:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.694 05:56:15 -- common/autotest_common.sh@10 -- # set +x 00:04:07.694 ************************************ 00:04:07.694 START TEST event_reactor_perf 00:04:07.694 ************************************ 00:04:07.694 05:56:15 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:07.694 [2024-05-13 05:56:15.930819] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:07.694 [2024-05-13 05:56:15.931112] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:08.263 EAL: TSC is not safe to use in SMP mode 00:04:08.263 EAL: TSC is not invariant 00:04:08.263 [2024-05-13 05:56:16.356061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.263 [2024-05-13 05:56:16.446140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.642 test_start 00:04:09.642 test_end 00:04:09.642 Performance: 4838877 events per second 00:04:09.642 00:04:09.642 real 0m1.613s 00:04:09.642 user 0m1.160s 00:04:09.643 sys 0m0.454s 00:04:09.643 05:56:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.643 05:56:17 -- common/autotest_common.sh@10 -- # set +x 00:04:09.643 ************************************ 00:04:09.643 END TEST event_reactor_perf 00:04:09.643 ************************************ 00:04:09.643 05:56:17 -- event/event.sh@49 -- # uname -s 00:04:09.643 05:56:17 -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:04:09.643 00:04:09.643 real 0m5.274s 00:04:09.643 user 0m6.692s 00:04:09.643 sys 0m1.626s 00:04:09.643 05:56:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.643 05:56:17 -- common/autotest_common.sh@10 -- # set +x 00:04:09.643 ************************************ 00:04:09.643 END TEST event 00:04:09.643 ************************************ 00:04:09.643 05:56:17 -- spdk/autotest.sh@188 -- # run_test thread /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:09.643 05:56:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.643 05:56:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.643 05:56:17 -- common/autotest_common.sh@10 -- # set +x 00:04:09.643 ************************************ 00:04:09.643 START TEST thread 00:04:09.643 ************************************ 00:04:09.643 05:56:17 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:09.643 * Looking for test storage... 00:04:09.643 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/thread 00:04:09.643 05:56:17 -- thread/thread.sh@11 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:09.643 05:56:17 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:04:09.643 05:56:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.643 05:56:17 -- common/autotest_common.sh@10 -- # set +x 00:04:09.643 ************************************ 00:04:09.643 START TEST thread_poller_perf 00:04:09.643 ************************************ 00:04:09.643 05:56:17 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:09.643 [2024-05-13 05:56:17.830313] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:09.643 [2024-05-13 05:56:17.830578] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:10.229 EAL: TSC is not safe to use in SMP mode 00:04:10.229 EAL: TSC is not invariant 00:04:10.229 [2024-05-13 05:56:18.249034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.229 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:10.229 [2024-05-13 05:56:18.325572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.167 ====================================== 00:04:11.167 busy:2296486906 (cyc) 00:04:11.167 total_run_count: 7363000 00:04:11.167 tsc_hz: 2294600415 (cyc) 00:04:11.167 ====================================== 00:04:11.167 poller_cost: 311 (cyc), 135 (nsec) 00:04:11.167 00:04:11.167 real 0m1.591s 00:04:11.167 user 0m1.142s 00:04:11.167 sys 0m0.447s 00:04:11.167 05:56:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.167 05:56:19 -- common/autotest_common.sh@10 -- # set +x 00:04:11.167 ************************************ 00:04:11.167 END TEST thread_poller_perf 00:04:11.167 ************************************ 00:04:11.167 05:56:19 -- thread/thread.sh@12 -- # run_test thread_poller_perf /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:11.167 05:56:19 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:04:11.167 05:56:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.167 05:56:19 -- common/autotest_common.sh@10 -- # set +x 00:04:11.167 ************************************ 00:04:11.167 START TEST thread_poller_perf 00:04:11.167 ************************************ 00:04:11.167 05:56:19 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:11.167 [2024-05-13 05:56:19.470958] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:11.167 [2024-05-13 05:56:19.471286] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:11.736 EAL: TSC is not safe to use in SMP mode 00:04:11.736 EAL: TSC is not invariant 00:04:11.736 [2024-05-13 05:56:19.899125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.736 Running 1000 pollers for 1 seconds with 0 microseconds period. 
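The poller_cost lines in these summaries are derived rather than independently measured: busy cycles divided by total_run_count, then converted to nanoseconds through tsc_hz. The 1-microsecond run above reproduces exactly:

  busy=2296486906 runs=7363000 tsc_hz=2294600415
  echo $(( busy / runs ))                         # 311 cyc per poll
  echo $(( busy * 1000000000 / tsc_hz / runs ))   # 135 nsec per poll

The same arithmetic applied to the zero-period run below yields its 22 cyc / 9 nsec figures; the gap between 311 and 22 cycles is plausibly the overhead of the timed-poller path versus pollers invoked on every reactor iteration.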
00:04:11.736 [2024-05-13 05:56:19.988459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.117 ====================================== 00:04:13.117 busy:2295861204 (cyc) 00:04:13.117 total_run_count: 102575000 00:04:13.117 tsc_hz: 2294600415 (cyc) 00:04:13.117 ====================================== 00:04:13.117 poller_cost: 22 (cyc), 9 (nsec) 00:04:13.117 00:04:13.117 real 0m1.616s 00:04:13.117 user 0m1.153s 00:04:13.117 sys 0m0.464s 00:04:13.117 05:56:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.117 05:56:21 -- common/autotest_common.sh@10 -- # set +x 00:04:13.117 ************************************ 00:04:13.117 END TEST thread_poller_perf 00:04:13.117 ************************************ 00:04:13.117 05:56:21 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:04:13.117 05:56:21 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:13.117 05:56:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:13.117 05:56:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:13.117 05:56:21 -- common/autotest_common.sh@10 -- # set +x 00:04:13.117 ************************************ 00:04:13.117 START TEST thread_spdk_lock 00:04:13.117 ************************************ 00:04:13.117 05:56:21 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:13.117 [2024-05-13 05:56:21.135834] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:13.117 [2024-05-13 05:56:21.136220] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:13.377 EAL: TSC is not safe to use in SMP mode 00:04:13.377 EAL: TSC is not invariant 00:04:13.377 [2024-05-13 05:56:21.580345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.377 [2024-05-13 05:56:21.670126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.377 [2024-05-13 05:56:21.670127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.946 [2024-05-13 05:56:22.103341] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:13.946 [2024-05-13 05:56:22.103408] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:04:13.946 [2024-05-13 05:56:22.103415] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x30ee20 00:04:13.946 [2024-05-13 05:56:22.103755] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:13.946 [2024-05-13 05:56:22.103855] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:13.946 [2024-05-13 05:56:22.103862] /usr/home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:13.946 Starting test contend 00:04:13.946 Worker Delay Wait us Hold us Total us 00:04:13.946 0 3 260117 
161173 421290 00:04:13.946 1 5 160725 261544 422269 00:04:13.946 PASS test contend 00:04:13.946 Starting test hold_by_poller 00:04:13.946 PASS test hold_by_poller 00:04:13.946 Starting test hold_by_message 00:04:13.946 PASS test hold_by_message 00:04:13.946 /usr/home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:04:13.946 100014 assertions passed 00:04:13.946 0 assertions failed 00:04:13.946 00:04:13.946 real 0m1.063s 00:04:13.946 user 0m1.013s 00:04:13.946 sys 0m0.479s 00:04:13.946 05:56:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.946 05:56:22 -- common/autotest_common.sh@10 -- # set +x 00:04:13.946 ************************************ 00:04:13.946 END TEST thread_spdk_lock 00:04:13.946 ************************************ 00:04:13.946 00:04:13.946 real 0m4.589s 00:04:13.946 user 0m3.489s 00:04:13.946 sys 0m1.581s 00:04:13.946 05:56:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.946 05:56:22 -- common/autotest_common.sh@10 -- # set +x 00:04:13.946 ************************************ 00:04:13.946 END TEST thread 00:04:13.946 ************************************ 00:04:14.206 05:56:22 -- spdk/autotest.sh@189 -- # run_test accel /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:14.206 05:56:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:14.206 05:56:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.206 05:56:22 -- common/autotest_common.sh@10 -- # set +x 00:04:14.206 ************************************ 00:04:14.206 START TEST accel 00:04:14.206 ************************************ 00:04:14.206 05:56:22 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:14.206 * Looking for test storage... 00:04:14.206 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:04:14.206 05:56:22 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:04:14.206 05:56:22 -- accel/accel.sh@74 -- # get_expected_opcs 00:04:14.206 05:56:22 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:14.206 05:56:22 -- accel/accel.sh@59 -- # spdk_tgt_pid=46113 00:04:14.206 05:56:22 -- accel/accel.sh@60 -- # waitforlisten 46113 00:04:14.206 05:56:22 -- common/autotest_common.sh@819 -- # '[' -z 46113 ']' 00:04:14.206 05:56:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.206 05:56:22 -- accel/accel.sh@58 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.U9NClJ 00:04:14.206 05:56:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:14.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.206 05:56:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.206 05:56:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:14.206 05:56:22 -- common/autotest_common.sh@10 -- # set +x 00:04:14.206 [2024-05-13 05:56:22.448466] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:04:14.206 [2024-05-13 05:56:22.448814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:14.774 EAL: TSC is not safe to use in SMP mode 00:04:14.774 EAL: TSC is not invariant 00:04:14.774 [2024-05-13 05:56:22.868449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.774 [2024-05-13 05:56:22.958628] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:14.774 [2024-05-13 05:56:22.958713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.774 05:56:22 -- accel/accel.sh@58 -- # build_accel_config 00:04:14.774 05:56:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:14.774 05:56:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:14.774 05:56:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:14.774 05:56:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:14.774 05:56:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:14.774 05:56:22 -- accel/accel.sh@41 -- # local IFS=, 00:04:14.774 05:56:22 -- accel/accel.sh@42 -- # jq -r . 00:04:15.033 05:56:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:15.033 05:56:23 -- common/autotest_common.sh@852 -- # return 0 00:04:15.033 05:56:23 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:15.033 05:56:23 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:04:15.033 05:56:23 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:04:15.033 05:56:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:15.033 05:56:23 -- common/autotest_common.sh@10 -- # set +x 00:04:15.033 05:56:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:15.033 05:56:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@63 
-- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # IFS== 00:04:15.033 05:56:23 -- accel/accel.sh@64 -- # read -r opc module 00:04:15.033 05:56:23 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:15.033 05:56:23 -- accel/accel.sh@67 -- # killprocess 46113 00:04:15.033 05:56:23 -- common/autotest_common.sh@926 -- # '[' -z 46113 ']' 00:04:15.033 05:56:23 -- common/autotest_common.sh@930 -- # kill -0 46113 00:04:15.033 05:56:23 -- common/autotest_common.sh@931 -- # uname 00:04:15.033 05:56:23 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:04:15.033 05:56:23 -- common/autotest_common.sh@934 -- # ps -c -o command 46113 00:04:15.033 05:56:23 -- common/autotest_common.sh@934 -- # tail -1 00:04:15.034 05:56:23 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:04:15.034 05:56:23 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:04:15.034 killing process with pid 46113 00:04:15.034 05:56:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46113' 00:04:15.034 05:56:23 -- common/autotest_common.sh@945 -- # kill 46113 00:04:15.034 05:56:23 -- common/autotest_common.sh@950 -- # wait 46113 00:04:15.346 05:56:23 -- accel/accel.sh@68 -- # trap - ERR 00:04:15.346 05:56:23 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:04:15.346 05:56:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:04:15.346 05:56:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:15.346 05:56:23 -- common/autotest_common.sh@10 -- # set +x 00:04:15.346 05:56:23 -- 
common/autotest_common.sh@1104 -- # accel_perf -h 00:04:15.346 05:56:23 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.01PKlm -h 00:04:15.346 05:56:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.346 05:56:23 -- common/autotest_common.sh@10 -- # set +x 00:04:15.346 05:56:23 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:15.346 05:56:23 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:15.346 05:56:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:15.346 05:56:23 -- common/autotest_common.sh@10 -- # set +x 00:04:15.346 ************************************ 00:04:15.346 START TEST accel_missing_filename 00:04:15.346 ************************************ 00:04:15.346 05:56:23 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:04:15.346 05:56:23 -- common/autotest_common.sh@640 -- # local es=0 00:04:15.346 05:56:23 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:15.347 05:56:23 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:04:15.347 05:56:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:15.347 05:56:23 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:04:15.347 05:56:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:15.347 05:56:23 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:04:15.347 05:56:23 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.8dvWrB -t 1 -w compress 00:04:15.347 [2024-05-13 05:56:23.610087] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:15.347 [2024-05-13 05:56:23.610458] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:15.913 EAL: TSC is not safe to use in SMP mode 00:04:15.913 EAL: TSC is not invariant 00:04:15.913 [2024-05-13 05:56:24.023869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.913 [2024-05-13 05:56:24.096490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.913 05:56:24 -- accel/accel.sh@12 -- # build_accel_config 00:04:15.913 05:56:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:15.913 05:56:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:15.913 05:56:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:15.913 05:56:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:15.913 05:56:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:15.913 05:56:24 -- accel/accel.sh@41 -- # local IFS=, 00:04:15.913 05:56:24 -- accel/accel.sh@42 -- # jq -r . 00:04:15.913 [2024-05-13 05:56:24.110842] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:15.913 [2024-05-13 05:56:24.138695] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:04:16.172 A filename is required. 
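The es= arithmetic traced just after this failure is the harness's NOT() wrapper normalizing the exit status: codes above 128 (signal deaths) are folded down by 128, anything still nonzero collapses to a plain failure, and the wrapper succeeds only when the command it ran did not. A simplified standalone sketch of that pattern (not the verbatim autotest_common.sh helper):

  NOT() {
      local es=0
      "$@" || es=$?
      # statuses above 128 mean "terminated by signal (es - 128)"
      ((es > 128)) && es=$((es - 128))
      # collapse any remaining failure to a plain 1, as the traced case-statement does
      ((es != 0)) && es=1
      # invert: NOT succeeds only when the wrapped command failed
      ((!es == 0))
  }

  NOT accel_perf -t 1 -w compress   # passes, because compress without -l <input file> must fail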
00:04:16.172 05:56:24 -- common/autotest_common.sh@643 -- # es=234 00:04:16.172 05:56:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:16.172 05:56:24 -- common/autotest_common.sh@652 -- # es=106 00:04:16.172 05:56:24 -- common/autotest_common.sh@653 -- # case "$es" in 00:04:16.172 05:56:24 -- common/autotest_common.sh@660 -- # es=1 00:04:16.172 05:56:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:16.172 00:04:16.172 real 0m0.629s 00:04:16.172 user 0m0.159s 00:04:16.172 sys 0m0.470s 00:04:16.172 05:56:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.172 05:56:24 -- common/autotest_common.sh@10 -- # set +x 00:04:16.172 ************************************ 00:04:16.172 END TEST accel_missing_filename 00:04:16.172 ************************************ 00:04:16.172 05:56:24 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:16.172 05:56:24 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:04:16.172 05:56:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.172 05:56:24 -- common/autotest_common.sh@10 -- # set +x 00:04:16.172 ************************************ 00:04:16.172 START TEST accel_compress_verify 00:04:16.172 ************************************ 00:04:16.172 05:56:24 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:16.172 05:56:24 -- common/autotest_common.sh@640 -- # local es=0 00:04:16.172 05:56:24 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:16.172 05:56:24 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:04:16.172 05:56:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:16.172 05:56:24 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:04:16.172 05:56:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:16.172 05:56:24 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:16.172 05:56:24 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.XCYTf3 -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:16.172 [2024-05-13 05:56:24.296745] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:16.172 [2024-05-13 05:56:24.297102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:16.432 EAL: TSC is not safe to use in SMP mode 00:04:16.432 EAL: TSC is not invariant 00:04:16.432 [2024-05-13 05:56:24.708153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.692 [2024-05-13 05:56:24.782449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.692 05:56:24 -- accel/accel.sh@12 -- # build_accel_config 00:04:16.692 05:56:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:16.692 05:56:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:16.692 05:56:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:16.692 05:56:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:16.692 05:56:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:16.692 05:56:24 -- accel/accel.sh@41 -- # local IFS=, 00:04:16.692 05:56:24 -- accel/accel.sh@42 -- # jq -r . 
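Every accel_perf launch in this log is preceded by the same build_accel_config trace: an empty accel_json_cfg array, three feature toggles that all evaluate to 0 on this bot, a [[ -n '' ]] test for an extra config fragment, then IFS=, and jq -r. A rough reconstruction of what that amounts to; the toggle and variable names are assumptions, since the trace only shows their evaluated values:

  build_accel_config() {
      accel_json_cfg=()
      # hardware-offload toggles; all zero in this run, hence the three "[[ 0 -gt 0 ]]" lines
      [[ $SPDK_TEST_ACCEL_DSA -gt 0 ]] && accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
      [[ $SPDK_TEST_ACCEL_IAA -gt 0 ]] && accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
      [[ $SPDK_TEST_IOAT -gt 0 ]] && accel_json_cfg+=('{"method": "ioat_scan_accel_module"}')
      # the "[[ -n '' ]]" line: an optional extra JSON fragment, empty here (name hypothetical)
      [[ -n $accel_extra_json ]] && accel_json_cfg+=("$accel_extra_json")
      # join whatever accumulated and validate/pretty-print it through jq
      local IFS=,
      jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
  }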
00:04:16.692 [2024-05-13 05:56:24.796666] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:16.692 [2024-05-13 05:56:24.824762] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:04:16.692 00:04:16.692 Compression does not support the verify option, aborting. 00:04:16.692 05:56:24 -- common/autotest_common.sh@643 -- # es=211 00:04:16.692 05:56:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:16.692 05:56:24 -- common/autotest_common.sh@652 -- # es=83 00:04:16.692 05:56:24 -- common/autotest_common.sh@653 -- # case "$es" in 00:04:16.692 05:56:24 -- common/autotest_common.sh@660 -- # es=1 00:04:16.692 05:56:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:16.692 00:04:16.692 real 0m0.629s 00:04:16.692 user 0m0.158s 00:04:16.692 sys 0m0.467s 00:04:16.692 05:56:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.692 05:56:24 -- common/autotest_common.sh@10 -- # set +x 00:04:16.692 ************************************ 00:04:16.692 END TEST accel_compress_verify 00:04:16.692 ************************************ 00:04:16.692 05:56:24 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:16.692 05:56:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:16.692 05:56:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.692 05:56:24 -- common/autotest_common.sh@10 -- # set +x 00:04:16.692 ************************************ 00:04:16.692 START TEST accel_wrong_workload 00:04:16.692 ************************************ 00:04:16.692 05:56:24 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:04:16.692 05:56:24 -- common/autotest_common.sh@640 -- # local es=0 00:04:16.692 05:56:24 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:16.692 05:56:24 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:04:16.692 05:56:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:16.692 05:56:24 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:04:16.692 05:56:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:16.692 05:56:24 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:04:16.692 05:56:24 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.8n4soX -t 1 -w foobar 00:04:16.692 Unsupported workload type: foobar 00:04:16.692 [2024-05-13 05:56:24.979991] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:16.692 accel_perf options: 00:04:16.692 [-h help message] 00:04:16.692 [-q queue depth per core] 00:04:16.692 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:16.692 [-T number of threads per core 00:04:16.692 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:04:16.692 [-t time in seconds] 00:04:16.692 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:16.692 [ dif_verify, , dif_generate, dif_generate_copy 00:04:16.692 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:16.692 [-l for compress/decompress workloads, name of uncompressed input file 00:04:16.692 [-S for crc32c workload, use this seed value (default 0) 00:04:16.692 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:16.692 [-f for fill workload, use this BYTE value (default 255) 00:04:16.692 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:16.692 [-y verify result if this switch is on] 00:04:16.692 [-a tasks to allocate per core (default: same value as -q)] 00:04:16.692 Can be used to spread operations across a wider range of memory. 00:04:16.692 05:56:24 -- common/autotest_common.sh@643 -- # es=1 00:04:16.692 05:56:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:16.692 05:56:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:16.692 05:56:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:16.692 00:04:16.692 real 0m0.015s 00:04:16.692 user 0m0.009s 00:04:16.692 sys 0m0.004s 00:04:16.692 05:56:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.692 05:56:24 -- common/autotest_common.sh@10 -- # set +x 00:04:16.692 ************************************ 00:04:16.692 END TEST accel_wrong_workload 00:04:16.692 ************************************ 00:04:16.952 05:56:25 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:16.952 05:56:25 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:04:16.952 05:56:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.952 05:56:25 -- common/autotest_common.sh@10 -- # set +x 00:04:16.952 ************************************ 00:04:16.952 START TEST accel_negative_buffers 00:04:16.952 ************************************ 00:04:16.952 05:56:25 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:16.952 05:56:25 -- common/autotest_common.sh@640 -- # local es=0 00:04:16.952 05:56:25 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:16.952 05:56:25 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:04:16.952 05:56:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:16.952 05:56:25 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:04:16.952 05:56:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:16.952 05:56:25 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:04:16.952 05:56:25 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.093bS3 -t 1 -w xor -y -x -1 00:04:16.952 -x option must be non-negative. 00:04:16.952 [2024-05-13 05:56:25.052980] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:16.952 accel_perf options: 00:04:16.952 [-h help message] 00:04:16.952 [-q queue depth per core] 00:04:16.952 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:16.952 [-T number of threads per core 00:04:16.952 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:04:16.952 [-t time in seconds] 00:04:16.952 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:16.952 [ dif_verify, , dif_generate, dif_generate_copy 00:04:16.952 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:16.952 [-l for compress/decompress workloads, name of uncompressed input file 00:04:16.952 [-S for crc32c workload, use this seed value (default 0) 00:04:16.952 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:16.952 [-f for fill workload, use this BYTE value (default 255) 00:04:16.952 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:16.952 [-y verify result if this switch is on] 00:04:16.952 [-a tasks to allocate per core (default: same value as -q)] 00:04:16.952 Can be used to spread operations across a wider range of memory. 00:04:16.952 05:56:25 -- common/autotest_common.sh@643 -- # es=1 00:04:16.952 05:56:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:16.952 05:56:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:16.952 05:56:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:16.952 00:04:16.952 real 0m0.015s 00:04:16.952 user 0m0.009s 00:04:16.952 sys 0m0.007s 00:04:16.952 05:56:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.952 05:56:25 -- common/autotest_common.sh@10 -- # set +x 00:04:16.952 ************************************ 00:04:16.952 END TEST accel_negative_buffers 00:04:16.952 ************************************ 00:04:16.952 05:56:25 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:16.952 05:56:25 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:04:16.952 05:56:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.952 05:56:25 -- common/autotest_common.sh@10 -- # set +x 00:04:16.952 ************************************ 00:04:16.952 START TEST accel_crc32c 00:04:16.952 ************************************ 00:04:16.952 05:56:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:16.952 05:56:25 -- accel/accel.sh@16 -- # local accel_opc 00:04:16.952 05:56:25 -- accel/accel.sh@17 -- # local accel_module 00:04:16.952 05:56:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:16.952 05:56:25 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ZN4MQY -t 1 -w crc32c -S 32 -y 00:04:16.952 [2024-05-13 05:56:25.127936] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
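Both failures above are exactly what the printed usage demands: -w must name a known workload and -x must be non-negative. For contrast, well-formed invocations of the same binary look like the crc32c run that launches next (binary path as used throughout this log; the xor line is an illustrative variant, not taken from this run):

  /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # CRC-32C, seed 32, verify on
  /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2       # xor with the minimum two source buffers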
00:04:16.952 [2024-05-13 05:56:25.128289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:17.520 EAL: TSC is not safe to use in SMP mode 00:04:17.520 EAL: TSC is not invariant 00:04:17.520 [2024-05-13 05:56:25.573388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.520 [2024-05-13 05:56:25.657234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.520 05:56:25 -- accel/accel.sh@12 -- # build_accel_config 00:04:17.520 05:56:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:17.520 05:56:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:17.520 05:56:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:17.520 05:56:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:17.520 05:56:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:17.520 05:56:25 -- accel/accel.sh@41 -- # local IFS=, 00:04:17.520 05:56:25 -- accel/accel.sh@42 -- # jq -r . 00:04:18.899 05:56:26 -- accel/accel.sh@18 -- # out=' 00:04:18.899 SPDK Configuration: 00:04:18.899 Core mask: 0x1 00:04:18.899 00:04:18.899 Accel Perf Configuration: 00:04:18.899 Workload Type: crc32c 00:04:18.899 CRC-32C seed: 32 00:04:18.899 Transfer size: 4096 bytes 00:04:18.899 Vector count 1 00:04:18.899 Module: software 00:04:18.899 Queue depth: 32 00:04:18.899 Allocate depth: 32 00:04:18.899 # threads/core: 1 00:04:18.899 Run time: 1 seconds 00:04:18.899 Verify: Yes 00:04:18.899 00:04:18.899 Running for 1 seconds... 00:04:18.899 00:04:18.899 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:18.899 ------------------------------------------------------------------------------------ 00:04:18.899 0,0 2762144/s 10789 MiB/s 0 0 00:04:18.899 ==================================================================================== 00:04:18.899 Total 2762144/s 10789 MiB/s 0 0' 00:04:18.899 05:56:26 -- accel/accel.sh@20 -- # IFS=: 00:04:18.899 05:56:26 -- accel/accel.sh@20 -- # read -r var val 00:04:18.899 05:56:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:18.899 05:56:26 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.wg6bNL -t 1 -w crc32c -S 32 -y 00:04:18.899 [2024-05-13 05:56:26.801085] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:18.899 [2024-05-13 05:56:26.801409] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:19.158 EAL: TSC is not safe to use in SMP mode 00:04:19.158 EAL: TSC is not invariant 00:04:19.158 [2024-05-13 05:56:27.217445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.158 [2024-05-13 05:56:27.304192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.158 05:56:27 -- accel/accel.sh@12 -- # build_accel_config 00:04:19.158 05:56:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:19.158 05:56:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:19.158 05:56:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:19.158 05:56:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:19.158 05:56:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:19.158 05:56:27 -- accel/accel.sh@41 -- # local IFS=, 00:04:19.158 05:56:27 -- accel/accel.sh@42 -- # jq -r . 
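The bandwidth column in the report above follows directly from the transfer rate and the 4096-byte transfer size, so it can be sanity-checked with shell arithmetic:

  echo $(( 2762144 * 4096 / 1024 / 1024 ))   # -> 10789, the MiB/s figure reported for crc32c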
00:04:19.158 05:56:27 -- accel/accel.sh@21 -- # val= 00:04:19.158 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.158 05:56:27 -- accel/accel.sh@21 -- # val= 00:04:19.158 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.158 05:56:27 -- accel/accel.sh@21 -- # val=0x1 00:04:19.158 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.158 05:56:27 -- accel/accel.sh@21 -- # val= 00:04:19.158 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.158 05:56:27 -- accel/accel.sh@21 -- # val= 00:04:19.158 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.158 05:56:27 -- accel/accel.sh@21 -- # val=crc32c 00:04:19.158 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.158 05:56:27 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.158 05:56:27 -- accel/accel.sh@21 -- # val=32 00:04:19.158 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.158 05:56:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:19.158 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.158 05:56:27 -- accel/accel.sh@21 -- # val= 00:04:19.158 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.158 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.158 05:56:27 -- accel/accel.sh@21 -- # val=software 00:04:19.159 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.159 05:56:27 -- accel/accel.sh@23 -- # accel_module=software 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.159 05:56:27 -- accel/accel.sh@21 -- # val=32 00:04:19.159 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.159 05:56:27 -- accel/accel.sh@21 -- # val=32 00:04:19.159 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.159 05:56:27 -- accel/accel.sh@21 -- # val=1 00:04:19.159 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.159 05:56:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:19.159 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.159 05:56:27 -- accel/accel.sh@21 -- # val=Yes 00:04:19.159 05:56:27 -- 
accel/accel.sh@22 -- # case "$var" in 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.159 05:56:27 -- accel/accel.sh@21 -- # val= 00:04:19.159 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:19.159 05:56:27 -- accel/accel.sh@21 -- # val= 00:04:19.159 05:56:27 -- accel/accel.sh@22 -- # case "$var" in 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # IFS=: 00:04:19.159 05:56:27 -- accel/accel.sh@20 -- # read -r var val 00:04:20.536 05:56:28 -- accel/accel.sh@21 -- # val= 00:04:20.536 05:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.536 05:56:28 -- accel/accel.sh@20 -- # IFS=: 00:04:20.536 05:56:28 -- accel/accel.sh@20 -- # read -r var val 00:04:20.536 05:56:28 -- accel/accel.sh@21 -- # val= 00:04:20.536 05:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.536 05:56:28 -- accel/accel.sh@20 -- # IFS=: 00:04:20.536 05:56:28 -- accel/accel.sh@20 -- # read -r var val 00:04:20.536 05:56:28 -- accel/accel.sh@21 -- # val= 00:04:20.536 05:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.536 05:56:28 -- accel/accel.sh@20 -- # IFS=: 00:04:20.536 05:56:28 -- accel/accel.sh@20 -- # read -r var val 00:04:20.536 05:56:28 -- accel/accel.sh@21 -- # val= 00:04:20.536 05:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.536 05:56:28 -- accel/accel.sh@20 -- # IFS=: 00:04:20.536 05:56:28 -- accel/accel.sh@20 -- # read -r var val 00:04:20.536 05:56:28 -- accel/accel.sh@21 -- # val= 00:04:20.536 05:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.536 05:56:28 -- accel/accel.sh@20 -- # IFS=: 00:04:20.536 05:56:28 -- accel/accel.sh@20 -- # read -r var val 00:04:20.536 05:56:28 -- accel/accel.sh@21 -- # val= 00:04:20.537 05:56:28 -- accel/accel.sh@22 -- # case "$var" in 00:04:20.537 05:56:28 -- accel/accel.sh@20 -- # IFS=: 00:04:20.537 05:56:28 -- accel/accel.sh@20 -- # read -r var val 00:04:20.537 05:56:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:20.537 05:56:28 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:04:20.537 05:56:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:20.537 00:04:20.537 real 0m3.316s 00:04:20.537 user 0m2.383s 00:04:20.537 sys 0m0.939s 00:04:20.537 05:56:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.537 05:56:28 -- common/autotest_common.sh@10 -- # set +x 00:04:20.537 ************************************ 00:04:20.537 END TEST accel_crc32c 00:04:20.537 ************************************ 00:04:20.537 05:56:28 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:20.537 05:56:28 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:04:20.537 05:56:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.537 05:56:28 -- common/autotest_common.sh@10 -- # set +x 00:04:20.537 ************************************ 00:04:20.537 START TEST accel_crc32c_C2 00:04:20.537 ************************************ 00:04:20.537 05:56:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:20.537 05:56:28 -- accel/accel.sh@16 -- # local accel_opc 00:04:20.537 05:56:28 -- accel/accel.sh@17 -- # local accel_module 00:04:20.537 05:56:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:20.537 05:56:28 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.OYok18 -t 1 -w crc32c -y -C 2 00:04:20.537 
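The real 0m3.316s just reported for accel_crc32c covers two separate accel_perf launches, one whose report was captured into $out and one re-run under the shell tracer, each with a 1-second measurement window plus app start-up and teardown. What run_test timed is therefore roughly equivalent to:

  time {
      /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y > /dev/null
      /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y > /dev/null
  }   # about 2 x (1 s run + ~0.6 s start/stop), i.e. the ~3.3 s seen above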
[2024-05-13 05:56:28.493426] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:20.537 [2024-05-13 05:56:28.493795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:20.796 EAL: TSC is not safe to use in SMP mode 00:04:20.796 EAL: TSC is not invariant 00:04:20.796 [2024-05-13 05:56:28.915569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.796 [2024-05-13 05:56:29.002228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.796 05:56:29 -- accel/accel.sh@12 -- # build_accel_config 00:04:20.796 05:56:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:20.796 05:56:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:20.796 05:56:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:20.796 05:56:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:20.796 05:56:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:20.796 05:56:29 -- accel/accel.sh@41 -- # local IFS=, 00:04:20.796 05:56:29 -- accel/accel.sh@42 -- # jq -r . 00:04:22.175 05:56:30 -- accel/accel.sh@18 -- # out=' 00:04:22.175 SPDK Configuration: 00:04:22.175 Core mask: 0x1 00:04:22.175 00:04:22.175 Accel Perf Configuration: 00:04:22.175 Workload Type: crc32c 00:04:22.175 CRC-32C seed: 0 00:04:22.175 Transfer size: 4096 bytes 00:04:22.175 Vector count 2 00:04:22.175 Module: software 00:04:22.175 Queue depth: 32 00:04:22.175 Allocate depth: 32 00:04:22.175 # threads/core: 1 00:04:22.175 Run time: 1 seconds 00:04:22.175 Verify: Yes 00:04:22.175 00:04:22.175 Running for 1 seconds... 00:04:22.175 00:04:22.175 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:22.175 ------------------------------------------------------------------------------------ 00:04:22.175 0,0 1478880/s 5776 MiB/s 0 0 00:04:22.175 ==================================================================================== 00:04:22.175 Total 1478880/s 5776 MiB/s 0 0' 00:04:22.175 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.175 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.175 05:56:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:22.175 05:56:30 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.N0rTJE -t 1 -w crc32c -y -C 2 00:04:22.175 [2024-05-13 05:56:30.144901] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:22.175 [2024-05-13 05:56:30.145275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:22.437 EAL: TSC is not safe to use in SMP mode 00:04:22.437 EAL: TSC is not invariant 00:04:22.437 [2024-05-13 05:56:30.575904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.437 [2024-05-13 05:56:30.662644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.437 05:56:30 -- accel/accel.sh@12 -- # build_accel_config 00:04:22.437 05:56:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:22.437 05:56:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:22.437 05:56:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:22.437 05:56:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:22.437 05:56:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:22.437 05:56:30 -- accel/accel.sh@41 -- # local IFS=, 00:04:22.437 05:56:30 -- accel/accel.sh@42 -- # jq -r .
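The same sanity check works for the two-vector run above; the MiB/s column is computed from the 4096-byte transfer size alone, which is why doubling the vector count roughly halves both the transfer rate and the reported bandwidth relative to the single-vector run:

  echo $(( 1478880 * 4096 / 1024 / 1024 ))   # -> 5776, matching both rows of the -C 2 report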
00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val= 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val= 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val=0x1 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val= 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val= 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val=crc32c 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val=0 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val= 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val=software 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@23 -- # accel_module=software 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val=32 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val=32 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val=1 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.437 05:56:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:22.437 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.437 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.438 05:56:30 -- accel/accel.sh@21 -- # val=Yes 00:04:22.438 05:56:30 -- 
accel/accel.sh@22 -- # case "$var" in 00:04:22.438 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.438 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.438 05:56:30 -- accel/accel.sh@21 -- # val= 00:04:22.438 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.438 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.438 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:22.438 05:56:30 -- accel/accel.sh@21 -- # val= 00:04:22.438 05:56:30 -- accel/accel.sh@22 -- # case "$var" in 00:04:22.438 05:56:30 -- accel/accel.sh@20 -- # IFS=: 00:04:22.438 05:56:30 -- accel/accel.sh@20 -- # read -r var val 00:04:23.822 05:56:31 -- accel/accel.sh@21 -- # val= 00:04:23.822 05:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:23.822 05:56:31 -- accel/accel.sh@20 -- # IFS=: 00:04:23.822 05:56:31 -- accel/accel.sh@20 -- # read -r var val 00:04:23.822 05:56:31 -- accel/accel.sh@21 -- # val= 00:04:23.822 05:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:23.822 05:56:31 -- accel/accel.sh@20 -- # IFS=: 00:04:23.822 05:56:31 -- accel/accel.sh@20 -- # read -r var val 00:04:23.822 05:56:31 -- accel/accel.sh@21 -- # val= 00:04:23.822 05:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:23.822 05:56:31 -- accel/accel.sh@20 -- # IFS=: 00:04:23.822 05:56:31 -- accel/accel.sh@20 -- # read -r var val 00:04:23.822 05:56:31 -- accel/accel.sh@21 -- # val= 00:04:23.822 05:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:23.822 05:56:31 -- accel/accel.sh@20 -- # IFS=: 00:04:23.822 05:56:31 -- accel/accel.sh@20 -- # read -r var val 00:04:23.822 05:56:31 -- accel/accel.sh@21 -- # val= 00:04:23.822 05:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:23.822 05:56:31 -- accel/accel.sh@20 -- # IFS=: 00:04:23.822 05:56:31 -- accel/accel.sh@20 -- # read -r var val 00:04:23.822 05:56:31 -- accel/accel.sh@21 -- # val= 00:04:23.822 05:56:31 -- accel/accel.sh@22 -- # case "$var" in 00:04:23.822 05:56:31 -- accel/accel.sh@20 -- # IFS=: 00:04:23.822 05:56:31 -- accel/accel.sh@20 -- # read -r var val 00:04:23.822 05:56:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:23.822 05:56:31 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:04:23.822 05:56:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:23.822 00:04:23.822 real 0m3.318s 00:04:23.822 user 0m2.378s 00:04:23.822 sys 0m0.953s 00:04:23.822 05:56:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.822 05:56:31 -- common/autotest_common.sh@10 -- # set +x 00:04:23.822 ************************************ 00:04:23.822 END TEST accel_crc32c_C2 00:04:23.822 ************************************ 00:04:23.822 05:56:31 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:23.822 05:56:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:23.822 05:56:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:23.822 05:56:31 -- common/autotest_common.sh@10 -- # set +x 00:04:23.822 ************************************ 00:04:23.822 START TEST accel_copy 00:04:23.822 ************************************ 00:04:23.822 05:56:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:04:23.822 05:56:31 -- accel/accel.sh@16 -- # local accel_opc 00:04:23.822 05:56:31 -- accel/accel.sh@17 -- # local accel_module 00:04:23.822 05:56:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:04:23.822 05:56:31 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.0J9Jty -t 1 -w copy -y 00:04:23.822 [2024-05-13 05:56:31.867623] Starting 
SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:23.822 [2024-05-13 05:56:31.867972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:24.082 EAL: TSC is not safe to use in SMP mode 00:04:24.082 EAL: TSC is not invariant 00:04:24.082 [2024-05-13 05:56:32.292834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.082 [2024-05-13 05:56:32.378706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.082 05:56:32 -- accel/accel.sh@12 -- # build_accel_config 00:04:24.082 05:56:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:24.082 05:56:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:24.082 05:56:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:24.082 05:56:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:24.082 05:56:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:24.082 05:56:32 -- accel/accel.sh@41 -- # local IFS=, 00:04:24.082 05:56:32 -- accel/accel.sh@42 -- # jq -r . 00:04:25.461 05:56:33 -- accel/accel.sh@18 -- # out=' 00:04:25.461 SPDK Configuration: 00:04:25.461 Core mask: 0x1 00:04:25.461 00:04:25.461 Accel Perf Configuration: 00:04:25.461 Workload Type: copy 00:04:25.461 Transfer size: 4096 bytes 00:04:25.461 Vector count 1 00:04:25.461 Module: software 00:04:25.461 Queue depth: 32 00:04:25.461 Allocate depth: 32 00:04:25.461 # threads/core: 1 00:04:25.461 Run time: 1 seconds 00:04:25.461 Verify: Yes 00:04:25.461 00:04:25.461 Running for 1 seconds... 00:04:25.461 00:04:25.461 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:25.461 ------------------------------------------------------------------------------------ 00:04:25.461 0,0 2678144/s 10461 MiB/s 0 0 00:04:25.461 ==================================================================================== 00:04:25.461 Total 2678144/s 10461 MiB/s 0 0' 00:04:25.461 05:56:33 -- accel/accel.sh@20 -- # IFS=: 00:04:25.461 05:56:33 -- accel/accel.sh@20 -- # read -r var val 00:04:25.461 05:56:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:25.461 05:56:33 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.tUmWwj -t 1 -w copy -y 00:04:25.461 [2024-05-13 05:56:33.507967] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:25.461 [2024-05-13 05:56:33.508098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:25.721 EAL: TSC is not safe to use in SMP mode 00:04:25.721 EAL: TSC is not invariant 00:04:25.721 [2024-05-13 05:56:33.915754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.721 [2024-05-13 05:56:33.988004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.721 05:56:33 -- accel/accel.sh@12 -- # build_accel_config 00:04:25.721 05:56:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:25.721 05:56:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:25.721 05:56:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:25.721 05:56:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:25.721 05:56:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:25.721 05:56:33 -- accel/accel.sh@41 -- # local IFS=, 00:04:25.721 05:56:33 -- accel/accel.sh@42 -- # jq -r . 
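The blocks of val= lines in these traces, like the one above for crc32c -C 2 and the one that follows for copy, are accel.sh parsing the captured report line by line: each line is split on ':' into a key and a value, the workload name and the module that served it are remembered, and the closing [[ -n software ]] / [[ -n copy ]] checks assert that both were found and agree with the opc map fetched earlier over RPC. A rough sketch of the pattern, not the verbatim script:

  while IFS=: read -r var val; do
      case "$var" in
          *"Workload Type"*) accel_opc=${val//[[:space:]]/} ;;
          *Module*) accel_module=${val//[[:space:]]/} ;;
      esac
  done <<< "$out"
  [[ -n $accel_module ]]                                  # "software" in this run
  [[ -n $accel_opc ]]                                     # e.g. "copy"
  [[ $accel_module == "${expected_opcs[$accel_opc]}" ]]   # matches the RPC-derived map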
00:04:25.721 05:56:33 -- accel/accel.sh@21 -- # val= 00:04:25.721 05:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:33 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:33 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:33 -- accel/accel.sh@21 -- # val= 00:04:25.721 05:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:33 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:33 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:33 -- accel/accel.sh@21 -- # val=0x1 00:04:25.721 05:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:33 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:33 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:33 -- accel/accel.sh@21 -- # val= 00:04:25.721 05:56:33 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:33 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:33 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:34 -- accel/accel.sh@21 -- # val= 00:04:25.721 05:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:34 -- accel/accel.sh@21 -- # val=copy 00:04:25.721 05:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:34 -- accel/accel.sh@24 -- # accel_opc=copy 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:25.721 05:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:34 -- accel/accel.sh@21 -- # val= 00:04:25.721 05:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:34 -- accel/accel.sh@21 -- # val=software 00:04:25.721 05:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:34 -- accel/accel.sh@23 -- # accel_module=software 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:34 -- accel/accel.sh@21 -- # val=32 00:04:25.721 05:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:34 -- accel/accel.sh@21 -- # val=32 00:04:25.721 05:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:34 -- accel/accel.sh@21 -- # val=1 00:04:25.721 05:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:25.721 05:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:34 -- accel/accel.sh@21 -- # val=Yes 00:04:25.721 05:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:34 -- accel/accel.sh@21 -- # val= 00:04:25.721 05:56:34 -- accel/accel.sh@22 -- 
# case "$var" in 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # read -r var val 00:04:25.721 05:56:34 -- accel/accel.sh@21 -- # val= 00:04:25.721 05:56:34 -- accel/accel.sh@22 -- # case "$var" in 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # IFS=: 00:04:25.721 05:56:34 -- accel/accel.sh@20 -- # read -r var val 00:04:27.098 05:56:35 -- accel/accel.sh@21 -- # val= 00:04:27.098 05:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:04:27.098 05:56:35 -- accel/accel.sh@20 -- # IFS=: 00:04:27.098 05:56:35 -- accel/accel.sh@20 -- # read -r var val 00:04:27.098 05:56:35 -- accel/accel.sh@21 -- # val= 00:04:27.098 05:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:04:27.098 05:56:35 -- accel/accel.sh@20 -- # IFS=: 00:04:27.098 05:56:35 -- accel/accel.sh@20 -- # read -r var val 00:04:27.098 05:56:35 -- accel/accel.sh@21 -- # val= 00:04:27.098 05:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:04:27.098 05:56:35 -- accel/accel.sh@20 -- # IFS=: 00:04:27.098 05:56:35 -- accel/accel.sh@20 -- # read -r var val 00:04:27.098 05:56:35 -- accel/accel.sh@21 -- # val= 00:04:27.098 05:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:04:27.098 05:56:35 -- accel/accel.sh@20 -- # IFS=: 00:04:27.098 05:56:35 -- accel/accel.sh@20 -- # read -r var val 00:04:27.098 05:56:35 -- accel/accel.sh@21 -- # val= 00:04:27.098 05:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:04:27.098 05:56:35 -- accel/accel.sh@20 -- # IFS=: 00:04:27.098 05:56:35 -- accel/accel.sh@20 -- # read -r var val 00:04:27.098 05:56:35 -- accel/accel.sh@21 -- # val= 00:04:27.098 05:56:35 -- accel/accel.sh@22 -- # case "$var" in 00:04:27.098 05:56:35 -- accel/accel.sh@20 -- # IFS=: 00:04:27.098 05:56:35 -- accel/accel.sh@20 -- # read -r var val 00:04:27.098 05:56:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:27.098 05:56:35 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:04:27.098 05:56:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:27.098 00:04:27.098 real 0m3.262s 00:04:27.098 user 0m2.347s 00:04:27.098 sys 0m0.929s 00:04:27.098 05:56:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.098 05:56:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.098 ************************************ 00:04:27.098 END TEST accel_copy 00:04:27.098 ************************************ 00:04:27.098 05:56:35 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:27.098 05:56:35 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:04:27.098 05:56:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.098 05:56:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.098 ************************************ 00:04:27.098 START TEST accel_fill 00:04:27.098 ************************************ 00:04:27.098 05:56:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:27.098 05:56:35 -- accel/accel.sh@16 -- # local accel_opc 00:04:27.098 05:56:35 -- accel/accel.sh@17 -- # local accel_module 00:04:27.098 05:56:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:27.098 05:56:35 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.xJlZS0 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:27.098 [2024-05-13 05:56:35.177647] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:27.098 [2024-05-13 05:56:35.178026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:27.358 EAL: TSC is not safe to use in SMP mode 00:04:27.358 EAL: TSC is not invariant 00:04:27.358 [2024-05-13 05:56:35.603347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.618 [2024-05-13 05:56:35.690271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.618 05:56:35 -- accel/accel.sh@12 -- # build_accel_config 00:04:27.618 05:56:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:27.618 05:56:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:27.618 05:56:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:27.618 05:56:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:27.618 05:56:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:27.618 05:56:35 -- accel/accel.sh@41 -- # local IFS=, 00:04:27.618 05:56:35 -- accel/accel.sh@42 -- # jq -r . 00:04:28.557 05:56:36 -- accel/accel.sh@18 -- # out=' 00:04:28.557 SPDK Configuration: 00:04:28.557 Core mask: 0x1 00:04:28.557 00:04:28.557 Accel Perf Configuration: 00:04:28.557 Workload Type: fill 00:04:28.557 Fill pattern: 0x80 00:04:28.557 Transfer size: 4096 bytes 00:04:28.557 Vector count 1 00:04:28.557 Module: software 00:04:28.557 Queue depth: 64 00:04:28.557 Allocate depth: 64 00:04:28.557 # threads/core: 1 00:04:28.557 Run time: 1 seconds 00:04:28.557 Verify: Yes 00:04:28.557 00:04:28.557 Running for 1 seconds... 00:04:28.557 00:04:28.557 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:28.557 ------------------------------------------------------------------------------------ 00:04:28.557 0,0 3154816/s 12323 MiB/s 0 0 00:04:28.557 ==================================================================================== 00:04:28.557 Total 3154816/s 12323 MiB/s 0 0' 00:04:28.557 05:56:36 -- accel/accel.sh@20 -- # IFS=: 00:04:28.557 05:56:36 -- accel/accel.sh@20 -- # read -r var val 00:04:28.557 05:56:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:28.557 05:56:36 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.3LnFUE -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:28.557 [2024-05-13 05:56:36.834315] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:28.557 [2024-05-13 05:56:36.834664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:29.127 EAL: TSC is not safe to use in SMP mode 00:04:29.127 EAL: TSC is not invariant 00:04:29.127 [2024-05-13 05:56:37.274400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.127 [2024-05-13 05:56:37.362970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.127 05:56:37 -- accel/accel.sh@12 -- # build_accel_config 00:04:29.127 05:56:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:29.127 05:56:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:29.127 05:56:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:29.127 05:56:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:29.127 05:56:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:29.127 05:56:37 -- accel/accel.sh@41 -- # local IFS=, 00:04:29.127 05:56:37 -- accel/accel.sh@42 -- # jq -r . 
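The first accel_fill pass above reports 3154816 transfers/s at 12323 MiB/s using the 0x80 pattern. Both figures can be sanity-checked from a shell; the -f flag takes the pattern byte in decimal, and the bandwidth column is just transfers/s times the 4096-byte transfer size (values copied from the report above):

    printf '0x%02x\n' 128                # the "-f 128" flag is the 0x80 fill pattern
    echo $((3154816 * 4096 / 1048576))   # 12323 -> matches the reported MiB/s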
00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val= 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val= 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val=0x1 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val= 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val= 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val=fill 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@24 -- # accel_opc=fill 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val=0x80 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val= 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val=software 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@23 -- # accel_module=software 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val=64 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val=64 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val=1 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val=Yes 00:04:29.127 05:56:37 -- accel/accel.sh@22 
-- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val= 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:29.127 05:56:37 -- accel/accel.sh@21 -- # val= 00:04:29.127 05:56:37 -- accel/accel.sh@22 -- # case "$var" in 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # IFS=: 00:04:29.127 05:56:37 -- accel/accel.sh@20 -- # read -r var val 00:04:30.510 05:56:38 -- accel/accel.sh@21 -- # val= 00:04:30.510 05:56:38 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.510 05:56:38 -- accel/accel.sh@20 -- # IFS=: 00:04:30.510 05:56:38 -- accel/accel.sh@20 -- # read -r var val 00:04:30.510 05:56:38 -- accel/accel.sh@21 -- # val= 00:04:30.510 05:56:38 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.510 05:56:38 -- accel/accel.sh@20 -- # IFS=: 00:04:30.510 05:56:38 -- accel/accel.sh@20 -- # read -r var val 00:04:30.510 05:56:38 -- accel/accel.sh@21 -- # val= 00:04:30.510 05:56:38 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.510 05:56:38 -- accel/accel.sh@20 -- # IFS=: 00:04:30.510 05:56:38 -- accel/accel.sh@20 -- # read -r var val 00:04:30.510 05:56:38 -- accel/accel.sh@21 -- # val= 00:04:30.510 05:56:38 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.510 05:56:38 -- accel/accel.sh@20 -- # IFS=: 00:04:30.510 05:56:38 -- accel/accel.sh@20 -- # read -r var val 00:04:30.510 05:56:38 -- accel/accel.sh@21 -- # val= 00:04:30.510 05:56:38 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.510 05:56:38 -- accel/accel.sh@20 -- # IFS=: 00:04:30.510 05:56:38 -- accel/accel.sh@20 -- # read -r var val 00:04:30.510 05:56:38 -- accel/accel.sh@21 -- # val= 00:04:30.510 05:56:38 -- accel/accel.sh@22 -- # case "$var" in 00:04:30.510 05:56:38 -- accel/accel.sh@20 -- # IFS=: 00:04:30.510 05:56:38 -- accel/accel.sh@20 -- # read -r var val 00:04:30.510 05:56:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:30.510 05:56:38 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:04:30.510 05:56:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:30.510 00:04:30.510 real 0m3.334s 00:04:30.510 user 0m2.406s 00:04:30.510 sys 0m0.942s 00:04:30.510 05:56:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.510 05:56:38 -- common/autotest_common.sh@10 -- # set +x 00:04:30.510 ************************************ 00:04:30.510 END TEST accel_fill 00:04:30.510 ************************************ 00:04:30.510 05:56:38 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:04:30.510 05:56:38 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:30.510 05:56:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.510 05:56:38 -- common/autotest_common.sh@10 -- # set +x 00:04:30.510 ************************************ 00:04:30.510 START TEST accel_copy_crc32c 00:04:30.510 ************************************ 00:04:30.510 05:56:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:04:30.510 05:56:38 -- accel/accel.sh@16 -- # local accel_opc 00:04:30.510 05:56:38 -- accel/accel.sh@17 -- # local accel_module 00:04:30.510 05:56:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:30.510 05:56:38 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ZjdNdv -t 1 -w copy_crc32c -y 00:04:30.510 [2024-05-13 
05:56:38.549490] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:30.510 [2024-05-13 05:56:38.549693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:30.770 EAL: TSC is not safe to use in SMP mode 00:04:30.770 EAL: TSC is not invariant 00:04:30.770 [2024-05-13 05:56:39.024305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.029 [2024-05-13 05:56:39.110952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.029 05:56:39 -- accel/accel.sh@12 -- # build_accel_config 00:04:31.029 05:56:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:31.029 05:56:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:31.029 05:56:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:31.029 05:56:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:31.029 05:56:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:31.029 05:56:39 -- accel/accel.sh@41 -- # local IFS=, 00:04:31.029 05:56:39 -- accel/accel.sh@42 -- # jq -r . 00:04:31.977 05:56:40 -- accel/accel.sh@18 -- # out=' 00:04:31.977 SPDK Configuration: 00:04:31.977 Core mask: 0x1 00:04:31.977 00:04:31.977 Accel Perf Configuration: 00:04:31.977 Workload Type: copy_crc32c 00:04:31.977 CRC-32C seed: 0 00:04:31.977 Vector size: 4096 bytes 00:04:31.977 Transfer size: 4096 bytes 00:04:31.977 Vector count 1 00:04:31.977 Module: software 00:04:31.977 Queue depth: 32 00:04:31.977 Allocate depth: 32 00:04:31.977 # threads/core: 1 00:04:31.977 Run time: 1 seconds 00:04:31.977 Verify: Yes 00:04:31.977 00:04:31.978 Running for 1 seconds... 00:04:31.978 00:04:31.978 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:31.978 ------------------------------------------------------------------------------------ 00:04:31.978 0,0 1487584/s 5810 MiB/s 0 0 00:04:31.978 ==================================================================================== 00:04:31.978 Total 1487584/s 5810 MiB/s 0 0' 00:04:31.978 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:31.978 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:31.978 05:56:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:31.978 05:56:40 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.r2R3jV -t 1 -w copy_crc32c -y 00:04:31.978 [2024-05-13 05:56:40.246048] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:31.978 [2024-05-13 05:56:40.246365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:32.547 EAL: TSC is not safe to use in SMP mode 00:04:32.547 EAL: TSC is not invariant 00:04:32.548 [2024-05-13 05:56:40.664597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.548 [2024-05-13 05:56:40.750517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.548 05:56:40 -- accel/accel.sh@12 -- # build_accel_config 00:04:32.548 05:56:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:32.548 05:56:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:32.548 05:56:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:32.548 05:56:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:32.548 05:56:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:32.548 05:56:40 -- accel/accel.sh@41 -- # local IFS=, 00:04:32.548 05:56:40 -- accel/accel.sh@42 -- # jq -r . 
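The first copy_crc32c pass lands at 1487584 transfers/s, a bit under half the plain fill rate, which is plausible for a software path that now also computes a CRC over every 4096-byte buffer. The bandwidth column checks out the same way as before:

    echo $((1487584 * 4096 / 1048576))   # 5810 -> matches the reported MiB/s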
00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val= 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val= 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val=0x1 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val= 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val= 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val=copy_crc32c 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val=0 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val= 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val=software 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@23 -- # accel_module=software 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val=32 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val=32 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val=1 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:32.548 05:56:40 
-- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val=Yes 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val= 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:32.548 05:56:40 -- accel/accel.sh@21 -- # val= 00:04:32.548 05:56:40 -- accel/accel.sh@22 -- # case "$var" in 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # IFS=: 00:04:32.548 05:56:40 -- accel/accel.sh@20 -- # read -r var val 00:04:33.925 05:56:41 -- accel/accel.sh@21 -- # val= 00:04:33.925 05:56:41 -- accel/accel.sh@22 -- # case "$var" in 00:04:33.925 05:56:41 -- accel/accel.sh@20 -- # IFS=: 00:04:33.925 05:56:41 -- accel/accel.sh@20 -- # read -r var val 00:04:33.925 05:56:41 -- accel/accel.sh@21 -- # val= 00:04:33.925 05:56:41 -- accel/accel.sh@22 -- # case "$var" in 00:04:33.925 05:56:41 -- accel/accel.sh@20 -- # IFS=: 00:04:33.925 05:56:41 -- accel/accel.sh@20 -- # read -r var val 00:04:33.925 05:56:41 -- accel/accel.sh@21 -- # val= 00:04:33.925 05:56:41 -- accel/accel.sh@22 -- # case "$var" in 00:04:33.925 05:56:41 -- accel/accel.sh@20 -- # IFS=: 00:04:33.925 05:56:41 -- accel/accel.sh@20 -- # read -r var val 00:04:33.925 05:56:41 -- accel/accel.sh@21 -- # val= 00:04:33.925 05:56:41 -- accel/accel.sh@22 -- # case "$var" in 00:04:33.925 05:56:41 -- accel/accel.sh@20 -- # IFS=: 00:04:33.925 05:56:41 -- accel/accel.sh@20 -- # read -r var val 00:04:33.925 05:56:41 -- accel/accel.sh@21 -- # val= 00:04:33.925 05:56:41 -- accel/accel.sh@22 -- # case "$var" in 00:04:33.925 05:56:41 -- accel/accel.sh@20 -- # IFS=: 00:04:33.925 05:56:41 -- accel/accel.sh@20 -- # read -r var val 00:04:33.925 05:56:41 -- accel/accel.sh@21 -- # val= 00:04:33.925 05:56:41 -- accel/accel.sh@22 -- # case "$var" in 00:04:33.925 05:56:41 -- accel/accel.sh@20 -- # IFS=: 00:04:33.925 05:56:41 -- accel/accel.sh@20 -- # read -r var val 00:04:33.925 05:56:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:33.925 05:56:41 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:04:33.925 05:56:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:33.925 00:04:33.925 real 0m3.341s 00:04:33.925 user 0m2.376s 00:04:33.925 sys 0m0.981s 00:04:33.925 05:56:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.925 05:56:41 -- common/autotest_common.sh@10 -- # set +x 00:04:33.925 ************************************ 00:04:33.925 END TEST accel_copy_crc32c 00:04:33.925 ************************************ 00:04:33.925 05:56:41 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:04:33.925 05:56:41 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:04:33.925 05:56:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.925 05:56:41 -- common/autotest_common.sh@10 -- # set +x 00:04:33.925 ************************************ 00:04:33.925 START TEST accel_copy_crc32c_C2 00:04:33.925 ************************************ 00:04:33.925 05:56:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:04:33.925 05:56:41 -- accel/accel.sh@16 -- # local accel_opc 00:04:33.925 05:56:41 -- accel/accel.sh@17 -- # 
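copy_crc32c completes in 3.341 s, and the _C2 variant repeats it with -C 2, i.e. two chained 4096-byte source vectors per operation; the next report accordingly prints Vector count 2 and an 8192-byte transfer size. Stand-alone form, flags copied from the run_test line above (same path assumption as before):

    ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2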
local accel_module 00:04:33.925 05:56:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:33.925 05:56:41 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.C45a0O -t 1 -w copy_crc32c -y -C 2 00:04:33.925 [2024-05-13 05:56:41.952168] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:33.925 [2024-05-13 05:56:41.952513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:34.185 EAL: TSC is not safe to use in SMP mode 00:04:34.185 EAL: TSC is not invariant 00:04:34.185 [2024-05-13 05:56:42.367357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.185 [2024-05-13 05:56:42.451202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.185 05:56:42 -- accel/accel.sh@12 -- # build_accel_config 00:04:34.185 05:56:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:34.185 05:56:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:34.185 05:56:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:34.185 05:56:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:34.185 05:56:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:34.185 05:56:42 -- accel/accel.sh@41 -- # local IFS=, 00:04:34.185 05:56:42 -- accel/accel.sh@42 -- # jq -r . 00:04:35.566 05:56:43 -- accel/accel.sh@18 -- # out=' 00:04:35.566 SPDK Configuration: 00:04:35.566 Core mask: 0x1 00:04:35.566 00:04:35.566 Accel Perf Configuration: 00:04:35.566 Workload Type: copy_crc32c 00:04:35.566 CRC-32C seed: 0 00:04:35.566 Vector size: 4096 bytes 00:04:35.566 Transfer size: 8192 bytes 00:04:35.566 Vector count 2 00:04:35.566 Module: software 00:04:35.566 Queue depth: 32 00:04:35.566 Allocate depth: 32 00:04:35.566 # threads/core: 1 00:04:35.566 Run time: 1 seconds 00:04:35.566 Verify: Yes 00:04:35.566 00:04:35.566 Running for 1 seconds... 00:04:35.566 00:04:35.566 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:35.566 ------------------------------------------------------------------------------------ 00:04:35.566 0,0 792576/s 6192 MiB/s 0 0 00:04:35.566 ==================================================================================== 00:04:35.566 Total 792576/s 3096 MiB/s 0 0' 00:04:35.566 05:56:43 -- accel/accel.sh@20 -- # IFS=: 00:04:35.566 05:56:43 -- accel/accel.sh@20 -- # read -r var val 00:04:35.566 05:56:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:35.566 05:56:43 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.k2kXdS -t 1 -w copy_crc32c -y -C 2 00:04:35.566 [2024-05-13 05:56:43.595611] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
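One oddity in the table above: with a single core the Total row should simply repeat the per-core row, yet they disagree (6192 vs 3096 MiB/s). The per-core figure is computed from the full 8192-byte transfer while the Total appears to use only the 4096-byte vector size, which is why they differ by exactly 2x:

    echo $((792576 * 8192 / 1048576))   # 6192 -> the per-core row (whole transfer)
    echo $((792576 * 4096 / 1048576))   # 3096 -> the Total row (one vector only)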
00:04:35.566 [2024-05-13 05:56:43.595973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:35.825 EAL: TSC is not safe to use in SMP mode 00:04:35.825 EAL: TSC is not invariant 00:04:35.825 [2024-05-13 05:56:44.023031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.825 [2024-05-13 05:56:44.106697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.825 05:56:44 -- accel/accel.sh@12 -- # build_accel_config 00:04:35.825 05:56:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:35.825 05:56:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:35.825 05:56:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:35.825 05:56:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:35.825 05:56:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:35.825 05:56:44 -- accel/accel.sh@41 -- # local IFS=, 00:04:35.825 05:56:44 -- accel/accel.sh@42 -- # jq -r . 00:04:35.825 05:56:44 -- accel/accel.sh@21 -- # val= 00:04:35.825 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.825 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.825 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.825 05:56:44 -- accel/accel.sh@21 -- # val= 00:04:35.825 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.825 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.825 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.825 05:56:44 -- accel/accel.sh@21 -- # val=0x1 00:04:35.825 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.825 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val= 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val= 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val=copy_crc32c 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val=0 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val='8192 bytes' 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val= 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val=software 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # 
case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@23 -- # accel_module=software 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val=32 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val=32 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val=1 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val=Yes 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val= 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:35.826 05:56:44 -- accel/accel.sh@21 -- # val= 00:04:35.826 05:56:44 -- accel/accel.sh@22 -- # case "$var" in 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # IFS=: 00:04:35.826 05:56:44 -- accel/accel.sh@20 -- # read -r var val 00:04:37.207 05:56:45 -- accel/accel.sh@21 -- # val= 00:04:37.207 05:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:04:37.207 05:56:45 -- accel/accel.sh@20 -- # IFS=: 00:04:37.207 05:56:45 -- accel/accel.sh@20 -- # read -r var val 00:04:37.207 05:56:45 -- accel/accel.sh@21 -- # val= 00:04:37.207 05:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:04:37.207 05:56:45 -- accel/accel.sh@20 -- # IFS=: 00:04:37.207 05:56:45 -- accel/accel.sh@20 -- # read -r var val 00:04:37.207 05:56:45 -- accel/accel.sh@21 -- # val= 00:04:37.207 05:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:04:37.207 05:56:45 -- accel/accel.sh@20 -- # IFS=: 00:04:37.207 05:56:45 -- accel/accel.sh@20 -- # read -r var val 00:04:37.207 05:56:45 -- accel/accel.sh@21 -- # val= 00:04:37.207 05:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:04:37.207 05:56:45 -- accel/accel.sh@20 -- # IFS=: 00:04:37.207 05:56:45 -- accel/accel.sh@20 -- # read -r var val 00:04:37.207 05:56:45 -- accel/accel.sh@21 -- # val= 00:04:37.207 05:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:04:37.207 05:56:45 -- accel/accel.sh@20 -- # IFS=: 00:04:37.207 05:56:45 -- accel/accel.sh@20 -- # read -r var val 00:04:37.207 05:56:45 -- accel/accel.sh@21 -- # val= 00:04:37.207 05:56:45 -- accel/accel.sh@22 -- # case "$var" in 00:04:37.207 05:56:45 -- accel/accel.sh@20 -- # IFS=: 00:04:37.207 05:56:45 -- accel/accel.sh@20 -- # read -r var val 00:04:37.207 05:56:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:37.207 05:56:45 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:04:37.207 05:56:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:37.207 00:04:37.207 real 0m3.364s 00:04:37.207 user 0m2.445s 
00:04:37.207 sys 0m0.935s 00:04:37.207 05:56:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.207 05:56:45 -- common/autotest_common.sh@10 -- # set +x 00:04:37.207 ************************************ 00:04:37.207 END TEST accel_copy_crc32c_C2 00:04:37.207 ************************************ 00:04:37.207 05:56:45 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:04:37.207 05:56:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:37.207 05:56:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.207 05:56:45 -- common/autotest_common.sh@10 -- # set +x 00:04:37.207 ************************************ 00:04:37.207 START TEST accel_dualcast 00:04:37.207 ************************************ 00:04:37.207 05:56:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:04:37.207 05:56:45 -- accel/accel.sh@16 -- # local accel_opc 00:04:37.207 05:56:45 -- accel/accel.sh@17 -- # local accel_module 00:04:37.207 05:56:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:04:37.207 05:56:45 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.0PJb40 -t 1 -w dualcast -y 00:04:37.207 [2024-05-13 05:56:45.362165] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:37.207 [2024-05-13 05:56:45.362461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:37.775 EAL: TSC is not safe to use in SMP mode 00:04:37.775 EAL: TSC is not invariant 00:04:37.775 [2024-05-13 05:56:45.795438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.775 [2024-05-13 05:56:45.909804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.775 05:56:45 -- accel/accel.sh@12 -- # build_accel_config 00:04:37.775 05:56:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:37.775 05:56:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:37.775 05:56:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:37.775 05:56:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:37.775 05:56:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:37.775 05:56:45 -- accel/accel.sh@41 -- # local IFS=, 00:04:37.775 05:56:45 -- accel/accel.sh@42 -- # jq -r . 00:04:39.159 05:56:47 -- accel/accel.sh@18 -- # out=' 00:04:39.159 SPDK Configuration: 00:04:39.159 Core mask: 0x1 00:04:39.159 00:04:39.159 Accel Perf Configuration: 00:04:39.159 Workload Type: dualcast 00:04:39.159 Transfer size: 4096 bytes 00:04:39.159 Vector count 1 00:04:39.159 Module: software 00:04:39.159 Queue depth: 32 00:04:39.159 Allocate depth: 32 00:04:39.159 # threads/core: 1 00:04:39.159 Run time: 1 seconds 00:04:39.159 Verify: Yes 00:04:39.159 00:04:39.159 Running for 1 seconds... 
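The chained-CRC test passes in 3.364 s and accel_dualcast starts; in the accel framework a dualcast writes one source buffer to two destination buffers in a single operation, which is why it gets its own opcode rather than two queued copies. Stand-alone form, flags copied from the run_test line above:

    ./build/examples/accel_perf -t 1 -w dualcast -y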
00:04:39.159 00:04:39.159 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:39.159 ------------------------------------------------------------------------------------ 00:04:39.159 0,0 1647296/s 6434 MiB/s 0 0 00:04:39.159 ==================================================================================== 00:04:39.159 Total 1647296/s 6434 MiB/s 0 0' 00:04:39.159 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.159 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.159 05:56:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:04:39.159 05:56:47 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.H9oxiO -t 1 -w dualcast -y 00:04:39.159 [2024-05-13 05:56:47.124722] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:39.159 [2024-05-13 05:56:47.125085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:39.419 EAL: TSC is not safe to use in SMP mode 00:04:39.419 EAL: TSC is not invariant 00:04:39.419 [2024-05-13 05:56:47.559726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.419 [2024-05-13 05:56:47.671998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.419 05:56:47 -- accel/accel.sh@12 -- # build_accel_config 00:04:39.419 05:56:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:39.419 05:56:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:39.419 05:56:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:39.419 05:56:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:39.419 05:56:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:39.419 05:56:47 -- accel/accel.sh@41 -- # local IFS=, 00:04:39.419 05:56:47 -- accel/accel.sh@42 -- # jq -r . 
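The dualcast pass reports 1647296 transfers/s. The MiB/s column again counts the 4096-byte transfer size once (integer-truncated from 6434.75), apparently not the doubled write traffic to the two destinations:

    echo $((1647296 * 4096 / 1048576))   # 6434 -> matches the reported MiB/s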
00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val= 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val= 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val=0x1 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val= 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val= 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val=dualcast 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val= 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val=software 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@23 -- # accel_module=software 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val=32 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val=32 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val=1 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val=Yes 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val= 00:04:39.419 05:56:47 -- 
accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:39.419 05:56:47 -- accel/accel.sh@21 -- # val= 00:04:39.419 05:56:47 -- accel/accel.sh@22 -- # case "$var" in 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # IFS=: 00:04:39.419 05:56:47 -- accel/accel.sh@20 -- # read -r var val 00:04:40.799 05:56:48 -- accel/accel.sh@21 -- # val= 00:04:40.799 05:56:48 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.799 05:56:48 -- accel/accel.sh@20 -- # IFS=: 00:04:40.799 05:56:48 -- accel/accel.sh@20 -- # read -r var val 00:04:40.799 05:56:48 -- accel/accel.sh@21 -- # val= 00:04:40.800 05:56:48 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.800 05:56:48 -- accel/accel.sh@20 -- # IFS=: 00:04:40.800 05:56:48 -- accel/accel.sh@20 -- # read -r var val 00:04:40.800 05:56:48 -- accel/accel.sh@21 -- # val= 00:04:40.800 05:56:48 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.800 05:56:48 -- accel/accel.sh@20 -- # IFS=: 00:04:40.800 05:56:48 -- accel/accel.sh@20 -- # read -r var val 00:04:40.800 05:56:48 -- accel/accel.sh@21 -- # val= 00:04:40.800 05:56:48 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.800 05:56:48 -- accel/accel.sh@20 -- # IFS=: 00:04:40.800 05:56:48 -- accel/accel.sh@20 -- # read -r var val 00:04:40.800 05:56:48 -- accel/accel.sh@21 -- # val= 00:04:40.800 05:56:48 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.800 05:56:48 -- accel/accel.sh@20 -- # IFS=: 00:04:40.800 05:56:48 -- accel/accel.sh@20 -- # read -r var val 00:04:40.800 05:56:48 -- accel/accel.sh@21 -- # val= 00:04:40.800 05:56:48 -- accel/accel.sh@22 -- # case "$var" in 00:04:40.800 05:56:48 -- accel/accel.sh@20 -- # IFS=: 00:04:40.800 05:56:48 -- accel/accel.sh@20 -- # read -r var val 00:04:40.800 05:56:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:40.800 05:56:48 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:04:40.800 05:56:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:40.800 00:04:40.800 real 0m3.524s 00:04:40.800 user 0m2.550s 00:04:40.800 sys 0m0.988s 00:04:40.800 05:56:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.800 05:56:48 -- common/autotest_common.sh@10 -- # set +x 00:04:40.800 ************************************ 00:04:40.800 END TEST accel_dualcast 00:04:40.800 ************************************ 00:04:40.800 05:56:48 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:04:40.800 05:56:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:40.800 05:56:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:40.800 05:56:48 -- common/autotest_common.sh@10 -- # set +x 00:04:40.800 ************************************ 00:04:40.800 START TEST accel_compare 00:04:40.800 ************************************ 00:04:40.800 05:56:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:04:40.800 05:56:48 -- accel/accel.sh@16 -- # local accel_opc 00:04:40.800 05:56:48 -- accel/accel.sh@17 -- # local accel_module 00:04:40.800 05:56:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:04:40.800 05:56:48 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.pxQjeo -t 1 -w compare -y 00:04:40.800 [2024-05-13 05:56:48.922907] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
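accel_dualcast completes in 3.524 s and accel_compare begins; compare checks two buffers for equality, which is presumably what the Failed and Miscompares columns in these reports would count if verification ever tripped. Stand-alone form, flags copied from the run_test line above:

    ./build/examples/accel_perf -t 1 -w compare -y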
00:04:40.800 [2024-05-13 05:56:48.923111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:41.059 EAL: TSC is not safe to use in SMP mode 00:04:41.059 EAL: TSC is not invariant 00:04:41.059 [2024-05-13 05:56:49.362227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.318 [2024-05-13 05:56:49.489340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.318 05:56:49 -- accel/accel.sh@12 -- # build_accel_config 00:04:41.318 05:56:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:41.318 05:56:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:41.318 05:56:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:41.318 05:56:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:41.318 05:56:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:41.318 05:56:49 -- accel/accel.sh@41 -- # local IFS=, 00:04:41.318 05:56:49 -- accel/accel.sh@42 -- # jq -r . 00:04:42.696 05:56:50 -- accel/accel.sh@18 -- # out=' 00:04:42.696 SPDK Configuration: 00:04:42.696 Core mask: 0x1 00:04:42.696 00:04:42.696 Accel Perf Configuration: 00:04:42.696 Workload Type: compare 00:04:42.696 Transfer size: 4096 bytes 00:04:42.696 Vector count 1 00:04:42.696 Module: software 00:04:42.696 Queue depth: 32 00:04:42.696 Allocate depth: 32 00:04:42.696 # threads/core: 1 00:04:42.696 Run time: 1 seconds 00:04:42.696 Verify: Yes 00:04:42.696 00:04:42.696 Running for 1 seconds... 00:04:42.696 00:04:42.696 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:42.696 ------------------------------------------------------------------------------------ 00:04:42.696 0,0 3062880/s 11964 MiB/s 0 0 00:04:42.696 ==================================================================================== 00:04:42.696 Total 3062880/s 11964 MiB/s 0 0' 00:04:42.696 05:56:50 -- accel/accel.sh@20 -- # IFS=: 00:04:42.696 05:56:50 -- accel/accel.sh@20 -- # read -r var val 00:04:42.696 05:56:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:04:42.696 05:56:50 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.wC9qnF -t 1 -w compare -y 00:04:42.696 [2024-05-13 05:56:50.705231] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:42.696 [2024-05-13 05:56:50.705598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:42.956 EAL: TSC is not safe to use in SMP mode 00:04:42.956 EAL: TSC is not invariant 00:04:42.956 [2024-05-13 05:56:51.131764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.956 [2024-05-13 05:56:51.257095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.956 05:56:51 -- accel/accel.sh@12 -- # build_accel_config 00:04:42.956 05:56:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:42.956 05:56:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:42.956 05:56:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:42.956 05:56:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:42.956 05:56:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:42.956 05:56:51 -- accel/accel.sh@41 -- # local IFS=, 00:04:42.956 05:56:51 -- accel/accel.sh@42 -- # jq -r . 
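compare is the fastest software op in this run at nearly 12 GiB/s, essentially matching plain fill, which fits a memcmp-style loop that writes nothing back. The usual arithmetic check:

    echo $((3062880 * 4096 / 1048576))   # 11964 -> matches the reported MiB/s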
00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val= 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val= 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val=0x1 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val= 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val= 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val=compare 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@24 -- # accel_opc=compare 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val= 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val=software 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@23 -- # accel_module=software 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val=32 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val=32 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val=1 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val=Yes 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val= 00:04:43.217 05:56:51 -- 
accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:43.217 05:56:51 -- accel/accel.sh@21 -- # val= 00:04:43.217 05:56:51 -- accel/accel.sh@22 -- # case "$var" in 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # IFS=: 00:04:43.217 05:56:51 -- accel/accel.sh@20 -- # read -r var val 00:04:44.156 05:56:52 -- accel/accel.sh@21 -- # val= 00:04:44.156 05:56:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:44.156 05:56:52 -- accel/accel.sh@20 -- # IFS=: 00:04:44.156 05:56:52 -- accel/accel.sh@20 -- # read -r var val 00:04:44.156 05:56:52 -- accel/accel.sh@21 -- # val= 00:04:44.156 05:56:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:44.156 05:56:52 -- accel/accel.sh@20 -- # IFS=: 00:04:44.156 05:56:52 -- accel/accel.sh@20 -- # read -r var val 00:04:44.156 05:56:52 -- accel/accel.sh@21 -- # val= 00:04:44.156 05:56:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:44.156 05:56:52 -- accel/accel.sh@20 -- # IFS=: 00:04:44.156 05:56:52 -- accel/accel.sh@20 -- # read -r var val 00:04:44.156 05:56:52 -- accel/accel.sh@21 -- # val= 00:04:44.156 05:56:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:44.156 05:56:52 -- accel/accel.sh@20 -- # IFS=: 00:04:44.156 05:56:52 -- accel/accel.sh@20 -- # read -r var val 00:04:44.156 05:56:52 -- accel/accel.sh@21 -- # val= 00:04:44.156 05:56:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:44.156 05:56:52 -- accel/accel.sh@20 -- # IFS=: 00:04:44.156 05:56:52 -- accel/accel.sh@20 -- # read -r var val 00:04:44.156 05:56:52 -- accel/accel.sh@21 -- # val= 00:04:44.156 05:56:52 -- accel/accel.sh@22 -- # case "$var" in 00:04:44.156 05:56:52 -- accel/accel.sh@20 -- # IFS=: 00:04:44.157 05:56:52 -- accel/accel.sh@20 -- # read -r var val 00:04:44.157 05:56:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:44.157 05:56:52 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:04:44.157 05:56:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:44.157 00:04:44.157 real 0m3.545s 00:04:44.157 user 0m2.599s 00:04:44.157 sys 0m0.960s 00:04:44.157 05:56:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.157 05:56:52 -- common/autotest_common.sh@10 -- # set +x 00:04:44.157 ************************************ 00:04:44.157 END TEST accel_compare 00:04:44.157 ************************************ 00:04:44.418 05:56:52 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:04:44.418 05:56:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:44.418 05:56:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.418 05:56:52 -- common/autotest_common.sh@10 -- # set +x 00:04:44.418 ************************************ 00:04:44.418 START TEST accel_xor 00:04:44.418 ************************************ 00:04:44.418 05:56:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:04:44.418 05:56:52 -- accel/accel.sh@16 -- # local accel_opc 00:04:44.418 05:56:52 -- accel/accel.sh@17 -- # local accel_module 00:04:44.418 05:56:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:04:44.418 05:56:52 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.DTMuyb -t 1 -w xor -y 00:04:44.418 [2024-05-13 05:56:52.521317] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
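accel_compare finishes in 3.545 s and the xor tests begin. This first run uses the default two source buffers (the report prints Source buffers: 2); the follow-on test at the very end of this section passes -x 3 to xor three sources instead. Stand-alone forms of both, flags copied from the respective run_test lines:

    ./build/examples/accel_perf -t 1 -w xor -y         # default: 2 source buffers
    ./build/examples/accel_perf -t 1 -w xor -y -x 3    # the follow-on run, 3 sources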
00:04:44.418 [2024-05-13 05:56:52.521687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:44.677 EAL: TSC is not safe to use in SMP mode 00:04:44.677 EAL: TSC is not invariant 00:04:44.677 [2024-05-13 05:56:52.948566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.937 [2024-05-13 05:56:53.067237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.937 05:56:53 -- accel/accel.sh@12 -- # build_accel_config 00:04:44.938 05:56:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:44.938 05:56:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:44.938 05:56:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:44.938 05:56:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:44.938 05:56:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:44.938 05:56:53 -- accel/accel.sh@41 -- # local IFS=, 00:04:44.938 05:56:53 -- accel/accel.sh@42 -- # jq -r . 00:04:46.318 05:56:54 -- accel/accel.sh@18 -- # out=' 00:04:46.318 SPDK Configuration: 00:04:46.318 Core mask: 0x1 00:04:46.318 00:04:46.318 Accel Perf Configuration: 00:04:46.318 Workload Type: xor 00:04:46.318 Source buffers: 2 00:04:46.318 Transfer size: 4096 bytes 00:04:46.318 Vector count 1 00:04:46.318 Module: software 00:04:46.318 Queue depth: 32 00:04:46.318 Allocate depth: 32 00:04:46.318 # threads/core: 1 00:04:46.318 Run time: 1 seconds 00:04:46.318 Verify: Yes 00:04:46.318 00:04:46.318 Running for 1 seconds... 00:04:46.318 00:04:46.318 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:46.318 ------------------------------------------------------------------------------------ 00:04:46.318 0,0 2138976/s 8355 MiB/s 0 0 00:04:46.318 ==================================================================================== 00:04:46.318 Total 2138976/s 8355 MiB/s 0 0' 00:04:46.318 05:56:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:04:46.318 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.318 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.318 05:56:54 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.xfuhlH -t 1 -w xor -y 00:04:46.318 [2024-05-13 05:56:54.275713] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:46.318 [2024-05-13 05:56:54.275971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:46.578 EAL: TSC is not safe to use in SMP mode 00:04:46.578 EAL: TSC is not invariant 00:04:46.578 [2024-05-13 05:56:54.703002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.578 [2024-05-13 05:56:54.819325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.578 05:56:54 -- accel/accel.sh@12 -- # build_accel_config 00:04:46.578 05:56:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:46.578 05:56:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:46.578 05:56:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:46.578 05:56:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:46.578 05:56:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:46.578 05:56:54 -- accel/accel.sh@41 -- # local IFS=, 00:04:46.578 05:56:54 -- accel/accel.sh@42 -- # jq -r . 
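The two-source xor pass falls between copy_crc32c and compare at 2138976 transfers/s; the bandwidth column follows the same rule as every other table here:

    echo $((2138976 * 4096 / 1048576))   # 8355 -> matches the reported MiB/s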
00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val= 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val= 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val=0x1 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val= 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val= 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val=xor 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@24 -- # accel_opc=xor 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val=2 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val= 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val=software 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@23 -- # accel_module=software 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val=32 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val=32 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val=1 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val=Yes 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # 
case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val= 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:46.578 05:56:54 -- accel/accel.sh@21 -- # val= 00:04:46.578 05:56:54 -- accel/accel.sh@22 -- # case "$var" in 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # IFS=: 00:04:46.578 05:56:54 -- accel/accel.sh@20 -- # read -r var val 00:04:47.959 05:56:56 -- accel/accel.sh@21 -- # val= 00:04:47.959 05:56:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.959 05:56:56 -- accel/accel.sh@20 -- # IFS=: 00:04:47.959 05:56:56 -- accel/accel.sh@20 -- # read -r var val 00:04:47.959 05:56:56 -- accel/accel.sh@21 -- # val= 00:04:47.959 05:56:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.959 05:56:56 -- accel/accel.sh@20 -- # IFS=: 00:04:47.959 05:56:56 -- accel/accel.sh@20 -- # read -r var val 00:04:47.959 05:56:56 -- accel/accel.sh@21 -- # val= 00:04:47.959 05:56:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.959 05:56:56 -- accel/accel.sh@20 -- # IFS=: 00:04:47.959 05:56:56 -- accel/accel.sh@20 -- # read -r var val 00:04:47.959 05:56:56 -- accel/accel.sh@21 -- # val= 00:04:47.959 05:56:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.959 05:56:56 -- accel/accel.sh@20 -- # IFS=: 00:04:47.959 05:56:56 -- accel/accel.sh@20 -- # read -r var val 00:04:47.959 05:56:56 -- accel/accel.sh@21 -- # val= 00:04:47.959 05:56:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.959 05:56:56 -- accel/accel.sh@20 -- # IFS=: 00:04:47.959 05:56:56 -- accel/accel.sh@20 -- # read -r var val 00:04:47.959 05:56:56 -- accel/accel.sh@21 -- # val= 00:04:47.959 05:56:56 -- accel/accel.sh@22 -- # case "$var" in 00:04:47.959 05:56:56 -- accel/accel.sh@20 -- # IFS=: 00:04:47.959 05:56:56 -- accel/accel.sh@20 -- # read -r var val 00:04:47.959 05:56:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:47.959 05:56:56 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:04:47.959 05:56:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:47.959 00:04:47.959 real 0m3.508s 00:04:47.959 user 0m2.585s 00:04:47.959 sys 0m0.935s 00:04:47.959 05:56:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.959 05:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:47.959 ************************************ 00:04:47.959 END TEST accel_xor 00:04:47.959 ************************************ 00:04:47.959 05:56:56 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:04:47.959 05:56:56 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:04:47.959 05:56:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.959 05:56:56 -- common/autotest_common.sh@10 -- # set +x 00:04:47.959 ************************************ 00:04:47.959 START TEST accel_xor 00:04:47.959 ************************************ 00:04:47.959 05:56:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:04:47.959 05:56:56 -- accel/accel.sh@16 -- # local accel_opc 00:04:47.959 05:56:56 -- accel/accel.sh@17 -- # local accel_module 00:04:47.959 05:56:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:04:47.959 05:56:56 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.bEgVkr -t 1 -w xor -y -x 3 00:04:47.959 [2024-05-13 05:56:56.075081] Starting SPDK v24.01.1-pre 
git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:47.959 [2024-05-13 05:56:56.075390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:48.219 EAL: TSC is not safe to use in SMP mode 00:04:48.219 EAL: TSC is not invariant 00:04:48.219 [2024-05-13 05:56:56.502415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.478 [2024-05-13 05:56:56.617278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.478 05:56:56 -- accel/accel.sh@12 -- # build_accel_config 00:04:48.478 05:56:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:48.478 05:56:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:48.478 05:56:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:48.478 05:56:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:48.478 05:56:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:48.478 05:56:56 -- accel/accel.sh@41 -- # local IFS=, 00:04:48.478 05:56:56 -- accel/accel.sh@42 -- # jq -r . 00:04:49.859 05:56:57 -- accel/accel.sh@18 -- # out=' 00:04:49.859 SPDK Configuration: 00:04:49.859 Core mask: 0x1 00:04:49.859 00:04:49.859 Accel Perf Configuration: 00:04:49.859 Workload Type: xor 00:04:49.859 Source buffers: 3 00:04:49.859 Transfer size: 4096 bytes 00:04:49.859 Vector count 1 00:04:49.859 Module: software 00:04:49.859 Queue depth: 32 00:04:49.859 Allocate depth: 32 00:04:49.859 # threads/core: 1 00:04:49.859 Run time: 1 seconds 00:04:49.859 Verify: Yes 00:04:49.859 00:04:49.859 Running for 1 seconds... 00:04:49.859 00:04:49.859 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:49.859 ------------------------------------------------------------------------------------ 00:04:49.859 0,0 1942496/s 7587 MiB/s 0 0 00:04:49.859 ==================================================================================== 00:04:49.859 Total 1942496/s 7587 MiB/s 0 0' 00:04:49.859 05:56:57 -- accel/accel.sh@20 -- # IFS=: 00:04:49.859 05:56:57 -- accel/accel.sh@20 -- # read -r var val 00:04:49.859 05:56:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:04:49.859 05:56:57 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.2cJwTv -t 1 -w xor -y -x 3 00:04:49.859 [2024-05-13 05:56:57.816709] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:49.859 [2024-05-13 05:56:57.816890] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:50.118 EAL: TSC is not safe to use in SMP mode 00:04:50.118 EAL: TSC is not invariant 00:04:50.118 [2024-05-13 05:56:58.268626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.118 [2024-05-13 05:56:58.385090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.118 05:56:58 -- accel/accel.sh@12 -- # build_accel_config 00:04:50.118 05:56:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:50.119 05:56:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.119 05:56:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.119 05:56:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:50.119 05:56:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:50.119 05:56:58 -- accel/accel.sh@41 -- # local IFS=, 00:04:50.119 05:56:58 -- accel/accel.sh@42 -- # jq -r . 
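The dense 'IFS=:' / 'read -r var val' / 'case "$var" in' triples that fill these traces are a single parsing loop in accel.sh, printed once per iteration by xtrace: after each accel_perf run, the captured $out is re-read line by line, split on ':', and the keys of interest are latched (the trace shows accel_opc=xor and accel_module=software being set this way). A minimal sketch of the loop's shape, reconstructed from the trace rather than quoted from the script:

    # Sketch only: reconstructed from the xtrace above, not the verbatim accel.sh.
    while IFS=: read -r var val; do
        case "$var" in
            *'Workload Type'*) accel_opc=${val##* } ;;    # trace: accel_opc=xor
            *Module*)          accel_module=${val##* } ;; # trace: accel_module=software
        esac
    done <<< "$out"   # $out holds the accel_perf output captured by the harness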
00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val= 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val= 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val=0x1 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val= 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val= 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val=xor 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@24 -- # accel_opc=xor 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val=3 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val= 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val=software 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@23 -- # accel_module=software 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val=32 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val=32 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val=1 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val=Yes 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # 
case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val= 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:50.119 05:56:58 -- accel/accel.sh@21 -- # val= 00:04:50.119 05:56:58 -- accel/accel.sh@22 -- # case "$var" in 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # IFS=: 00:04:50.119 05:56:58 -- accel/accel.sh@20 -- # read -r var val 00:04:51.500 05:56:59 -- accel/accel.sh@21 -- # val= 00:04:51.500 05:56:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.500 05:56:59 -- accel/accel.sh@20 -- # IFS=: 00:04:51.500 05:56:59 -- accel/accel.sh@20 -- # read -r var val 00:04:51.500 05:56:59 -- accel/accel.sh@21 -- # val= 00:04:51.500 05:56:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.500 05:56:59 -- accel/accel.sh@20 -- # IFS=: 00:04:51.500 05:56:59 -- accel/accel.sh@20 -- # read -r var val 00:04:51.500 05:56:59 -- accel/accel.sh@21 -- # val= 00:04:51.500 05:56:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.500 05:56:59 -- accel/accel.sh@20 -- # IFS=: 00:04:51.500 05:56:59 -- accel/accel.sh@20 -- # read -r var val 00:04:51.500 05:56:59 -- accel/accel.sh@21 -- # val= 00:04:51.500 05:56:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.500 05:56:59 -- accel/accel.sh@20 -- # IFS=: 00:04:51.500 05:56:59 -- accel/accel.sh@20 -- # read -r var val 00:04:51.500 05:56:59 -- accel/accel.sh@21 -- # val= 00:04:51.500 05:56:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.500 05:56:59 -- accel/accel.sh@20 -- # IFS=: 00:04:51.500 05:56:59 -- accel/accel.sh@20 -- # read -r var val 00:04:51.500 05:56:59 -- accel/accel.sh@21 -- # val= 00:04:51.500 05:56:59 -- accel/accel.sh@22 -- # case "$var" in 00:04:51.500 05:56:59 -- accel/accel.sh@20 -- # IFS=: 00:04:51.500 05:56:59 -- accel/accel.sh@20 -- # read -r var val 00:04:51.500 05:56:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:51.500 05:56:59 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:04:51.500 05:56:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:51.500 00:04:51.500 real 0m3.523s 00:04:51.500 user 0m2.576s 00:04:51.500 sys 0m0.958s 00:04:51.500 05:56:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.500 05:56:59 -- common/autotest_common.sh@10 -- # set +x 00:04:51.500 ************************************ 00:04:51.500 END TEST accel_xor 00:04:51.500 ************************************ 00:04:51.500 05:56:59 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:04:51.500 05:56:59 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:51.500 05:56:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.500 05:56:59 -- common/autotest_common.sh@10 -- # set +x 00:04:51.500 ************************************ 00:04:51.500 START TEST accel_dif_verify 00:04:51.500 ************************************ 00:04:51.500 05:56:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:04:51.500 05:56:59 -- accel/accel.sh@16 -- # local accel_opc 00:04:51.500 05:56:59 -- accel/accel.sh@17 -- # local accel_module 00:04:51.500 05:56:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:04:51.500 05:56:59 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.7m9Cnc -t 1 -w dif_verify 00:04:51.500 [2024-05-13 05:56:59.643527] Starting SPDK 
v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:51.500 [2024-05-13 05:56:59.643875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:51.760 EAL: TSC is not safe to use in SMP mode 00:04:51.760 EAL: TSC is not invariant 00:04:52.020 [2024-05-13 05:57:00.069744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.020 [2024-05-13 05:57:00.185553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.020 05:57:00 -- accel/accel.sh@12 -- # build_accel_config 00:04:52.020 05:57:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:52.020 05:57:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:52.020 05:57:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:52.020 05:57:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:52.020 05:57:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:52.020 05:57:00 -- accel/accel.sh@41 -- # local IFS=, 00:04:52.020 05:57:00 -- accel/accel.sh@42 -- # jq -r . 00:04:53.401 05:57:01 -- accel/accel.sh@18 -- # out=' 00:04:53.401 SPDK Configuration: 00:04:53.401 Core mask: 0x1 00:04:53.401 00:04:53.401 Accel Perf Configuration: 00:04:53.401 Workload Type: dif_verify 00:04:53.401 Vector size: 4096 bytes 00:04:53.401 Transfer size: 4096 bytes 00:04:53.401 Block size: 512 bytes 00:04:53.401 Metadata size: 8 bytes 00:04:53.401 Vector count 1 00:04:53.401 Module: software 00:04:53.401 Queue depth: 32 00:04:53.401 Allocate depth: 32 00:04:53.401 # threads/core: 1 00:04:53.401 Run time: 1 seconds 00:04:53.401 Verify: No 00:04:53.401 00:04:53.401 Running for 1 seconds... 00:04:53.401 00:04:53.401 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:53.401 ------------------------------------------------------------------------------------ 00:04:53.401 0,0 1322560/s 5166 MiB/s 0 0 00:04:53.401 ==================================================================================== 00:04:53.401 Total 1322560/s 5166 MiB/s 0 0' 00:04:53.401 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.401 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.401 05:57:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:04:53.401 05:57:01 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.MajTUX -t 1 -w dif_verify 00:04:53.401 [2024-05-13 05:57:01.394790] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:53.401 [2024-05-13 05:57:01.395148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:53.661 EAL: TSC is not safe to use in SMP mode 00:04:53.661 EAL: TSC is not invariant 00:04:53.661 [2024-05-13 05:57:01.823019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.661 [2024-05-13 05:57:01.936067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.661 05:57:01 -- accel/accel.sh@12 -- # build_accel_config 00:04:53.661 05:57:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:53.661 05:57:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:53.661 05:57:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:53.661 05:57:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:53.661 05:57:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:53.661 05:57:01 -- accel/accel.sh@41 -- # local IFS=, 00:04:53.661 05:57:01 -- accel/accel.sh@42 -- # jq -r .
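The Bandwidth column in these tables follows directly from the Transfers column: transfers per second times the fixed 4096-byte transfer size, converted to MiB/s. A one-line sanity check for the dif_verify run above:

    # 1322560 transfers/s * 4096 B = 5,417,205,760 B/s; / 2^20 = 5166 MiB/s,
    # matching the per-core row and the Total row.
    echo "$(( 1322560 * 4096 / 1048576 )) MiB/s"    # -> 5166 MiB/s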
00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val= 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val= 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val=0x1 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val= 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val= 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val=dif_verify 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val='512 bytes' 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val='8 bytes' 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val= 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val=software 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@23 -- # accel_module=software 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val=32 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val=32 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val=1 00:04:53.661 
05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.661 05:57:01 -- accel/accel.sh@21 -- # val=No 00:04:53.661 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.661 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.921 05:57:01 -- accel/accel.sh@21 -- # val= 00:04:53.921 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.921 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.921 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:53.921 05:57:01 -- accel/accel.sh@21 -- # val= 00:04:53.921 05:57:01 -- accel/accel.sh@22 -- # case "$var" in 00:04:53.921 05:57:01 -- accel/accel.sh@20 -- # IFS=: 00:04:53.921 05:57:01 -- accel/accel.sh@20 -- # read -r var val 00:04:54.858 05:57:03 -- accel/accel.sh@21 -- # val= 00:04:54.858 05:57:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:54.858 05:57:03 -- accel/accel.sh@20 -- # IFS=: 00:04:54.858 05:57:03 -- accel/accel.sh@20 -- # read -r var val 00:04:54.858 05:57:03 -- accel/accel.sh@21 -- # val= 00:04:54.858 05:57:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:54.858 05:57:03 -- accel/accel.sh@20 -- # IFS=: 00:04:54.858 05:57:03 -- accel/accel.sh@20 -- # read -r var val 00:04:54.858 05:57:03 -- accel/accel.sh@21 -- # val= 00:04:54.858 05:57:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:54.858 05:57:03 -- accel/accel.sh@20 -- # IFS=: 00:04:54.858 05:57:03 -- accel/accel.sh@20 -- # read -r var val 00:04:54.858 05:57:03 -- accel/accel.sh@21 -- # val= 00:04:54.858 05:57:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:54.858 05:57:03 -- accel/accel.sh@20 -- # IFS=: 00:04:54.858 05:57:03 -- accel/accel.sh@20 -- # read -r var val 00:04:54.858 05:57:03 -- accel/accel.sh@21 -- # val= 00:04:54.858 05:57:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:54.858 05:57:03 -- accel/accel.sh@20 -- # IFS=: 00:04:54.858 05:57:03 -- accel/accel.sh@20 -- # read -r var val 00:04:54.858 05:57:03 -- accel/accel.sh@21 -- # val= 00:04:54.858 05:57:03 -- accel/accel.sh@22 -- # case "$var" in 00:04:54.858 05:57:03 -- accel/accel.sh@20 -- # IFS=: 00:04:54.858 05:57:03 -- accel/accel.sh@20 -- # read -r var val 00:04:54.858 05:57:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:04:54.858 05:57:03 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:04:54.858 05:57:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:54.858 00:04:54.858 real 0m3.508s 00:04:54.858 user 0m2.542s 00:04:54.858 sys 0m0.982s 00:04:54.858 05:57:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.858 05:57:03 -- common/autotest_common.sh@10 -- # set +x 00:04:54.858 ************************************ 00:04:54.858 END TEST accel_dif_verify 00:04:54.858 ************************************ 00:04:55.117 05:57:03 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:04:55.117 05:57:03 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:55.117 05:57:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.117 05:57:03 -- common/autotest_common.sh@10 -- # set +x 00:04:55.117 ************************************ 00:04:55.117 START TEST accel_dif_generate 00:04:55.117 
************************************ 00:04:55.117 05:57:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:04:55.117 05:57:03 -- accel/accel.sh@16 -- # local accel_opc 00:04:55.118 05:57:03 -- accel/accel.sh@17 -- # local accel_module 00:04:55.118 05:57:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:04:55.118 05:57:03 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.REl1Zg -t 1 -w dif_generate 00:04:55.118 [2024-05-13 05:57:03.202556] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:55.118 [2024-05-13 05:57:03.202901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:55.376 EAL: TSC is not safe to use in SMP mode 00:04:55.376 EAL: TSC is not invariant 00:04:55.376 [2024-05-13 05:57:03.640961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.634 [2024-05-13 05:57:03.766678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.634 05:57:03 -- accel/accel.sh@12 -- # build_accel_config 00:04:55.634 05:57:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:55.634 05:57:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.634 05:57:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.634 05:57:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:55.634 05:57:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:55.634 05:57:03 -- accel/accel.sh@41 -- # local IFS=, 00:04:55.634 05:57:03 -- accel/accel.sh@42 -- # jq -r . 00:04:57.011 05:57:04 -- accel/accel.sh@18 -- # out=' 00:04:57.011 SPDK Configuration: 00:04:57.011 Core mask: 0x1 00:04:57.011 00:04:57.011 Accel Perf Configuration: 00:04:57.011 Workload Type: dif_generate 00:04:57.011 Vector size: 4096 bytes 00:04:57.011 Transfer size: 4096 bytes 00:04:57.011 Block size: 512 bytes 00:04:57.011 Metadata size: 8 bytes 00:04:57.011 Vector count 1 00:04:57.011 Module: software 00:04:57.011 Queue depth: 32 00:04:57.011 Allocate depth: 32 00:04:57.011 # threads/core: 1 00:04:57.011 Run time: 1 seconds 00:04:57.011 Verify: No 00:04:57.011 00:04:57.011 Running for 1 seconds... 00:04:57.011 00:04:57.011 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:57.011 ------------------------------------------------------------------------------------ 00:04:57.011 0,0 1495840/s 5843 MiB/s 0 0 00:04:57.011 ==================================================================================== 00:04:57.011 Total 1495840/s 5843 MiB/s 0 0' 00:04:57.011 05:57:04 -- accel/accel.sh@20 -- # IFS=: 00:04:57.011 05:57:04 -- accel/accel.sh@20 -- # read -r var val 00:04:57.011 05:57:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:04:57.011 05:57:04 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.oFSdTT -t 1 -w dif_generate 00:04:57.011 [2024-05-13 05:57:04.978634] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
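dif_generate is the producing side of the dif_verify workload above: the configuration shows the same 512-byte block size and 8-byte metadata size, but here the per-block protection tuple is emitted rather than checked. Assuming the usual T10 DIF layout these sizes suggest (one 8-byte tuple per 512-byte block), each 4096-byte transfer covers 8 protected blocks, i.e. 64 bytes of metadata handled per transfer:

    # Quick arithmetic for the dif_generate/dif_verify runs in this log.
    blocks_per_xfer=$(( 4096 / 512 ))          # 8 blocks per 4096 B transfer
    meta_per_xfer=$(( blocks_per_xfer * 8 ))   # 64 B of DIF tuples per transfer
    echo "$blocks_per_xfer blocks, $meta_per_xfer B metadata per transfer"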
00:04:57.011 [2024-05-13 05:57:04.978957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:57.270 EAL: TSC is not safe to use in SMP mode 00:04:57.270 EAL: TSC is not invariant 00:04:57.270 [2024-05-13 05:57:05.409395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.270 [2024-05-13 05:57:05.527237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.270 05:57:05 -- accel/accel.sh@12 -- # build_accel_config 00:04:57.270 05:57:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:57.270 05:57:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.270 05:57:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.270 05:57:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:57.270 05:57:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:57.270 05:57:05 -- accel/accel.sh@41 -- # local IFS=, 00:04:57.270 05:57:05 -- accel/accel.sh@42 -- # jq -r . 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val= 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val= 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val=0x1 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val= 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val= 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val=dif_generate 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val='512 bytes' 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val='8 bytes' 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val= 00:04:57.270 05:57:05 -- 
accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val=software 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@23 -- # accel_module=software 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val=32 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val=32 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val=1 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val=No 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val= 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:57.270 05:57:05 -- accel/accel.sh@21 -- # val= 00:04:57.270 05:57:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # IFS=: 00:04:57.270 05:57:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.647 05:57:06 -- accel/accel.sh@21 -- # val= 00:04:58.647 05:57:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.647 05:57:06 -- accel/accel.sh@20 -- # IFS=: 00:04:58.647 05:57:06 -- accel/accel.sh@20 -- # read -r var val 00:04:58.647 05:57:06 -- accel/accel.sh@21 -- # val= 00:04:58.647 05:57:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.647 05:57:06 -- accel/accel.sh@20 -- # IFS=: 00:04:58.647 05:57:06 -- accel/accel.sh@20 -- # read -r var val 00:04:58.647 05:57:06 -- accel/accel.sh@21 -- # val= 00:04:58.647 05:57:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.647 05:57:06 -- accel/accel.sh@20 -- # IFS=: 00:04:58.647 05:57:06 -- accel/accel.sh@20 -- # read -r var val 00:04:58.647 05:57:06 -- accel/accel.sh@21 -- # val= 00:04:58.647 05:57:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.647 05:57:06 -- accel/accel.sh@20 -- # IFS=: 00:04:58.647 05:57:06 -- accel/accel.sh@20 -- # read -r var val 00:04:58.647 05:57:06 -- accel/accel.sh@21 -- # val= 00:04:58.647 05:57:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.647 05:57:06 -- accel/accel.sh@20 -- # IFS=: 00:04:58.647 05:57:06 -- accel/accel.sh@20 -- # read -r var val 00:04:58.647 05:57:06 -- accel/accel.sh@21 -- # val= 00:04:58.647 05:57:06 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.647 05:57:06 -- accel/accel.sh@20 -- # IFS=: 00:04:58.647 05:57:06 -- accel/accel.sh@20 -- # read -r var val 00:04:58.647 05:57:06 -- 
accel/accel.sh@28 -- # [[ -n software ]] 00:04:58.647 05:57:06 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:04:58.647 05:57:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:58.647 00:04:58.647 real 0m3.543s 00:04:58.647 user 0m2.593s 00:04:58.647 sys 0m0.965s 00:04:58.647 05:57:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.647 05:57:06 -- common/autotest_common.sh@10 -- # set +x 00:04:58.647 ************************************ 00:04:58.647 END TEST accel_dif_generate 00:04:58.647 ************************************ 00:04:58.647 05:57:06 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:04:58.647 05:57:06 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:58.647 05:57:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.647 05:57:06 -- common/autotest_common.sh@10 -- # set +x 00:04:58.647 ************************************ 00:04:58.647 START TEST accel_dif_generate_copy 00:04:58.647 ************************************ 00:04:58.647 05:57:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:04:58.647 05:57:06 -- accel/accel.sh@16 -- # local accel_opc 00:04:58.647 05:57:06 -- accel/accel.sh@17 -- # local accel_module 00:04:58.647 05:57:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:04:58.647 05:57:06 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.5zqrzr -t 1 -w dif_generate_copy 00:04:58.647 [2024-05-13 05:57:06.796287] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:58.647 [2024-05-13 05:57:06.796661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:04:59.214 EAL: TSC is not safe to use in SMP mode 00:04:59.214 EAL: TSC is not invariant 00:04:59.214 [2024-05-13 05:57:07.227878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.214 [2024-05-13 05:57:07.342636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.214 05:57:07 -- accel/accel.sh@12 -- # build_accel_config 00:04:59.214 05:57:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:59.214 05:57:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.214 05:57:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.214 05:57:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:59.214 05:57:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:59.214 05:57:07 -- accel/accel.sh@41 -- # local IFS=, 00:04:59.214 05:57:07 -- accel/accel.sh@42 -- # jq -r . 00:05:00.593 05:57:08 -- accel/accel.sh@18 -- # out=' 00:05:00.593 SPDK Configuration: 00:05:00.593 Core mask: 0x1 00:05:00.593 00:05:00.593 Accel Perf Configuration: 00:05:00.593 Workload Type: dif_generate_copy 00:05:00.593 Vector size: 4096 bytes 00:05:00.593 Transfer size: 4096 bytes 00:05:00.593 Vector count 1 00:05:00.593 Module: software 00:05:00.593 Queue depth: 32 00:05:00.593 Allocate depth: 32 00:05:00.593 # threads/core: 1 00:05:00.593 Run time: 1 seconds 00:05:00.593 Verify: No 00:05:00.593 00:05:00.593 Running for 1 seconds... 
00:05:00.593 00:05:00.593 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:00.593 ------------------------------------------------------------------------------------ 00:05:00.593 0,0 1238784/s 4839 MiB/s 0 0 00:05:00.593 ==================================================================================== 00:05:00.593 Total 1238784/s 4839 MiB/s 0 0' 00:05:00.593 05:57:08 -- accel/accel.sh@20 -- # IFS=: 00:05:00.593 05:57:08 -- accel/accel.sh@20 -- # read -r var val 00:05:00.593 05:57:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:00.593 05:57:08 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.AJqcAF -t 1 -w dif_generate_copy 00:05:00.593 [2024-05-13 05:57:08.551926] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:00.593 [2024-05-13 05:57:08.552302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:00.852 EAL: TSC is not safe to use in SMP mode 00:05:00.852 EAL: TSC is not invariant 00:05:00.852 [2024-05-13 05:57:08.975319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.852 [2024-05-13 05:57:09.089884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.852 05:57:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:00.852 05:57:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:00.852 05:57:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.852 05:57:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.852 05:57:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:00.852 05:57:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:00.852 05:57:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:00.852 05:57:09 -- accel/accel.sh@42 -- # jq -r .
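Every case in this section is the same accel_test wrapper with a different -w workload; the harness passes only the flags visible in the trace (-t and -w, plus -x, -l or -y where the workload needs them), and the configuration blocks show queue depth, allocate depth and transfer size falling back to 32/32/4096. The dif_generate_copy run can be reproduced by hand with the same binary path, omitting only the per-run JSON config the harness generates:

    # Hand-run equivalent of the invocation above; the harness-generated
    # -c /tmp//sh-np.* config file is omitted.
    /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy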
00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val= 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val= 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val=0x1 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val= 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val= 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val= 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val=software 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@23 -- # accel_module=software 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val=32 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val=32 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val=1 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val=No 
00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val= 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:00.852 05:57:09 -- accel/accel.sh@21 -- # val= 00:05:00.852 05:57:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # IFS=: 00:05:00.852 05:57:09 -- accel/accel.sh@20 -- # read -r var val 00:05:02.238 05:57:10 -- accel/accel.sh@21 -- # val= 00:05:02.238 05:57:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:02.238 05:57:10 -- accel/accel.sh@20 -- # IFS=: 00:05:02.238 05:57:10 -- accel/accel.sh@20 -- # read -r var val 00:05:02.238 05:57:10 -- accel/accel.sh@21 -- # val= 00:05:02.238 05:57:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:02.238 05:57:10 -- accel/accel.sh@20 -- # IFS=: 00:05:02.238 05:57:10 -- accel/accel.sh@20 -- # read -r var val 00:05:02.238 05:57:10 -- accel/accel.sh@21 -- # val= 00:05:02.238 05:57:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:02.238 05:57:10 -- accel/accel.sh@20 -- # IFS=: 00:05:02.238 05:57:10 -- accel/accel.sh@20 -- # read -r var val 00:05:02.238 05:57:10 -- accel/accel.sh@21 -- # val= 00:05:02.238 05:57:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:02.238 05:57:10 -- accel/accel.sh@20 -- # IFS=: 00:05:02.238 05:57:10 -- accel/accel.sh@20 -- # read -r var val 00:05:02.238 05:57:10 -- accel/accel.sh@21 -- # val= 00:05:02.238 05:57:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:02.238 05:57:10 -- accel/accel.sh@20 -- # IFS=: 00:05:02.238 05:57:10 -- accel/accel.sh@20 -- # read -r var val 00:05:02.238 05:57:10 -- accel/accel.sh@21 -- # val= 00:05:02.238 05:57:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:02.238 05:57:10 -- accel/accel.sh@20 -- # IFS=: 00:05:02.238 05:57:10 -- accel/accel.sh@20 -- # read -r var val 00:05:02.238 05:57:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:02.238 05:57:10 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:05:02.238 05:57:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:02.238 00:05:02.238 real 0m3.509s 00:05:02.238 user 0m2.554s 00:05:02.238 sys 0m0.969s 00:05:02.238 05:57:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.238 05:57:10 -- common/autotest_common.sh@10 -- # set +x 00:05:02.238 ************************************ 00:05:02.238 END TEST accel_dif_generate_copy 00:05:02.238 ************************************ 00:05:02.238 05:57:10 -- accel/accel.sh@107 -- # [[ y == y ]] 00:05:02.238 05:57:10 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:02.238 05:57:10 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:02.238 05:57:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.238 05:57:10 -- common/autotest_common.sh@10 -- # set +x 00:05:02.238 ************************************ 00:05:02.238 START TEST accel_comp 00:05:02.238 ************************************ 00:05:02.238 05:57:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:02.238 05:57:10 -- accel/accel.sh@16 -- # local accel_opc 00:05:02.238 05:57:10 -- accel/accel.sh@17 -- # local accel_module 00:05:02.238 05:57:10 -- accel/accel.sh@18 -- # accel_perf -t 
1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:02.238 05:57:10 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.jgNxFr -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:02.238 [2024-05-13 05:57:10.354500] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:02.238 [2024-05-13 05:57:10.354873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:02.498 EAL: TSC is not safe to use in SMP mode 00:05:02.498 EAL: TSC is not invariant 00:05:02.498 [2024-05-13 05:57:10.784136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.757 [2024-05-13 05:57:10.902324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.757 05:57:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:02.757 05:57:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:02.757 05:57:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.757 05:57:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.757 05:57:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:02.757 05:57:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:02.757 05:57:10 -- accel/accel.sh@41 -- # local IFS=, 00:05:02.757 05:57:10 -- accel/accel.sh@42 -- # jq -r . 00:05:04.137 05:57:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:04.137 00:05:04.137 SPDK Configuration: 00:05:04.137 Core mask: 0x1 00:05:04.137 00:05:04.137 Accel Perf Configuration: 00:05:04.137 Workload Type: compress 00:05:04.137 Transfer size: 4096 bytes 00:05:04.137 Vector count 1 00:05:04.137 Module: software 00:05:04.137 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:04.137 Queue depth: 32 00:05:04.137 Allocate depth: 32 00:05:04.137 # threads/core: 1 00:05:04.137 Run time: 1 seconds 00:05:04.137 Verify: No 00:05:04.137 00:05:04.137 Running for 1 seconds... 00:05:04.137 00:05:04.137 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:04.137 ------------------------------------------------------------------------------------ 00:05:04.137 0,0 64480/s 251 MiB/s 0 0 00:05:04.138 ==================================================================================== 00:05:04.138 Total 64480/s 251 MiB/s 0 0' 00:05:04.138 05:57:12 -- accel/accel.sh@20 -- # IFS=: 00:05:04.138 05:57:12 -- accel/accel.sh@20 -- # read -r var val 00:05:04.138 05:57:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:04.138 05:57:12 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.thTpey -t 1 -w compress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:04.138 [2024-05-13 05:57:12.116459] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
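The compress test is the first in this section to feed real file data: -l points accel_perf at test/accel/bib as the compression input, and throughput drops to 251 MiB/s against the multi-GiB/s memory-only workloads. The accel_decomp test that follows reuses the same file and adds -y, so each output buffer is verified, which is why its configuration block reports Verify: Yes. The pair as invoked in this log, minus the harness -c config:

    # The two file-backed runs from this section.
    bib=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib
    /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l "$bib"
    /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l "$bib" -y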
00:05:04.138 [2024-05-13 05:57:12.116839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:04.398 EAL: TSC is not safe to use in SMP mode
00:05:04.398 EAL: TSC is not invariant
00:05:04.398 [2024-05-13 05:57:12.543403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:04.398 [2024-05-13 05:57:12.658024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.398 05:57:12 -- accel/accel.sh@12 -- # build_accel_config
00:05:04.398 05:57:12 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:04.398 05:57:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:04.398 05:57:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:04.398 05:57:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:04.398 05:57:12 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:04.398 05:57:12 -- accel/accel.sh@41 -- # local IFS=,
00:05:04.398 05:57:12 -- accel/accel.sh@42 -- # jq -r .
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=0x1
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=compress
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@24 -- # accel_opc=compress
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val='4096 bytes'
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=software
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@23 -- # accel_module=software
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=32
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=32
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=1
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val='1 seconds'
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=No
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:04.398 05:57:12 -- accel/accel.sh@21 -- # val=
00:05:04.398 05:57:12 -- accel/accel.sh@22 -- # case "$var" in
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # IFS=:
00:05:04.398 05:57:12 -- accel/accel.sh@20 -- # read -r var val
00:05:05.777 05:57:13 -- accel/accel.sh@21 -- # val=
00:05:05.777 05:57:13 -- accel/accel.sh@22 -- # case "$var" in
00:05:05.777 05:57:13 -- accel/accel.sh@20 -- # IFS=:
00:05:05.777 05:57:13 -- accel/accel.sh@20 -- # read -r var val
00:05:05.777 05:57:13 -- accel/accel.sh@21 -- # val=
00:05:05.777 05:57:13 -- accel/accel.sh@22 -- # case "$var" in
00:05:05.777 05:57:13 -- accel/accel.sh@20 -- # IFS=:
00:05:05.777 05:57:13 -- accel/accel.sh@20 -- # read -r var val
00:05:05.777 05:57:13 -- accel/accel.sh@21 -- # val=
00:05:05.777 05:57:13 -- accel/accel.sh@22 -- # case "$var" in
00:05:05.777 05:57:13 -- accel/accel.sh@20 -- # IFS=:
00:05:05.777 05:57:13 -- accel/accel.sh@20 -- # read -r var val
00:05:05.777 05:57:13 -- accel/accel.sh@21 -- # val=
00:05:05.777 05:57:13 -- accel/accel.sh@22 -- # case "$var" in
00:05:05.777 05:57:13 -- accel/accel.sh@20 -- # IFS=:
00:05:05.777 05:57:13 -- accel/accel.sh@20 -- # read -r var val
00:05:05.777 05:57:13 -- accel/accel.sh@21 -- # val=
00:05:05.777 05:57:13 -- accel/accel.sh@22 -- # case "$var" in
00:05:05.777 05:57:13 -- accel/accel.sh@20 -- # IFS=:
00:05:05.777 05:57:13 -- accel/accel.sh@20 -- # read -r var val
00:05:05.777 05:57:13 -- accel/accel.sh@21 -- # val=
00:05:05.777 05:57:13 -- accel/accel.sh@22 -- # case "$var" in
00:05:05.777 05:57:13 -- accel/accel.sh@20 -- # IFS=:
00:05:05.777 05:57:13 -- accel/accel.sh@20 -- # read -r var val
00:05:05.777 05:57:13 -- accel/accel.sh@28 -- # [[ -n software ]]
00:05:05.777 05:57:13 -- accel/accel.sh@28 -- # [[ -n compress ]]
00:05:05.777 05:57:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:05.777
00:05:05.777 real 0m3.525s
00:05:05.777 user 0m2.582s
00:05:05.777 sys 0m0.953s
00:05:05.777 05:57:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:05.777 05:57:13 -- common/autotest_common.sh@10 -- # set +x
00:05:05.777 ************************************
00:05:05.777 END TEST accel_comp
00:05:05.777 ************************************
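The accel_comp case above drives the software compress path through the accel_perf example binary; the traced val= assignments are just accel.sh parsing its own arguments (workload compress, 4096-byte transfers, queue and allocate depth 32, a 1-second run, no verification). A minimal sketch of the same invocation, assuming the SPDK checkout path recorded in the trace and omitting the -c JSON config that accel.sh generates into a temp file:

cd /usr/home/vagrant/spdk_repo/spdk
# -t 1: run for 1 second; -w compress: workload type; -l: input file to compress
./build/examples/accel_perf -t 1 -w compress -l test/accel/bib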
00:05:05.777 05:57:13 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:05.777 05:57:13 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']'
00:05:05.777 05:57:13 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:05.777 05:57:13 -- common/autotest_common.sh@10 -- # set +x
00:05:05.777 ************************************
00:05:05.777 START TEST accel_decomp
00:05:05.777 ************************************
00:05:05.777 05:57:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:05.777 05:57:13 -- accel/accel.sh@16 -- # local accel_opc
00:05:05.777 05:57:13 -- accel/accel.sh@17 -- # local accel_module
00:05:05.777 05:57:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:05.777 05:57:13 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.8C0UUy -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:05.777 [2024-05-13 05:57:13.939648] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:05:05.777 [2024-05-13 05:57:13.939960] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:06.344 EAL: TSC is not safe to use in SMP mode
00:05:06.344 EAL: TSC is not invariant
00:05:06.344 [2024-05-13 05:57:14.367768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:06.344 [2024-05-13 05:57:14.480545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:06.344 05:57:14 -- accel/accel.sh@12 -- # build_accel_config
00:05:06.344 05:57:14 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:06.344 05:57:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:06.344 05:57:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:06.344 05:57:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:06.344 05:57:14 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:06.344 05:57:14 -- accel/accel.sh@41 -- # local IFS=,
00:05:06.344 05:57:14 -- accel/accel.sh@42 -- # jq -r .
00:05:07.719 05:57:15 -- accel/accel.sh@18 -- # out='Preparing input file...
00:05:07.719
00:05:07.719 SPDK Configuration:
00:05:07.719 Core mask: 0x1
00:05:07.719
00:05:07.719 Accel Perf Configuration:
00:05:07.719 Workload Type: decompress
00:05:07.719 Transfer size: 4096 bytes
00:05:07.719 Vector count 1
00:05:07.719 Module: software
00:05:07.719 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:07.719 Queue depth: 32
00:05:07.719 Allocate depth: 32
00:05:07.719 # threads/core: 1
00:05:07.719 Run time: 1 seconds
00:05:07.719 Verify: Yes
00:05:07.719
00:05:07.719 Running for 1 seconds...
00:05:07.719
00:05:07.719 Core,Thread Transfers Bandwidth Failed Miscompares
00:05:07.719 ------------------------------------------------------------------------------------
00:05:07.719 0,0 88736/s 163 MiB/s 0 0
00:05:07.719 ====================================================================================
00:05:07.719 Total 88736/s 346 MiB/s 0 0'
00:05:07.719 05:57:15 -- accel/accel.sh@20 -- # IFS=:
00:05:07.719 05:57:15 -- accel/accel.sh@20 -- # read -r var val
00:05:07.719 05:57:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:07.719 05:57:15 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.zCcCno -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y
00:05:07.719 [2024-05-13 05:57:15.687780] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:05:07.719 [2024-05-13 05:57:15.687967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:07.977 EAL: TSC is not safe to use in SMP mode
00:05:07.977 EAL: TSC is not invariant
00:05:07.977 [2024-05-13 05:57:16.132386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:07.977 [2024-05-13 05:57:16.264373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:07.977 05:57:16 -- accel/accel.sh@12 -- # build_accel_config
00:05:07.977 05:57:16 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:07.977 05:57:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:07.977 05:57:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:07.977 05:57:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:07.977 05:57:16 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:07.977 05:57:16 -- accel/accel.sh@41 -- # local IFS=,
00:05:07.977 05:57:16 -- accel/accel.sh@42 -- # jq -r .
00:05:07.977 05:57:16 -- accel/accel.sh@21 -- # val=
00:05:07.977 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:07.977 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:07.977 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.234 05:57:16 -- accel/accel.sh@21 -- # val=
00:05:08.234 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.234 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.234 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.234 05:57:16 -- accel/accel.sh@21 -- # val=
00:05:08.234 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.234 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.234 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.234 05:57:16 -- accel/accel.sh@21 -- # val=0x1
00:05:08.234 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.234 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.234 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.234 05:57:16 -- accel/accel.sh@21 -- # val=
00:05:08.234 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.234 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.234 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.234 05:57:16 -- accel/accel.sh@21 -- # val=
00:05:08.234 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.234 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.234 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.234 05:57:16 -- accel/accel.sh@21 -- # val=decompress
00:05:08.234 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.234 05:57:16 -- accel/accel.sh@24 -- # accel_opc=decompress
00:05:08.234 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.234 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.235 05:57:16 -- accel/accel.sh@21 -- # val='4096 bytes'
00:05:08.235 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.235 05:57:16 -- accel/accel.sh@21 -- # val=
00:05:08.235 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.235 05:57:16 -- accel/accel.sh@21 -- # val=software
00:05:08.235 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.235 05:57:16 -- accel/accel.sh@23 -- # accel_module=software
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.235 05:57:16 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:08.235 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.235 05:57:16 -- accel/accel.sh@21 -- # val=32
00:05:08.235 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.235 05:57:16 -- accel/accel.sh@21 -- # val=32
00:05:08.235 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.235 05:57:16 -- accel/accel.sh@21 -- # val=1
00:05:08.235 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.235 05:57:16 -- accel/accel.sh@21 -- # val='1 seconds'
00:05:08.235 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.235 05:57:16 -- accel/accel.sh@21 -- # val=Yes
00:05:08.235 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.235 05:57:16 -- accel/accel.sh@21 -- # val=
00:05:08.235 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:08.235 05:57:16 -- accel/accel.sh@21 -- # val=
00:05:08.235 05:57:16 -- accel/accel.sh@22 -- # case "$var" in
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # IFS=:
00:05:08.235 05:57:16 -- accel/accel.sh@20 -- # read -r var val
00:05:09.171 05:57:17 -- accel/accel.sh@21 -- # val=
00:05:09.171 05:57:17 -- accel/accel.sh@22 -- # case "$var" in
00:05:09.171 05:57:17 -- accel/accel.sh@20 -- # IFS=:
00:05:09.171 05:57:17 -- accel/accel.sh@20 -- # read -r var val
00:05:09.171 05:57:17 -- accel/accel.sh@21 -- # val=
00:05:09.171 05:57:17 -- accel/accel.sh@22 -- # case "$var" in
00:05:09.171 05:57:17 -- accel/accel.sh@20 -- # IFS=:
00:05:09.171 05:57:17 -- accel/accel.sh@20 -- # read -r var val
00:05:09.171 05:57:17 -- accel/accel.sh@21 -- # val=
00:05:09.171 05:57:17 -- accel/accel.sh@22 -- # case "$var" in
00:05:09.172 05:57:17 -- accel/accel.sh@20 -- # IFS=:
00:05:09.172 05:57:17 -- accel/accel.sh@20 -- # read -r var val
00:05:09.172 05:57:17 -- accel/accel.sh@21 -- # val=
00:05:09.172 05:57:17 -- accel/accel.sh@22 -- # case "$var" in
00:05:09.172 05:57:17 -- accel/accel.sh@20 -- # IFS=:
00:05:09.172 05:57:17 -- accel/accel.sh@20 -- # read -r var val
00:05:09.172 05:57:17 -- accel/accel.sh@21 -- # val=
00:05:09.172 05:57:17 -- accel/accel.sh@22 -- # case "$var" in
00:05:09.172 05:57:17 -- accel/accel.sh@20 -- # IFS=:
00:05:09.172 05:57:17 -- accel/accel.sh@20 -- # read -r var val
00:05:09.172 05:57:17 -- accel/accel.sh@21 -- # val=
00:05:09.172 05:57:17 -- accel/accel.sh@22 -- # case "$var" in
00:05:09.172 05:57:17 -- accel/accel.sh@20 -- # IFS=:
00:05:09.172 05:57:17 -- accel/accel.sh@20 -- # read -r var val
00:05:09.172 05:57:17 -- accel/accel.sh@28 -- # [[ -n software ]]
00:05:09.172 05:57:17 -- accel/accel.sh@28 -- # [[ -n decompress ]]
00:05:09.172 05:57:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:09.172
00:05:09.172 real 0m3.540s
00:05:09.172 user 0m2.563s
00:05:09.172 sys 0m0.987s
00:05:09.172 05:57:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:09.172 05:57:17 -- common/autotest_common.sh@10 -- # set +x
00:05:09.172 ************************************
00:05:09.172 END TEST accel_decomp
00:05:09.172 ************************************
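A quick arithmetic check of the result tables above: Total bandwidth is simply transfers per second times the 4096-byte transfer size, with MiB/s taken as bytes per second divided by 2^20.

# 88736 transfers/s x 4096 B/transfer = 363,462,656 B/s -> 346 MiB/s, matching the Total row
echo $((88736 * 4096 / 1048576))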
00:05:09.431 05:57:17 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:05:09.431 05:57:17 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']'
00:05:09.431 05:57:17 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:09.431 05:57:17 -- common/autotest_common.sh@10 -- # set +x
00:05:09.431 ************************************
00:05:09.431 START TEST accel_decmop_full
00:05:09.431 ************************************
00:05:09.431 05:57:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:05:09.431 05:57:17 -- accel/accel.sh@16 -- # local accel_opc
00:05:09.431 05:57:17 -- accel/accel.sh@17 -- # local accel_module
00:05:09.431 05:57:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:05:09.431 05:57:17 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.7GrkIe -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:05:09.431 [2024-05-13 05:57:17.524128] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:05:09.431 [2024-05-13 05:57:17.524484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:09.690 EAL: TSC is not safe to use in SMP mode
00:05:09.690 EAL: TSC is not invariant
00:05:09.690 [2024-05-13 05:57:17.960029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:09.949 [2024-05-13 05:57:18.074468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.949 05:57:18 -- accel/accel.sh@12 -- # build_accel_config
00:05:09.949 05:57:18 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:09.949 05:57:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:09.949 05:57:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:09.949 05:57:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:09.949 05:57:18 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:09.949 05:57:18 -- accel/accel.sh@41 -- # local IFS=,
00:05:09.949 05:57:18 -- accel/accel.sh@42 -- # jq -r .
00:05:11.356 05:57:19 -- accel/accel.sh@18 -- # out='Preparing input file...
00:05:11.356
00:05:11.356 SPDK Configuration:
00:05:11.356 Core mask: 0x1
00:05:11.356
00:05:11.356 Accel Perf Configuration:
00:05:11.356 Workload Type: decompress
00:05:11.356 Transfer size: 111250 bytes
00:05:11.356 Vector count 1
00:05:11.356 Module: software
00:05:11.356 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:11.356 Queue depth: 32
00:05:11.356 Allocate depth: 32
00:05:11.356 # threads/core: 1
00:05:11.356 Run time: 1 seconds
00:05:11.356 Verify: Yes
00:05:11.356
00:05:11.356 Running for 1 seconds...
00:05:11.356
00:05:11.356 Core,Thread Transfers Bandwidth Failed Miscompares
00:05:11.356 ------------------------------------------------------------------------------------
00:05:11.356 0,0 5088/s 210 MiB/s 0 0
00:05:11.356 ====================================================================================
00:05:11.356 Total 5088/s 539 MiB/s 0 0'
00:05:11.356 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.356 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.356 05:57:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:05:11.356 05:57:19 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.9zSBbO -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
00:05:11.356 [2024-05-13 05:57:19.290490] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:05:11.356 [2024-05-13 05:57:19.290868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:11.614 EAL: TSC is not safe to use in SMP mode
00:05:11.614 EAL: TSC is not invariant
00:05:11.614 [2024-05-13 05:57:19.718784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:11.614 [2024-05-13 05:57:19.847158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:11.614 05:57:19 -- accel/accel.sh@12 -- # build_accel_config
00:05:11.614 05:57:19 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:11.614 05:57:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:11.614 05:57:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:11.614 05:57:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:11.614 05:57:19 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:11.614 05:57:19 -- accel/accel.sh@41 -- # local IFS=,
00:05:11.614 05:57:19 -- accel/accel.sh@42 -- # jq -r .
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=0x1
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=decompress
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@24 -- # accel_opc=decompress
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val='111250 bytes'
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=software
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@23 -- # accel_module=software
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=32
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=32
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=1
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val='1 seconds'
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=Yes
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:11.614 05:57:19 -- accel/accel.sh@21 -- # val=
00:05:11.614 05:57:19 -- accel/accel.sh@22 -- # case "$var" in
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # IFS=:
00:05:11.614 05:57:19 -- accel/accel.sh@20 -- # read -r var val
00:05:12.988 05:57:21 -- accel/accel.sh@21 -- # val=
00:05:12.988 05:57:21 -- accel/accel.sh@22 -- # case "$var" in
00:05:12.988 05:57:21 -- accel/accel.sh@20 -- # IFS=:
00:05:12.988 05:57:21 -- accel/accel.sh@20 -- # read -r var val
00:05:12.988 05:57:21 -- accel/accel.sh@21 -- # val=
00:05:12.988 05:57:21 -- accel/accel.sh@22 -- # case "$var" in
00:05:12.988 05:57:21 -- accel/accel.sh@20 -- # IFS=:
00:05:12.988 05:57:21 -- accel/accel.sh@20 -- # read -r var val
00:05:12.988 05:57:21 -- accel/accel.sh@21 -- # val=
00:05:12.988 05:57:21 -- accel/accel.sh@22 -- # case "$var" in
00:05:12.988 05:57:21 -- accel/accel.sh@20 -- # IFS=:
00:05:12.988 05:57:21 -- accel/accel.sh@20 -- # read -r var val
00:05:12.988 05:57:21 -- accel/accel.sh@21 -- # val=
00:05:12.988 05:57:21 -- accel/accel.sh@22 -- # case "$var" in
00:05:12.988 05:57:21 -- accel/accel.sh@20 -- # IFS=:
00:05:12.988 05:57:21 -- accel/accel.sh@20 -- # read -r var val
00:05:12.988 05:57:21 -- accel/accel.sh@21 -- # val=
00:05:12.988 05:57:21 -- accel/accel.sh@22 -- # case "$var" in
00:05:12.988 05:57:21 -- accel/accel.sh@20 -- # IFS=:
00:05:12.988 05:57:21 -- accel/accel.sh@20 -- # read -r var val
00:05:12.988 05:57:21 -- accel/accel.sh@21 -- # val=
00:05:12.988 05:57:21 -- accel/accel.sh@22 -- # case "$var" in
00:05:12.988 05:57:21 -- accel/accel.sh@20 -- # IFS=:
00:05:12.988 05:57:21 -- accel/accel.sh@20 -- # read -r var val
00:05:12.988 05:57:21 -- accel/accel.sh@28 -- # [[ -n software ]]
00:05:12.988 05:57:21 -- accel/accel.sh@28 -- # [[ -n decompress ]]
00:05:12.988 05:57:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:12.988
00:05:12.988 real 0m3.552s
00:05:12.988 user 0m2.576s
00:05:12.988 sys 0m0.989s
00:05:12.988 05:57:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:12.988 05:57:21 -- common/autotest_common.sh@10 -- # set +x
00:05:12.988 ************************************
00:05:12.988 END TEST accel_decmop_full
00:05:12.988 ************************************
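The START TEST/END TEST banners and the real/user/sys triples around each case come from the run_test wrapper in autotest_common.sh, which times the named test. A simplified, hypothetical equivalent of that wrapper (the real one also manages xtrace and performs the argument checks seen in the '[' 11 -le 1 ']' lines):

run_test() {
    local name=$1
    shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"    # produces the real/user/sys lines in the log
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}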
00:05:12.988 05:57:21 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:05:12.988 05:57:21 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']'
00:05:12.988 05:57:21 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:12.988 05:57:21 -- common/autotest_common.sh@10 -- # set +x
00:05:12.988 ************************************
00:05:12.988 START TEST accel_decomp_mcore
00:05:12.988 ************************************
00:05:12.988 05:57:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:05:12.988 05:57:21 -- accel/accel.sh@16 -- # local accel_opc
00:05:12.988 05:57:21 -- accel/accel.sh@17 -- # local accel_module
00:05:12.988 05:57:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:05:12.988 05:57:21 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.FQgxq7 -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:05:12.988 [2024-05-13 05:57:21.124379] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:05:12.988 [2024-05-13 05:57:21.124737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:13.246 EAL: TSC is not safe to use in SMP mode
00:05:13.246 EAL: TSC is not invariant
00:05:13.503 [2024-05-13 05:57:21.564091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:13.503 [2024-05-13 05:57:21.683810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:13.503 [2024-05-13 05:57:21.684141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.503 [2024-05-13 05:57:21.683982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:13.503 [2024-05-13 05:57:21.684138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:13.503 05:57:21 -- accel/accel.sh@12 -- # build_accel_config
00:05:13.503 05:57:21 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:13.503 05:57:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:13.503 05:57:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:13.503 05:57:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:13.503 05:57:21 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:13.503 05:57:21 -- accel/accel.sh@41 -- # local IFS=,
00:05:13.503 05:57:21 -- accel/accel.sh@42 -- # jq -r .
00:05:14.878 05:57:22 -- accel/accel.sh@18 -- # out='Preparing input file...
00:05:14.878
00:05:14.879 SPDK Configuration:
00:05:14.879 Core mask: 0xf
00:05:14.879
00:05:14.879 Accel Perf Configuration:
00:05:14.879 Workload Type: decompress
00:05:14.879 Transfer size: 4096 bytes
00:05:14.879 Vector count 1
00:05:14.879 Module: software
00:05:14.879 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:14.879 Queue depth: 32
00:05:14.879 Allocate depth: 32
00:05:14.879 # threads/core: 1
00:05:14.879 Run time: 1 seconds
00:05:14.879 Verify: Yes
00:05:14.879
00:05:14.879 Running for 1 seconds...
00:05:14.879
00:05:14.879 Core,Thread Transfers Bandwidth Failed Miscompares
00:05:14.879 ------------------------------------------------------------------------------------
00:05:14.879 0,0 88704/s 163 MiB/s 0 0
00:05:14.879 3,0 87360/s 160 MiB/s 0 0
00:05:14.879 2,0 86368/s 159 MiB/s 0 0
00:05:14.879 1,0 87520/s 161 MiB/s 0 0
00:05:14.879 ====================================================================================
00:05:14.879 Total 349952/s 1367 MiB/s 0 0'
00:05:14.879 05:57:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:05:14.879 05:57:22 -- accel/accel.sh@20 -- # IFS=:
00:05:14.879 05:57:22 -- accel/accel.sh@20 -- # read -r var val
00:05:14.879 05:57:22 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.5U8PV8 -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
00:05:14.879 [2024-05-13 05:57:22.893153] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:05:14.879 [2024-05-13 05:57:22.893301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:15.447 EAL: TSC is not safe to use in SMP mode
00:05:15.447 EAL: TSC is not invariant
00:05:15.447 [2024-05-13 05:57:23.619446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:15.447 [2024-05-13 05:57:23.724481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:15.447 [2024-05-13 05:57:23.724582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:15.447 [2024-05-13 05:57:23.724743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:15.447 [2024-05-13 05:57:23.724740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:15.447 05:57:23 -- accel/accel.sh@12 -- # build_accel_config
00:05:15.447 05:57:23 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:15.447 05:57:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:15.447 05:57:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:15.447 05:57:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:15.447 05:57:23 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:15.447 05:57:23 -- accel/accel.sh@41 -- # local IFS=,
00:05:15.447 05:57:23 -- accel/accel.sh@42 -- # jq -r .
00:05:15.447 05:57:23 -- accel/accel.sh@21 -- # val=
00:05:15.447 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.447 05:57:23 -- accel/accel.sh@21 -- # val=
00:05:15.447 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.447 05:57:23 -- accel/accel.sh@21 -- # val=
00:05:15.447 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.447 05:57:23 -- accel/accel.sh@21 -- # val=0xf
00:05:15.447 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.447 05:57:23 -- accel/accel.sh@21 -- # val=
00:05:15.447 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.447 05:57:23 -- accel/accel.sh@21 -- # val=
00:05:15.447 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.447 05:57:23 -- accel/accel.sh@21 -- # val=decompress
00:05:15.447 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.447 05:57:23 -- accel/accel.sh@24 -- # accel_opc=decompress
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.447 05:57:23 -- accel/accel.sh@21 -- # val='4096 bytes'
00:05:15.447 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.447 05:57:23 -- accel/accel.sh@21 -- # val=
00:05:15.447 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.447 05:57:23 -- accel/accel.sh@21 -- # val=software
00:05:15.447 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.447 05:57:23 -- accel/accel.sh@23 -- # accel_module=software
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.447 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.707 05:57:23 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:15.707 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.707 05:57:23 -- accel/accel.sh@21 -- # val=32
00:05:15.707 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.707 05:57:23 -- accel/accel.sh@21 -- # val=32
00:05:15.707 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.707 05:57:23 -- accel/accel.sh@21 -- # val=1
00:05:15.707 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.707 05:57:23 -- accel/accel.sh@21 -- # val='1 seconds'
00:05:15.707 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.707 05:57:23 -- accel/accel.sh@21 -- # val=Yes
00:05:15.707 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.707 05:57:23 -- accel/accel.sh@21 -- # val=
00:05:15.707 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:15.707 05:57:23 -- accel/accel.sh@21 -- # val=
00:05:15.707 05:57:23 -- accel/accel.sh@22 -- # case "$var" in
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # IFS=:
00:05:15.707 05:57:23 -- accel/accel.sh@20 -- # read -r var val
00:05:16.646 05:57:24 -- accel/accel.sh@21 -- # val=
00:05:16.646 05:57:24 -- accel/accel.sh@22 -- # case "$var" in
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # IFS=:
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # read -r var val
00:05:16.646 05:57:24 -- accel/accel.sh@21 -- # val=
00:05:16.646 05:57:24 -- accel/accel.sh@22 -- # case "$var" in
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # IFS=:
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # read -r var val
00:05:16.646 05:57:24 -- accel/accel.sh@21 -- # val=
00:05:16.646 05:57:24 -- accel/accel.sh@22 -- # case "$var" in
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # IFS=:
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # read -r var val
00:05:16.646 05:57:24 -- accel/accel.sh@21 -- # val=
00:05:16.646 05:57:24 -- accel/accel.sh@22 -- # case "$var" in
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # IFS=:
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # read -r var val
00:05:16.646 05:57:24 -- accel/accel.sh@21 -- # val=
00:05:16.646 05:57:24 -- accel/accel.sh@22 -- # case "$var" in
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # IFS=:
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # read -r var val
00:05:16.646 05:57:24 -- accel/accel.sh@21 -- # val=
00:05:16.646 05:57:24 -- accel/accel.sh@22 -- # case "$var" in
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # IFS=:
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # read -r var val
00:05:16.646 05:57:24 -- accel/accel.sh@21 -- # val=
00:05:16.646 05:57:24 -- accel/accel.sh@22 -- # case "$var" in
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # IFS=:
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # read -r var val
00:05:16.646 05:57:24 -- accel/accel.sh@21 -- # val=
00:05:16.646 05:57:24 -- accel/accel.sh@22 -- # case "$var" in
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # IFS=:
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # read -r var val
00:05:16.646 05:57:24 -- accel/accel.sh@21 -- # val=
00:05:16.646 05:57:24 -- accel/accel.sh@22 -- # case "$var" in
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # IFS=:
00:05:16.646 05:57:24 -- accel/accel.sh@20 -- # read -r var val
00:05:16.646 05:57:24 -- accel/accel.sh@28 -- # [[ -n software ]]
00:05:16.646 05:57:24 -- accel/accel.sh@28 -- # [[ -n decompress ]]
00:05:16.646 05:57:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:16.646
00:05:16.646 real 0m3.822s
00:05:16.646 user 0m8.995s
00:05:16.646 sys 0m1.290s
00:05:16.646 05:57:24 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:16.646 05:57:24 -- common/autotest_common.sh@10 -- # set +x
00:05:16.646 ************************************
00:05:16.646 END TEST accel_decomp_mcore
00:05:16.646 ************************************
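The -m 0xf run above differs from accel_decomp only in its core mask: 0xf is binary 1111, so four reactors start (cores 0 through 3) and the result table gains one row per core. For example:

# each set bit in the mask is one polling reactor; 0xf -> cores 0,1,2,3
echo "obase=2; $((0xf))" | bc

With every core sustaining roughly the single-core rate, the aggregate of 349952/s (~1367 MiB/s) is close to 4x the 88736/s seen in accel_decomp.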
00:05:16.904 05:57:24 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:16.904 05:57:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:05:16.904 05:57:24 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:16.904 05:57:24 -- common/autotest_common.sh@10 -- # set +x
00:05:16.904 ************************************
00:05:16.904 START TEST accel_decomp_full_mcore
00:05:16.904 ************************************
00:05:16.904 05:57:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:16.904 05:57:24 -- accel/accel.sh@16 -- # local accel_opc
00:05:16.904 05:57:24 -- accel/accel.sh@17 -- # local accel_module
00:05:16.904 05:57:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:16.904 05:57:24 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.bcWzjJ -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:16.904 [2024-05-13 05:57:24.995413] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:05:16.904 [2024-05-13 05:57:24.995779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:17.163 EAL: TSC is not safe to use in SMP mode
00:05:17.163 EAL: TSC is not invariant
00:05:17.163 [2024-05-13 05:57:25.428254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:17.422 [2024-05-13 05:57:25.520844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:17.422 [2024-05-13 05:57:25.521097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.422 [2024-05-13 05:57:25.520948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:17.422 [2024-05-13 05:57:25.521099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:17.422 05:57:25 -- accel/accel.sh@12 -- # build_accel_config
00:05:17.422 05:57:25 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:17.422 05:57:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:17.422 05:57:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:17.422 05:57:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:17.422 05:57:25 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:17.422 05:57:25 -- accel/accel.sh@41 -- # local IFS=,
00:05:17.422 05:57:25 -- accel/accel.sh@42 -- # jq -r .
00:05:18.803 05:57:26 -- accel/accel.sh@18 -- # out='Preparing input file...
00:05:18.803
00:05:18.803 SPDK Configuration:
00:05:18.803 Core mask: 0xf
00:05:18.803
00:05:18.803 Accel Perf Configuration:
00:05:18.803 Workload Type: decompress
00:05:18.803 Transfer size: 111250 bytes
00:05:18.803 Vector count 1
00:05:18.803 Module: software
00:05:18.803 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:18.803 Queue depth: 32
00:05:18.803 Allocate depth: 32
00:05:18.803 # threads/core: 1
00:05:18.803 Run time: 1 seconds
00:05:18.803 Verify: Yes
00:05:18.803
00:05:18.803 Running for 1 seconds...
00:05:18.803
00:05:18.803 Core,Thread Transfers Bandwidth Failed Miscompares
00:05:18.803 ------------------------------------------------------------------------------------
00:05:18.803 0,0 5024/s 207 MiB/s 0 0
00:05:18.803 3,0 4960/s 204 MiB/s 0 0
00:05:18.803 2,0 4992/s 206 MiB/s 0 0
00:05:18.803 1,0 4864/s 200 MiB/s 0 0
00:05:18.803 ====================================================================================
00:05:18.803 Total 19840/s 2104 MiB/s 0 0'
00:05:18.803 05:57:26 -- accel/accel.sh@20 -- # IFS=:
00:05:18.803 05:57:26 -- accel/accel.sh@20 -- # read -r var val
00:05:18.803 05:57:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:18.803 05:57:26 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.jB9uH1 -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:18.803 [2024-05-13 05:57:26.679173] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:05:18.803 [2024-05-13 05:57:26.679536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:19.070 EAL: TSC is not safe to use in SMP mode
00:05:19.070 EAL: TSC is not invariant
00:05:19.070 [2024-05-13 05:57:27.116167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:19.070 [2024-05-13 05:57:27.207466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:19.070 [2024-05-13 05:57:27.207649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:19.070 [2024-05-13 05:57:27.207812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.070 [2024-05-13 05:57:27.207814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:19.070 05:57:27 -- accel/accel.sh@12 -- # build_accel_config
00:05:19.070 05:57:27 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:19.070 05:57:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:19.070 05:57:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:19.070 05:57:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:19.070 05:57:27 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:19.070 05:57:27 -- accel/accel.sh@41 -- # local IFS=,
00:05:19.070 05:57:27 -- accel/accel.sh@42 -- # jq -r .
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=0xf
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=decompress
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@24 -- # accel_opc=decompress
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val='111250 bytes'
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=software
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@23 -- # accel_module=software
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=32
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=32
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=1
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val='1 seconds'
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=Yes
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:19.070 05:57:27 -- accel/accel.sh@21 -- # val=
00:05:19.070 05:57:27 -- accel/accel.sh@22 -- # case "$var" in
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # IFS=:
00:05:19.070 05:57:27 -- accel/accel.sh@20 -- # read -r var val
00:05:20.463 05:57:28 -- accel/accel.sh@21 -- # val=
00:05:20.463 05:57:28 -- accel/accel.sh@22 -- # case "$var" in
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # IFS=:
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # read -r var val
00:05:20.463 05:57:28 -- accel/accel.sh@21 -- # val=
00:05:20.463 05:57:28 -- accel/accel.sh@22 -- # case "$var" in
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # IFS=:
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # read -r var val
00:05:20.463 05:57:28 -- accel/accel.sh@21 -- # val=
00:05:20.463 05:57:28 -- accel/accel.sh@22 -- # case "$var" in
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # IFS=:
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # read -r var val
00:05:20.463 05:57:28 -- accel/accel.sh@21 -- # val=
00:05:20.463 05:57:28 -- accel/accel.sh@22 -- # case "$var" in
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # IFS=:
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # read -r var val
00:05:20.463 05:57:28 -- accel/accel.sh@21 -- # val=
00:05:20.463 05:57:28 -- accel/accel.sh@22 -- # case "$var" in
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # IFS=:
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # read -r var val
00:05:20.463 05:57:28 -- accel/accel.sh@21 -- # val=
00:05:20.463 05:57:28 -- accel/accel.sh@22 -- # case "$var" in
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # IFS=:
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # read -r var val
00:05:20.463 05:57:28 -- accel/accel.sh@21 -- # val=
00:05:20.463 05:57:28 -- accel/accel.sh@22 -- # case "$var" in
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # IFS=:
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # read -r var val
00:05:20.463 05:57:28 -- accel/accel.sh@21 -- # val=
00:05:20.463 05:57:28 -- accel/accel.sh@22 -- # case "$var" in
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # IFS=:
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # read -r var val
00:05:20.463 05:57:28 -- accel/accel.sh@21 -- # val=
00:05:20.463 05:57:28 -- accel/accel.sh@22 -- # case "$var" in
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # IFS=:
00:05:20.463 05:57:28 -- accel/accel.sh@20 -- # read -r var val
00:05:20.463 05:57:28 -- accel/accel.sh@28 -- # [[ -n software ]]
00:05:20.463 05:57:28 -- accel/accel.sh@28 -- # [[ -n decompress ]]
00:05:20.463 05:57:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:20.463
00:05:20.463 real 0m3.365s
00:05:20.463 user 0m8.762s
00:05:20.463 sys 0m0.932s
00:05:20.463 05:57:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:20.463 05:57:28 -- common/autotest_common.sh@10 -- # set +x
00:05:20.463 ************************************
00:05:20.463 END TEST accel_decomp_full_mcore
00:05:20.463 ************************************
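For the full-buffer variant the same bandwidth check applies with the 111250-byte transfer size from the configuration block, again assuming MiB/s = bytes per second / 2^20:

# 19840 transfers/s x 111250 B/transfer ~= 2,207,200,000 B/s -> 2104 MiB/s, matching the Total row
echo $((19840 * 111250 / 1048576))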
00:05:20.463 05:57:28 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:05:20.463 05:57:28 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']'
00:05:20.463 05:57:28 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:20.463 05:57:28 -- common/autotest_common.sh@10 -- # set +x
00:05:20.463 ************************************
00:05:20.463 START TEST accel_decomp_mthread
00:05:20.463 ************************************
00:05:20.463 05:57:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:05:20.463 05:57:28 -- accel/accel.sh@16 -- # local accel_opc
00:05:20.463 05:57:28 -- accel/accel.sh@17 -- # local accel_module
00:05:20.463 05:57:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:05:20.463 05:57:28 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.8v2h20 -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:05:20.463 [2024-05-13 05:57:28.393848] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:05:20.463 [2024-05-13 05:57:28.394223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:20.721 EAL: TSC is not safe to use in SMP mode
00:05:20.721 EAL: TSC is not invariant
00:05:20.721 [2024-05-13 05:57:28.818618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:20.721 [2024-05-13 05:57:28.908549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.721 05:57:28 -- accel/accel.sh@12 -- # build_accel_config
00:05:20.721 05:57:28 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:20.721 05:57:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:20.721 05:57:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:20.721 05:57:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:20.721 05:57:28 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:20.721 05:57:28 -- accel/accel.sh@41 -- # local IFS=,
00:05:20.721 05:57:28 -- accel/accel.sh@42 -- # jq -r .
00:05:22.100 05:57:30 -- accel/accel.sh@18 -- # out='Preparing input file...
00:05:22.100
00:05:22.100 SPDK Configuration:
00:05:22.100 Core mask: 0x1
00:05:22.100
00:05:22.100 Accel Perf Configuration:
00:05:22.100 Workload Type: decompress
00:05:22.100 Transfer size: 4096 bytes
00:05:22.100 Vector count 1
00:05:22.100 Module: software
00:05:22.100 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:22.100 Queue depth: 32
00:05:22.100 Allocate depth: 32
00:05:22.100 # threads/core: 2
00:05:22.100 Run time: 1 seconds
00:05:22.100 Verify: Yes
00:05:22.100
00:05:22.100 Running for 1 seconds...
00:05:22.100
00:05:22.100 Core,Thread Transfers Bandwidth Failed Miscompares
00:05:22.100 ------------------------------------------------------------------------------------
00:05:22.100 0,1 46752/s 86 MiB/s 0 0
00:05:22.100 0,0 46688/s 86 MiB/s 0 0
00:05:22.100 ====================================================================================
00:05:22.100 Total 93440/s 365 MiB/s 0 0'
00:05:22.100 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.100 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.100 05:57:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:05:22.100 05:57:30 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.MMMxjk -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
00:05:22.100 [2024-05-13 05:57:30.053817] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:05:22.100 [2024-05-13 05:57:30.054149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:05:22.360 EAL: TSC is not safe to use in SMP mode
00:05:22.360 EAL: TSC is not invariant
00:05:22.360 [2024-05-13 05:57:30.480356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:22.360 [2024-05-13 05:57:30.566095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.360 05:57:30 -- accel/accel.sh@12 -- # build_accel_config
00:05:22.360 05:57:30 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:05:22.360 05:57:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:22.360 05:57:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:22.360 05:57:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:05:22.360 05:57:30 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:05:22.360 05:57:30 -- accel/accel.sh@41 -- # local IFS=,
00:05:22.360 05:57:30 -- accel/accel.sh@42 -- # jq -r .
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=0x1
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=decompress
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@24 -- # accel_opc=decompress
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val='4096 bytes'
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=software
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@23 -- # accel_module=software
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=32
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=32
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=2
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val='1 seconds'
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=Yes
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:22.360 05:57:30 -- accel/accel.sh@21 -- # val=
00:05:22.360 05:57:30 -- accel/accel.sh@22 -- # case "$var" in
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # IFS=:
00:05:22.360 05:57:30 -- accel/accel.sh@20 -- # read -r var val
00:05:23.738 05:57:31 -- accel/accel.sh@21 -- # val=
00:05:23.738 05:57:31 -- accel/accel.sh@22 -- # case "$var" in
00:05:23.738 05:57:31 -- accel/accel.sh@20 -- # IFS=:
00:05:23.738 05:57:31 -- accel/accel.sh@20 -- # read -r var val
00:05:23.738 05:57:31 -- accel/accel.sh@21 -- # val=
00:05:23.738 05:57:31 -- accel/accel.sh@22 -- # case "$var" in
00:05:23.738 05:57:31 -- accel/accel.sh@20 -- # IFS=:
00:05:23.738 05:57:31 -- accel/accel.sh@20 -- # read -r var val
00:05:23.738 05:57:31 -- accel/accel.sh@21 -- # val=
00:05:23.739 05:57:31 -- accel/accel.sh@22 -- # case "$var" in
00:05:23.739 05:57:31 -- accel/accel.sh@20 -- # IFS=:
00:05:23.739 05:57:31 -- accel/accel.sh@20 -- # read -r var val
00:05:23.739 05:57:31 -- accel/accel.sh@21 -- # val=
00:05:23.739 05:57:31 -- accel/accel.sh@22 -- # case "$var" in
00:05:23.739 05:57:31 -- accel/accel.sh@20 -- # IFS=:
00:05:23.739 05:57:31 -- accel/accel.sh@20 -- # read -r var val
00:05:23.739 05:57:31 -- accel/accel.sh@21 -- # val=
00:05:23.739 05:57:31 -- accel/accel.sh@22 -- # case "$var" in
00:05:23.739 05:57:31 -- accel/accel.sh@20 -- # IFS=:
00:05:23.739 05:57:31 -- accel/accel.sh@20 -- # read -r var val
00:05:23.739 05:57:31 -- accel/accel.sh@21 -- # val=
00:05:23.739 05:57:31 -- accel/accel.sh@22 -- # case "$var" in
00:05:23.739 05:57:31 -- accel/accel.sh@20 -- # IFS=:
00:05:23.739 05:57:31 -- accel/accel.sh@20 -- # read -r var val
00:05:23.739 05:57:31 -- accel/accel.sh@21 -- # val=
00:05:23.739 05:57:31 -- accel/accel.sh@22 -- # case "$var" in
00:05:23.739 05:57:31 -- accel/accel.sh@20 -- # IFS=:
00:05:23.739 05:57:31 -- accel/accel.sh@20 -- # read -r var val
00:05:23.739 05:57:31 -- accel/accel.sh@28 -- # [[ -n software ]]
00:05:23.739 05:57:31 -- accel/accel.sh@28 -- # [[ -n decompress ]]
00:05:23.739 05:57:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:23.739
00:05:23.739 real 0m3.318s
00:05:23.739 user 0m2.390s
00:05:23.739 sys 0m0.941s
00:05:23.739 05:57:31 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:23.739 05:57:31 -- common/autotest_common.sh@10 -- # set +x
00:05:23.739 ************************************
00:05:23.739 END TEST accel_decomp_mthread
00:05:23.739 ************************************
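The -T 2 run above splits core 0 into two worker threads, which is why its table has rows 0,0 and 0,1 summing to the Total. The 'HH:MM:SS -- script@line -- #' prefixes on the trace lines themselves are ordinary bash xtrace output with a customized PS4; one plausible way to reproduce the format (the exact string used by autotest_common.sh may differ, and $rootdir is assumed here to be the repo checkout):

# \t in PS4 expands to the current time, as in PS1; bash repeats the first character per nesting level
export PS4=' \t -- ${BASH_SOURCE#$rootdir/test/}@${LINENO} -- # '
set -x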
************************************ 00:05:23.739 START TEST accel_deomp_full_mthread 00:05:23.739 ************************************ 00:05:23.739 05:57:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:23.739 05:57:31 -- accel/accel.sh@16 -- # local accel_opc 00:05:23.739 05:57:31 -- accel/accel.sh@17 -- # local accel_module 00:05:23.739 05:57:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:23.739 05:57:31 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.VMFf3E -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:23.739 [2024-05-13 05:57:31.759447] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:23.739 [2024-05-13 05:57:31.759692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:23.998 EAL: TSC is not safe to use in SMP mode 00:05:23.998 EAL: TSC is not invariant 00:05:23.998 [2024-05-13 05:57:32.185151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.998 [2024-05-13 05:57:32.271681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.998 05:57:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.998 05:57:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:23.998 05:57:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.998 05:57:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.998 05:57:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:23.998 05:57:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:23.998 05:57:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:23.998 05:57:32 -- accel/accel.sh@42 -- # jq -r . 00:05:25.376 05:57:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:25.376 00:05:25.376 SPDK Configuration: 00:05:25.376 Core mask: 0x1 00:05:25.376 00:05:25.376 Accel Perf Configuration: 00:05:25.376 Workload Type: decompress 00:05:25.376 Transfer size: 111250 bytes 00:05:25.376 Vector count 1 00:05:25.376 Module: software 00:05:25.376 File Name: /usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:25.376 Queue depth: 32 00:05:25.376 Allocate depth: 32 00:05:25.376 # threads/core: 2 00:05:25.376 Run time: 1 seconds 00:05:25.376 Verify: Yes 00:05:25.376 00:05:25.376 Running for 1 seconds... 00:05:25.376 00:05:25.376 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:25.376 ------------------------------------------------------------------------------------ 00:05:25.376 0,1 2688/s 111 MiB/s 0 0 00:05:25.376 0,0 2656/s 109 MiB/s 0 0 00:05:25.376 ==================================================================================== 00:05:25.376 Total 5344/s 566 MiB/s 0 0' 00:05:25.376 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.376 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.376 05:57:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:25.376 05:57:33 -- accel/accel.sh@12 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.s5wrag -t 1 -w decompress -l /usr/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:25.376 [2024-05-13 05:57:33.438721] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:25.376 [2024-05-13 05:57:33.439047] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:25.634 EAL: TSC is not safe to use in SMP mode 00:05:25.634 EAL: TSC is not invariant 00:05:25.634 [2024-05-13 05:57:33.860740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.892 [2024-05-13 05:57:33.946962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.892 05:57:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.892 05:57:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.892 05:57:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.892 05:57:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.892 05:57:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.892 05:57:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.892 05:57:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.892 05:57:33 -- accel/accel.sh@42 -- # jq -r . 00:05:25.892 05:57:33 -- accel/accel.sh@21 -- # val= 00:05:25.892 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.892 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val= 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val= 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val=0x1 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val= 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val= 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val=decompress 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val= 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val=software 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@23 -- # accel_module=software 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # 
val=/usr/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val=32 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val=32 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val=2 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val=Yes 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val= 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:25.893 05:57:33 -- accel/accel.sh@21 -- # val= 00:05:25.893 05:57:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # IFS=: 00:05:25.893 05:57:33 -- accel/accel.sh@20 -- # read -r var val 00:05:26.861 05:57:35 -- accel/accel.sh@21 -- # val= 00:05:26.861 05:57:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # IFS=: 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # read -r var val 00:05:26.861 05:57:35 -- accel/accel.sh@21 -- # val= 00:05:26.861 05:57:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # IFS=: 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # read -r var val 00:05:26.861 05:57:35 -- accel/accel.sh@21 -- # val= 00:05:26.861 05:57:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # IFS=: 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # read -r var val 00:05:26.861 05:57:35 -- accel/accel.sh@21 -- # val= 00:05:26.861 05:57:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # IFS=: 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # read -r var val 00:05:26.861 05:57:35 -- accel/accel.sh@21 -- # val= 00:05:26.861 05:57:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # IFS=: 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # read -r var val 00:05:26.861 05:57:35 -- accel/accel.sh@21 -- # val= 00:05:26.861 05:57:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # IFS=: 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # read -r var val 00:05:26.861 05:57:35 -- accel/accel.sh@21 -- # val= 00:05:26.861 05:57:35 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # IFS=: 00:05:26.861 05:57:35 -- accel/accel.sh@20 -- # read -r var val 00:05:26.861 05:57:35 -- 
accel/accel.sh@28 -- # [[ -n software ]] 00:05:26.861 05:57:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:26.861 05:57:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.861 00:05:26.861 real 0m3.350s 00:05:26.861 user 0m2.422s 00:05:26.861 sys 0m0.931s 00:05:26.861 05:57:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.861 05:57:35 -- common/autotest_common.sh@10 -- # set +x 00:05:26.861 ************************************ 00:05:26.861 END TEST accel_deomp_full_mthread 00:05:26.861 ************************************ 00:05:26.861 05:57:35 -- accel/accel.sh@116 -- # [[ n == y ]] 00:05:26.861 05:57:35 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.2n1OR7 00:05:26.861 05:57:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:26.861 05:57:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.861 05:57:35 -- common/autotest_common.sh@10 -- # set +x 00:05:26.861 ************************************ 00:05:26.861 START TEST accel_dif_functional_tests 00:05:26.861 ************************************ 00:05:26.861 05:57:35 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.2n1OR7 00:05:27.139 [2024-05-13 05:57:35.150378] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:27.139 [2024-05-13 05:57:35.150738] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:27.400 EAL: TSC is not safe to use in SMP mode 00:05:27.400 EAL: TSC is not invariant 00:05:27.400 [2024-05-13 05:57:35.579441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.400 [2024-05-13 05:57:35.667267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.400 [2024-05-13 05:57:35.667133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.400 [2024-05-13 05:57:35.667270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.400 05:57:35 -- accel/accel.sh@129 -- # build_accel_config 00:05:27.400 05:57:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:27.400 05:57:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.400 05:57:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.400 05:57:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:27.400 05:57:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:27.400 05:57:35 -- accel/accel.sh@41 -- # local IFS=, 00:05:27.400 05:57:35 -- accel/accel.sh@42 -- # jq -r . 
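The accel_dif_functional_tests run underway here exercises SPDK's T10 DIF helpers under CUnit; the -c 0x7 core mask in the EAL parameters above is why three reactors come up. Each protected block carries an 8-byte DIF field: a 2-byte guard (a CRC over the block data), a 2-byte application tag, and a 4-byte reference tag. The dif.c *ERROR* lines in the results below are expected: the "not generated" and "incorrect" cases corrupt one field at a time and pass only when _dif_verify reports the mismatch. The binary needs nothing beyond a -c config (the /tmp//sh-np.* argument is a temp file the harness generates), so a manual rerun amounts to:

# Rerun the DIF functional tests; the config file name here is illustrative
/usr/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp/sh-np.example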
00:05:27.400 00:05:27.400 00:05:27.400 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.400 http://cunit.sourceforge.net/ 00:05:27.400 00:05:27.400 00:05:27.400 Suite: accel_dif 00:05:27.400 Test: verify: DIF generated, GUARD check ...passed 00:05:27.400 Test: verify: DIF generated, APPTAG check ...passed 00:05:27.400 Test: verify: DIF generated, REFTAG check ...passed 00:05:27.400 Test: verify: DIF not generated, GUARD check ...[2024-05-13 05:57:35.688319] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:27.400 passed 00:05:27.400 Test: verify: DIF not generated, APPTAG check ...passed 00:05:27.400 Test: verify: DIF not generated, REFTAG check ...passed 00:05:27.400 Test: verify: APPTAG correct, APPTAG check ...[2024-05-13 05:57:35.688379] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:27.400 [2024-05-13 05:57:35.688403] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:27.400 [2024-05-13 05:57:35.688432] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:27.400 [2024-05-13 05:57:35.688445] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:27.400 [2024-05-13 05:57:35.688496] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:27.400 passed 00:05:27.400 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:05:27.400 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:27.400 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:27.400 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:27.400 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:05:27.400 Test: generate copy: DIF generated, GUARD check ...[2024-05-13 05:57:35.688520] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:27.400 [2024-05-13 05:57:35.688591] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:27.400 passed 00:05:27.400 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:27.400 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:27.400 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:27.400 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:27.400 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:27.400 Test: generate copy: iovecs-len validate ...[2024-05-13 05:57:35.688759] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:27.400 passed 00:05:27.400 Test: generate copy: buffer alignment validate ...passed 00:05:27.400 00:05:27.400 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.400 suites 1 1 n/a 0 0 00:05:27.400 tests 20 20 20 0 0 00:05:27.400 asserts 204 204 204 0 n/a 00:05:27.400 00:05:27.400 Elapsed time = 0.000 seconds 00:05:27.659 00:05:27.659 real 0m0.684s 00:05:27.659 user 0m0.346s 00:05:27.659 sys 0m0.470s 00:05:27.659 05:57:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.659 05:57:35 -- common/autotest_common.sh@10 -- # set +x 00:05:27.659 ************************************ 00:05:27.659 END TEST accel_dif_functional_tests 00:05:27.659 ************************************ 00:05:27.659 05:57:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.659 05:57:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.659 05:57:35 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.659 00:05:27.659 real 1m13.592s 00:05:27.659 user 1m4.862s 00:05:27.659 sys 0m22.217s 00:05:27.659 05:57:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:27.659 05:57:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:27.659 05:57:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:27.659 05:57:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.659 05:57:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.659 05:57:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.659 05:57:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.659 05:57:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.659 05:57:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:27.659 05:57:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.659 05:57:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.659 05:57:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:27.659 05:57:35 -- common/autotest_common.sh@10 -- # set +x 00:05:27.659 05:57:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:27.659 05:57:35 -- accel/accel.sh@41 -- # local IFS=, 00:05:27.659 05:57:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:27.659 05:57:35 -- accel/accel.sh@42 -- # jq -r . 00:05:27.659 ************************************ 00:05:27.659 END TEST accel 00:05:27.659 05:57:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:27.659 ************************************ 00:05:27.659 05:57:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:27.659 05:57:35 -- accel/accel.sh@41 -- # local IFS=, 00:05:27.659 05:57:35 -- accel/accel.sh@41 -- # local IFS=, 00:05:27.659 05:57:35 -- accel/accel.sh@42 -- # jq -r . 00:05:27.659 05:57:35 -- accel/accel.sh@42 -- # jq -r . 00:05:27.659 05:57:35 -- spdk/autotest.sh@190 -- # run_test accel_rpc /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:27.660 05:57:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.660 05:57:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.660 05:57:35 -- common/autotest_common.sh@10 -- # set +x 00:05:27.660 ************************************ 00:05:27.660 START TEST accel_rpc 00:05:27.660 ************************************ 00:05:27.660 05:57:35 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:27.919 * Looking for test storage... 
00:05:27.919 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/accel 00:05:27.919 05:57:36 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.919 05:57:36 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=46794 00:05:27.919 05:57:36 -- accel/accel_rpc.sh@15 -- # waitforlisten 46794 00:05:27.919 05:57:36 -- accel/accel_rpc.sh@13 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:27.919 05:57:36 -- common/autotest_common.sh@819 -- # '[' -z 46794 ']' 00:05:27.919 05:57:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.919 05:57:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:27.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.919 05:57:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.920 05:57:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:27.920 05:57:36 -- common/autotest_common.sh@10 -- # set +x 00:05:27.920 [2024-05-13 05:57:36.079308] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:27.920 [2024-05-13 05:57:36.079523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:28.489 EAL: TSC is not safe to use in SMP mode 00:05:28.489 EAL: TSC is not invariant 00:05:28.489 [2024-05-13 05:57:36.502084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.489 [2024-05-13 05:57:36.585983] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.489 [2024-05-13 05:57:36.586081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.749 05:57:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:28.749 05:57:36 -- common/autotest_common.sh@852 -- # return 0 00:05:28.749 05:57:36 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:28.749 05:57:36 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:28.749 05:57:36 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:28.749 05:57:36 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:28.749 05:57:36 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:28.749 05:57:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.749 05:57:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.749 05:57:36 -- common/autotest_common.sh@10 -- # set +x 00:05:28.749 ************************************ 00:05:28.749 START TEST accel_assign_opcode 00:05:28.749 ************************************ 00:05:28.749 05:57:36 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:05:28.749 05:57:36 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:28.749 05:57:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.749 05:57:36 -- common/autotest_common.sh@10 -- # set +x 00:05:28.749 [2024-05-13 05:57:36.994350] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:28.749 05:57:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.749 05:57:36 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:28.749 05:57:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.749 05:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:28.749 [2024-05-13 05:57:37.006347] 
accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:28.749 05:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.749 05:57:37 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:28.749 05:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.749 05:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:28.749 05:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:28.749 05:57:37 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:28.749 05:57:37 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:28.749 05:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:28.749 05:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:28.749 05:57:37 -- accel/accel_rpc.sh@42 -- # grep software 00:05:29.009 05:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:29.009 software 00:05:29.009 00:05:29.009 real 0m0.075s 00:05:29.009 user 0m0.009s 00:05:29.009 sys 0m0.020s 00:05:29.009 05:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.009 05:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:29.009 ************************************ 00:05:29.009 END TEST accel_assign_opcode 00:05:29.009 ************************************ 00:05:29.009 05:57:37 -- accel/accel_rpc.sh@55 -- # killprocess 46794 00:05:29.009 05:57:37 -- common/autotest_common.sh@926 -- # '[' -z 46794 ']' 00:05:29.009 05:57:37 -- common/autotest_common.sh@930 -- # kill -0 46794 00:05:29.009 05:57:37 -- common/autotest_common.sh@931 -- # uname 00:05:29.009 05:57:37 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:05:29.009 05:57:37 -- common/autotest_common.sh@934 -- # ps -c -o command 46794 00:05:29.009 05:57:37 -- common/autotest_common.sh@934 -- # tail -1 00:05:29.009 05:57:37 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:05:29.009 05:57:37 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:05:29.009 killing process with pid 46794 00:05:29.009 05:57:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46794' 00:05:29.009 05:57:37 -- common/autotest_common.sh@945 -- # kill 46794 00:05:29.009 05:57:37 -- common/autotest_common.sh@950 -- # wait 46794 00:05:29.267 00:05:29.267 real 0m1.413s 00:05:29.267 user 0m1.313s 00:05:29.267 sys 0m0.639s 00:05:29.267 05:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.268 ************************************ 00:05:29.268 END TEST accel_rpc 00:05:29.268 ************************************ 00:05:29.268 05:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:29.268 05:57:37 -- spdk/autotest.sh@191 -- # run_test app_cmdline /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:29.268 05:57:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.268 05:57:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.268 05:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:29.268 ************************************ 00:05:29.268 START TEST app_cmdline 00:05:29.268 ************************************ 00:05:29.268 05:57:37 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:29.268 * Looking for test storage... 
00:05:29.268 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:05:29.268 05:57:37 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:29.268 05:57:37 -- app/cmdline.sh@17 -- # spdk_tgt_pid=46867 00:05:29.268 05:57:37 -- app/cmdline.sh@18 -- # waitforlisten 46867 00:05:29.268 05:57:37 -- common/autotest_common.sh@819 -- # '[' -z 46867 ']' 00:05:29.268 05:57:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.268 05:57:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:29.268 05:57:37 -- app/cmdline.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:29.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.268 05:57:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.268 05:57:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:29.268 05:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:29.268 [2024-05-13 05:57:37.534773] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:29.268 [2024-05-13 05:57:37.535010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:29.836 EAL: TSC is not safe to use in SMP mode 00:05:29.836 EAL: TSC is not invariant 00:05:29.836 [2024-05-13 05:57:37.985374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.836 [2024-05-13 05:57:38.070177] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.836 [2024-05-13 05:57:38.070271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.403 05:57:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:30.403 05:57:38 -- common/autotest_common.sh@852 -- # return 0 00:05:30.403 05:57:38 -- app/cmdline.sh@20 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:30.403 { 00:05:30.403 "version": "SPDK v24.01.1-pre git sha1 36faa8c31", 00:05:30.403 "fields": { 00:05:30.403 "major": 24, 00:05:30.403 "minor": 1, 00:05:30.403 "patch": 1, 00:05:30.403 "suffix": "-pre", 00:05:30.403 "commit": "36faa8c31" 00:05:30.403 } 00:05:30.403 } 00:05:30.403 05:57:38 -- app/cmdline.sh@22 -- # expected_methods=() 00:05:30.403 05:57:38 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:30.403 05:57:38 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:30.403 05:57:38 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:30.403 05:57:38 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:30.403 05:57:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:30.403 05:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:30.403 05:57:38 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:30.403 05:57:38 -- app/cmdline.sh@26 -- # sort 00:05:30.403 05:57:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:30.403 05:57:38 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:30.403 05:57:38 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:30.403 05:57:38 -- app/cmdline.sh@30 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.403 05:57:38 -- common/autotest_common.sh@640 -- # local es=0 00:05:30.403 
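spdk_tgt was started above with --rpcs-allowed spdk_get_version,rpc_get_methods, and the sort / (( 2 == 2 )) checks confirm those are the only two methods exposed. The NOT wrapper whose trace follows inverts the exit status: it calls a method outside the allow-list and passes only if the call fails, which is the JSON-RPC -32601 "Method not found" response printed further down. Probed by hand against such a target (assuming the default RPC socket):

# The two allow-listed methods succeed:
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods
# Everything else is rejected with JSON-RPC error -32601:
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats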
05:57:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.403 05:57:38 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.403 05:57:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:30.403 05:57:38 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.403 05:57:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:30.403 05:57:38 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.403 05:57:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:30.403 05:57:38 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:30.403 05:57:38 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:30.403 05:57:38 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:30.661 request: 00:05:30.662 { 00:05:30.662 "method": "env_dpdk_get_mem_stats", 00:05:30.662 "req_id": 1 00:05:30.662 } 00:05:30.662 Got JSON-RPC error response 00:05:30.662 response: 00:05:30.662 { 00:05:30.662 "code": -32601, 00:05:30.662 "message": "Method not found" 00:05:30.662 } 00:05:30.662 05:57:38 -- common/autotest_common.sh@643 -- # es=1 00:05:30.662 05:57:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:30.662 05:57:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:30.662 05:57:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:30.662 05:57:38 -- app/cmdline.sh@1 -- # killprocess 46867 00:05:30.662 05:57:38 -- common/autotest_common.sh@926 -- # '[' -z 46867 ']' 00:05:30.662 05:57:38 -- common/autotest_common.sh@930 -- # kill -0 46867 00:05:30.662 05:57:38 -- common/autotest_common.sh@931 -- # uname 00:05:30.662 05:57:38 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:05:30.662 05:57:38 -- common/autotest_common.sh@934 -- # ps -c -o command 46867 00:05:30.662 05:57:38 -- common/autotest_common.sh@934 -- # tail -1 00:05:30.662 killing process with pid 46867 00:05:30.662 05:57:38 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:05:30.662 05:57:38 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:05:30.662 05:57:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46867' 00:05:30.662 05:57:38 -- common/autotest_common.sh@945 -- # kill 46867 00:05:30.662 05:57:38 -- common/autotest_common.sh@950 -- # wait 46867 00:05:30.920 00:05:30.920 real 0m1.614s 00:05:30.920 user 0m1.673s 00:05:30.920 sys 0m0.695s 00:05:30.920 05:57:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.920 05:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:30.920 ************************************ 00:05:30.920 END TEST app_cmdline 00:05:30.920 ************************************ 00:05:30.920 05:57:39 -- spdk/autotest.sh@192 -- # run_test version /usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:30.920 05:57:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.920 05:57:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.920 05:57:39 -- common/autotest_common.sh@10 -- # set +x 00:05:30.920 ************************************ 00:05:30.920 START TEST version 00:05:30.920 ************************************ 00:05:30.920 05:57:39 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:30.920 * Looking for test storage... 00:05:30.920 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/app 00:05:30.920 05:57:39 -- app/version.sh@17 -- # get_header_version major 00:05:30.920 05:57:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:30.920 05:57:39 -- app/version.sh@14 -- # cut -f2 00:05:30.920 05:57:39 -- app/version.sh@14 -- # tr -d '"' 00:05:30.920 05:57:39 -- app/version.sh@17 -- # major=24 00:05:30.920 05:57:39 -- app/version.sh@18 -- # get_header_version minor 00:05:30.920 05:57:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:30.920 05:57:39 -- app/version.sh@14 -- # cut -f2 00:05:30.920 05:57:39 -- app/version.sh@14 -- # tr -d '"' 00:05:30.920 05:57:39 -- app/version.sh@18 -- # minor=1 00:05:30.920 05:57:39 -- app/version.sh@19 -- # get_header_version patch 00:05:30.920 05:57:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:30.920 05:57:39 -- app/version.sh@14 -- # cut -f2 00:05:30.920 05:57:39 -- app/version.sh@14 -- # tr -d '"' 00:05:30.920 05:57:39 -- app/version.sh@19 -- # patch=1 00:05:31.179 05:57:39 -- app/version.sh@20 -- # get_header_version suffix 00:05:31.179 05:57:39 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /usr/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:31.179 05:57:39 -- app/version.sh@14 -- # cut -f2 00:05:31.179 05:57:39 -- app/version.sh@14 -- # tr -d '"' 00:05:31.179 05:57:39 -- app/version.sh@20 -- # suffix=-pre 00:05:31.179 05:57:39 -- app/version.sh@22 -- # version=24.1 00:05:31.179 05:57:39 -- app/version.sh@25 -- # (( patch != 0 )) 00:05:31.179 05:57:39 -- app/version.sh@25 -- # version=24.1.1 00:05:31.179 05:57:39 -- app/version.sh@28 -- # version=24.1.1rc0 00:05:31.179 05:57:39 -- app/version.sh@30 -- # PYTHONPATH=:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python:/usr/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/usr/home/vagrant/spdk_repo/spdk/python 00:05:31.179 05:57:39 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:31.179 05:57:39 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:05:31.179 05:57:39 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:05:31.179 00:05:31.179 real 0m0.257s 00:05:31.179 user 0m0.135s 00:05:31.179 sys 0m0.219s 00:05:31.179 05:57:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.179 05:57:39 -- common/autotest_common.sh@10 -- # set +x 00:05:31.179 ************************************ 00:05:31.179 END TEST version 00:05:31.179 ************************************ 00:05:31.179 05:57:39 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:05:31.179 05:57:39 -- spdk/autotest.sh@195 -- # run_test blockdev_general /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:31.179 05:57:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.179 05:57:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.179 05:57:39 -- common/autotest_common.sh@10 -- # set +x 00:05:31.179 ************************************ 00:05:31.179 START TEST blockdev_general 00:05:31.179 ************************************ 00:05:31.179 05:57:39 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:31.453 * Looking for test storage... 00:05:31.453 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:05:31.453 05:57:39 -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:31.453 05:57:39 -- bdev/nbd_common.sh@6 -- # set -e 00:05:31.453 05:57:39 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:31.453 05:57:39 -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:31.453 05:57:39 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:31.453 05:57:39 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:31.453 05:57:39 -- bdev/blockdev.sh@18 -- # : 00:05:31.453 05:57:39 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:05:31.453 05:57:39 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:05:31.453 05:57:39 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:05:31.453 05:57:39 -- bdev/blockdev.sh@672 -- # uname -s 00:05:31.453 05:57:39 -- bdev/blockdev.sh@672 -- # '[' FreeBSD = Linux ']' 00:05:31.453 05:57:39 -- bdev/blockdev.sh@677 -- # PRE_RESERVED_MEM=2048 00:05:31.453 05:57:39 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:05:31.453 05:57:39 -- bdev/blockdev.sh@681 -- # crypto_device= 00:05:31.453 05:57:39 -- bdev/blockdev.sh@682 -- # dek= 00:05:31.453 05:57:39 -- bdev/blockdev.sh@683 -- # env_ctx= 00:05:31.453 05:57:39 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:05:31.453 05:57:39 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:05:31.453 05:57:39 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:05:31.453 05:57:39 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:05:31.453 05:57:39 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:05:31.453 05:57:39 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=46992 00:05:31.453 05:57:39 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:31.453 05:57:39 -- bdev/blockdev.sh@47 -- # waitforlisten 46992 00:05:31.453 05:57:39 -- common/autotest_common.sh@819 -- # '[' -z 46992 ']' 00:05:31.453 05:57:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.453 05:57:39 -- bdev/blockdev.sh@44 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:05:31.453 05:57:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.453 05:57:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.453 05:57:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.453 05:57:39 -- common/autotest_common.sh@10 -- # set +x 00:05:31.453 [2024-05-13 05:57:39.517012] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:31.453 [2024-05-13 05:57:39.517228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:31.718 EAL: TSC is not safe to use in SMP mode 00:05:31.718 EAL: TSC is not invariant 00:05:31.718 [2024-05-13 05:57:39.938793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.977 [2024-05-13 05:57:40.023097] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.977 [2024-05-13 05:57:40.023191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.236 05:57:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.236 05:57:40 -- common/autotest_common.sh@852 -- # return 0 00:05:32.236 05:57:40 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:05:32.236 05:57:40 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:05:32.236 05:57:40 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:05:32.236 05:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.237 05:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:32.237 [2024-05-13 05:57:40.447157] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:32.237 [2024-05-13 05:57:40.447208] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:32.237 00:05:32.237 [2024-05-13 05:57:40.455150] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:32.237 [2024-05-13 05:57:40.455179] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:32.237 00:05:32.237 Malloc0 00:05:32.237 Malloc1 00:05:32.237 Malloc2 00:05:32.237 Malloc3 00:05:32.237 Malloc4 00:05:32.237 Malloc5 00:05:32.237 Malloc6 00:05:32.237 Malloc7 00:05:32.237 Malloc8 00:05:32.496 Malloc9 00:05:32.496 [2024-05-13 05:57:40.543149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:32.496 [2024-05-13 05:57:40.543194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:32.496 [2024-05-13 05:57:40.543220] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b99a700 00:05:32.496 [2024-05-13 05:57:40.543229] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:32.496 [2024-05-13 05:57:40.543509] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:32.496 [2024-05-13 05:57:40.543539] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:32.496 TestPT 00:05:32.496 05:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.496 05:57:40 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:05:32.496 5000+0 records in 00:05:32.496 5000+0 records out 00:05:32.496 10240000 bytes transferred in 0.029985 secs (341503254 bytes/sec) 00:05:32.496 05:57:40 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:05:32.496 05:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.496 05:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:32.496 AIO0 00:05:32.496 05:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.496 05:57:40 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:05:32.496 05:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.496 05:57:40 -- common/autotest_common.sh@10 -- # set +x 
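Two setup steps above are worth calling out: dd zero-fills a 10,240,000-byte file (5000 blocks of 2048 bytes), and bdev_aio_create registers it as the AIO0 bdev, next to the TestPT passthru vbdev that claimed Malloc3. A standalone rpc.py equivalent of the rpc_cmd trace, assuming a target is already listening, looks like:

# Create the backing file: 5000 x 2048-byte blocks = 10240000 bytes
dd if=/dev/zero of=/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
# Expose it as an AIO bdev named AIO0 with a 2048-byte block size
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
    /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048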
00:05:32.496 05:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.496 05:57:40 -- bdev/blockdev.sh@738 -- # cat 00:05:32.496 05:57:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:05:32.496 05:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.496 05:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:32.496 05:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.496 05:57:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:05:32.496 05:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.496 05:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:32.496 05:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.496 05:57:40 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:32.496 05:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.496 05:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:32.496 05:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.496 05:57:40 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:05:32.496 05:57:40 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:05:32.496 05:57:40 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:05:32.496 05:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.496 05:57:40 -- common/autotest_common.sh@10 -- # set +x 00:05:32.756 05:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.756 05:57:40 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:05:32.756 05:57:40 -- bdev/blockdev.sh@747 -- # jq -r .name 00:05:32.757 05:57:40 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "b52c0e99-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b52c0e99-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "3ab8d31d-e7de-d658-909a-fd2faf59b9aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3ab8d31d-e7de-d658-909a-fd2faf59b9aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "058c93da-8f46-c350-845a-e63d699eb66e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "058c93da-8f46-c350-845a-e63d699eb66e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "c39fdef8-60e1-775b-9aad-bd41a4062596"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c39fdef8-60e1-775b-9aad-bd41a4062596",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "b8ee273f-93bb-ac5f-8249-c263b3318f8d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b8ee273f-93bb-ac5f-8249-c263b3318f8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "863081fd-5228-c55d-a8f4-2cbe64bf5ad6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "863081fd-5228-c55d-a8f4-2cbe64bf5ad6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "4f658852-7401-f75f-aa05-6f749766442c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4f658852-7401-f75f-aa05-6f749766442c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "870171cc-4c2a-4754-b1c5-ae85b08fe8e4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "870171cc-4c2a-4754-b1c5-ae85b08fe8e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "12ceba97-c8a9-0b57-bfe5-c9fbc3b1311d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "12ceba97-c8a9-0b57-bfe5-c9fbc3b1311d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0efd95e8-0ff0-8c57-8314-c060ffc12669"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0efd95e8-0ff0-8c57-8314-c060ffc12669",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "6df0fa5a-2390-a758-83a1-ba72b3ead5a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6df0fa5a-2390-a758-83a1-ba72b3ead5a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "2858958f-7e28-f85f-889b-10eb8f5bb8d0"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2858958f-7e28-f85f-889b-10eb8f5bb8d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "b5398414-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b5398414-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b5398414-10ed-11ef-ba60-3508ead7bdda",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "b530effe-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "b5322877-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "b53ab45d-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b53ab45d-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b53ab45d-10ed-11ef-ba60-3508ead7bdda",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b53360c3-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "b534993d-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b53bec79-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b53bec79-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b53bec79-10ed-11ef-ba60-3508ead7bdda",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "b535d1c5-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b5370a37-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b54514ea-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b54514ea-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:05:32.757 05:57:40 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:05:32.757 05:57:40 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:05:32.757 05:57:40 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:05:32.757 05:57:40 -- bdev/blockdev.sh@752 -- # killprocess 46992 00:05:32.757 05:57:40 -- common/autotest_common.sh@926 -- # '[' -z 46992 ']' 00:05:32.757 05:57:40 -- common/autotest_common.sh@930 -- # kill -0 46992 00:05:32.757 05:57:40 -- common/autotest_common.sh@931 -- # uname 00:05:32.757 05:57:40 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:05:32.757 05:57:40 -- common/autotest_common.sh@934 -- # ps -c -o command 46992 00:05:32.757 05:57:40 -- common/autotest_common.sh@934 -- # tail -1 00:05:32.757 05:57:40 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:05:32.757 05:57:40 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:05:32.757 killing process with pid 46992 00:05:32.757 05:57:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46992' 00:05:32.757 05:57:40 -- common/autotest_common.sh@945 -- # kill 46992 00:05:32.757 05:57:40 -- common/autotest_common.sh@950 -- # wait 46992 00:05:33.017 05:57:41 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:33.017 05:57:41 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:33.017 
05:57:41 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:33.017 05:57:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.017 05:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:33.017 ************************************ 00:05:33.017 START TEST bdev_hello_world 00:05:33.017 ************************************ 00:05:33.017 05:57:41 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:33.017 [2024-05-13 05:57:41.119645] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:33.017 [2024-05-13 05:57:41.119998] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:33.276 EAL: TSC is not safe to use in SMP mode 00:05:33.276 EAL: TSC is not invariant 00:05:33.276 [2024-05-13 05:57:41.539838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.535 [2024-05-13 05:57:41.628600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.535 [2024-05-13 05:57:41.683821] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:33.535 [2024-05-13 05:57:41.683856] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:33.535 [2024-05-13 05:57:41.691812] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:33.535 [2024-05-13 05:57:41.691833] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:33.535 [2024-05-13 05:57:41.699825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:33.535 [2024-05-13 05:57:41.699847] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:33.535 [2024-05-13 05:57:41.699856] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:33.535 [2024-05-13 05:57:41.747827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:33.535 [2024-05-13 05:57:41.747861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.535 [2024-05-13 05:57:41.747876] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b5dc800 00:05:33.535 [2024-05-13 05:57:41.747884] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.535 [2024-05-13 05:57:41.748177] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.535 [2024-05-13 05:57:41.748203] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:33.793 [2024-05-13 05:57:41.849007] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:33.793 [2024-05-13 05:57:41.849077] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:05:33.794 [2024-05-13 05:57:41.849093] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:33.794 [2024-05-13 05:57:41.849108] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:33.794 [2024-05-13 05:57:41.849127] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:33.794 [2024-05-13 05:57:41.849144] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:33.794 [2024-05-13 05:57:41.849156] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello 
World! 00:05:33.794 00:05:33.794 [2024-05-13 05:57:41.849166] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:33.794 00:05:33.794 real 0m0.923s 00:05:33.794 user 0m0.443s 00:05:33.794 sys 0m0.482s 00:05:33.794 05:57:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.794 05:57:42 -- common/autotest_common.sh@10 -- # set +x 00:05:33.794 ************************************ 00:05:33.794 END TEST bdev_hello_world 00:05:33.794 ************************************ 00:05:33.794 05:57:42 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:05:33.794 05:57:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:33.794 05:57:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.794 05:57:42 -- common/autotest_common.sh@10 -- # set +x 00:05:33.794 ************************************ 00:05:33.794 START TEST bdev_bounds 00:05:33.794 ************************************ 00:05:33.794 05:57:42 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:05:33.794 05:57:42 -- bdev/blockdev.sh@288 -- # bdevio_pid=47032 00:05:33.794 05:57:42 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.794 05:57:42 -- bdev/blockdev.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:33.794 Process bdevio pid: 47032 00:05:33.794 05:57:42 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 47032' 00:05:33.794 05:57:42 -- bdev/blockdev.sh@291 -- # waitforlisten 47032 00:05:33.794 05:57:42 -- common/autotest_common.sh@819 -- # '[' -z 47032 ']' 00:05:33.794 05:57:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.794 05:57:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:33.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.794 05:57:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.794 05:57:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:33.794 05:57:42 -- common/autotest_common.sh@10 -- # set +x 00:05:33.794 [2024-05-13 05:57:42.094605] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:33.794 [2024-05-13 05:57:42.094845] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:05:34.360 EAL: TSC is not safe to use in SMP mode 00:05:34.360 EAL: TSC is not invariant 00:05:34.360 [2024-05-13 05:57:42.555805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.360 [2024-05-13 05:57:42.642858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.360 [2024-05-13 05:57:42.642774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.360 [2024-05-13 05:57:42.642861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.619 [2024-05-13 05:57:42.698523] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:34.619 [2024-05-13 05:57:42.698560] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:34.619 [2024-05-13 05:57:42.706512] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:34.619 [2024-05-13 05:57:42.706535] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:34.619 [2024-05-13 05:57:42.714523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:34.619 [2024-05-13 05:57:42.714546] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:34.619 [2024-05-13 05:57:42.714555] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:34.619 [2024-05-13 05:57:42.762529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:34.619 [2024-05-13 05:57:42.762561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:34.619 [2024-05-13 05:57:42.762592] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82d52e800 00:05:34.619 [2024-05-13 05:57:42.762602] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:34.619 [2024-05-13 05:57:42.762897] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:34.619 [2024-05-13 05:57:42.762924] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:34.878 05:57:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:34.878 05:57:42 -- common/autotest_common.sh@852 -- # return 0 00:05:34.878 05:57:42 -- bdev/blockdev.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:34.878 I/O targets: 00:05:34.878 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:05:34.878 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:05:34.878 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:05:34.878 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:05:34.878 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:05:34.878 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:05:34.878 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:05:34.878 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:05:34.878 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:05:34.878 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:05:34.878 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:05:34.878 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:05:34.878 raid0: 131072 blocks of 512 bytes (64 MiB) 00:05:34.878 concat0: 131072 blocks of 512 bytes (64 MiB) 00:05:34.878 raid1: 65536 blocks of 512 bytes (32 MiB) 00:05:34.878 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
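The I/O target list above is printed by bdevio once it has opened every bdev from the JSON config. The harness drives it in two steps: start bdevio in wait mode, then trigger all suites over the RPC socket. A minimal sketch of that flow, using the commands and paths visible in this run (in the harness, waitforlisten polls /var/tmp/spdk.sock before tests.py is called):

    # Start bdevio waiting for an RPC trigger (-w), with 2048 MiB of memory (-s)
    # and the bdev layout taken from the generated JSON config.
    SPDK=/usr/home/vagrant/spdk_repo/spdk
    $SPDK/test/bdev/bdevio/bdevio -w -s 2048 --json $SPDK/test/bdev/bdev.json &
    bdevio_pid=$!
    # ...wait until /var/tmp/spdk.sock is listening, then run every suite:
    $SPDK/test/bdev/bdevio/tests.py perform_tests
    kill $bdevio_pid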
00:05:34.878 00:05:34.878 00:05:34.878 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.878 http://cunit.sourceforge.net/ 00:05:34.878 00:05:34.878 00:05:34.878 Suite: bdevio tests on: AIO0 00:05:34.878 Test: blockdev write read block ...passed 00:05:34.878 Test: blockdev write zeroes read block ...passed 00:05:34.878 Test: blockdev write zeroes read no split ...passed 00:05:34.878 Test: blockdev write zeroes read split ...passed 00:05:34.878 Test: blockdev write zeroes read split partial ...passed 00:05:34.878 Test: blockdev reset ...passed 00:05:34.878 Test: blockdev write read 8 blocks ...passed 00:05:34.878 Test: blockdev write read size > 128k ...passed 00:05:34.878 Test: blockdev write read invalid size ...passed 00:05:34.878 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:34.878 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:34.878 Test: blockdev write read max offset ...passed 00:05:34.878 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:34.878 Test: blockdev writev readv 8 blocks ...passed 00:05:34.878 Test: blockdev writev readv 30 x 1block ...passed 00:05:34.878 Test: blockdev writev readv block ...passed 00:05:34.878 Test: blockdev writev readv size > 128k ...passed 00:05:34.878 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:34.878 Test: blockdev comparev and writev ...passed 00:05:34.878 Test: blockdev nvme passthru rw ...passed 00:05:34.878 Test: blockdev nvme passthru vendor specific ...passed 00:05:34.878 Test: blockdev nvme admin passthru ...passed 00:05:34.878 Test: blockdev copy ...passed 00:05:34.878 Suite: bdevio tests on: raid1 00:05:34.878 Test: blockdev write read block ...passed 00:05:34.878 Test: blockdev write zeroes read block ...passed 00:05:34.878 Test: blockdev write zeroes read no split ...passed 00:05:34.878 Test: blockdev write zeroes read split ...passed 00:05:34.878 Test: blockdev write zeroes read split partial ...passed 00:05:34.878 Test: blockdev reset ...passed 00:05:34.878 Test: blockdev write read 8 blocks ...passed 00:05:34.878 Test: blockdev write read size > 128k ...passed 00:05:34.878 Test: blockdev write read invalid size ...passed 00:05:34.878 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:34.878 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:34.878 Test: blockdev write read max offset ...passed 00:05:34.878 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:34.878 Test: blockdev writev readv 8 blocks ...passed 00:05:34.878 Test: blockdev writev readv 30 x 1block ...passed 00:05:34.878 Test: blockdev writev readv block ...passed 00:05:34.878 Test: blockdev writev readv size > 128k ...passed 00:05:34.878 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:34.878 Test: blockdev comparev and writev ...passed 00:05:34.878 Test: blockdev nvme passthru rw ...passed 00:05:34.878 Test: blockdev nvme passthru vendor specific ...passed 00:05:34.878 Test: blockdev nvme admin passthru ...passed 00:05:34.878 Test: blockdev copy ...passed 00:05:34.878 Suite: bdevio tests on: concat0 00:05:34.878 Test: blockdev write read block ...passed 00:05:34.878 Test: blockdev write zeroes read block ...passed 00:05:34.878 Test: blockdev write zeroes read no split ...passed 00:05:34.878 Test: blockdev write zeroes read split ...passed 00:05:34.878 Test: blockdev write zeroes read split partial ...passed 00:05:34.878 Test: blockdev reset 
...passed 00:05:34.878 Test: blockdev write read 8 blocks ...passed 00:05:34.878 Test: blockdev write read size > 128k ...passed 00:05:34.878 Test: blockdev write read invalid size ...passed 00:05:34.878 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:34.878 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:34.878 Test: blockdev write read max offset ...passed 00:05:34.878 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:34.878 Test: blockdev writev readv 8 blocks ...passed 00:05:34.878 Test: blockdev writev readv 30 x 1block ...passed 00:05:34.878 Test: blockdev writev readv block ...passed 00:05:34.878 Test: blockdev writev readv size > 128k ...passed 00:05:34.878 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:34.878 Test: blockdev comparev and writev ...passed 00:05:34.878 Test: blockdev nvme passthru rw ...passed 00:05:34.878 Test: blockdev nvme passthru vendor specific ...passed 00:05:34.878 Test: blockdev nvme admin passthru ...passed 00:05:34.878 Test: blockdev copy ...passed 00:05:34.878 Suite: bdevio tests on: raid0 00:05:34.878 Test: blockdev write read block ...passed 00:05:34.878 Test: blockdev write zeroes read block ...passed 00:05:34.878 Test: blockdev write zeroes read no split ...passed 00:05:34.878 Test: blockdev write zeroes read split ...passed 00:05:34.878 Test: blockdev write zeroes read split partial ...passed 00:05:34.878 Test: blockdev reset ...passed 00:05:34.878 Test: blockdev write read 8 blocks ...passed 00:05:34.878 Test: blockdev write read size > 128k ...passed 00:05:34.878 Test: blockdev write read invalid size ...passed 00:05:34.878 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:34.878 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:34.878 Test: blockdev write read max offset ...passed 00:05:34.878 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:34.878 Test: blockdev writev readv 8 blocks ...passed 00:05:34.878 Test: blockdev writev readv 30 x 1block ...passed 00:05:34.878 Test: blockdev writev readv block ...passed 00:05:34.878 Test: blockdev writev readv size > 128k ...passed 00:05:34.878 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:34.878 Test: blockdev comparev and writev ...passed 00:05:34.878 Test: blockdev nvme passthru rw ...passed 00:05:34.878 Test: blockdev nvme passthru vendor specific ...passed 00:05:34.878 Test: blockdev nvme admin passthru ...passed 00:05:34.878 Test: blockdev copy ...passed 00:05:34.878 Suite: bdevio tests on: TestPT 00:05:34.879 Test: blockdev write read block ...passed 00:05:34.879 Test: blockdev write zeroes read block ...passed 00:05:34.879 Test: blockdev write zeroes read no split ...passed 00:05:34.879 Test: blockdev write zeroes read split ...passed 00:05:35.139 Test: blockdev write zeroes read split partial ...passed 00:05:35.139 Test: blockdev reset ...passed 00:05:35.139 Test: blockdev write read 8 blocks ...passed 00:05:35.139 Test: blockdev write read size > 128k ...passed 00:05:35.139 Test: blockdev write read invalid size ...passed 00:05:35.139 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.139 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.139 Test: blockdev write read max offset ...passed 00:05:35.139 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.139 Test: blockdev writev readv 8 blocks 
...passed 00:05:35.139 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.139 Test: blockdev writev readv block ...passed 00:05:35.139 Test: blockdev writev readv size > 128k ...passed 00:05:35.139 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.139 Test: blockdev comparev and writev ...passed 00:05:35.139 Test: blockdev nvme passthru rw ...passed 00:05:35.139 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.139 Test: blockdev nvme admin passthru ...passed 00:05:35.139 Test: blockdev copy ...passed 00:05:35.139 Suite: bdevio tests on: Malloc2p7 00:05:35.139 Test: blockdev write read block ...passed 00:05:35.139 Test: blockdev write zeroes read block ...passed 00:05:35.139 Test: blockdev write zeroes read no split ...passed 00:05:35.139 Test: blockdev write zeroes read split ...passed 00:05:35.139 Test: blockdev write zeroes read split partial ...passed 00:05:35.139 Test: blockdev reset ...passed 00:05:35.139 Test: blockdev write read 8 blocks ...passed 00:05:35.139 Test: blockdev write read size > 128k ...passed 00:05:35.139 Test: blockdev write read invalid size ...passed 00:05:35.139 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.139 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.139 Test: blockdev write read max offset ...passed 00:05:35.139 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.139 Test: blockdev writev readv 8 blocks ...passed 00:05:35.139 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.139 Test: blockdev writev readv block ...passed 00:05:35.139 Test: blockdev writev readv size > 128k ...passed 00:05:35.139 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.139 Test: blockdev comparev and writev ...passed 00:05:35.139 Test: blockdev nvme passthru rw ...passed 00:05:35.139 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.139 Test: blockdev nvme admin passthru ...passed 00:05:35.139 Test: blockdev copy ...passed 00:05:35.139 Suite: bdevio tests on: Malloc2p6 00:05:35.139 Test: blockdev write read block ...passed 00:05:35.139 Test: blockdev write zeroes read block ...passed 00:05:35.139 Test: blockdev write zeroes read no split ...passed 00:05:35.139 Test: blockdev write zeroes read split ...passed 00:05:35.139 Test: blockdev write zeroes read split partial ...passed 00:05:35.139 Test: blockdev reset ...passed 00:05:35.139 Test: blockdev write read 8 blocks ...passed 00:05:35.139 Test: blockdev write read size > 128k ...passed 00:05:35.139 Test: blockdev write read invalid size ...passed 00:05:35.139 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.139 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.139 Test: blockdev write read max offset ...passed 00:05:35.139 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.139 Test: blockdev writev readv 8 blocks ...passed 00:05:35.139 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.139 Test: blockdev writev readv block ...passed 00:05:35.139 Test: blockdev writev readv size > 128k ...passed 00:05:35.139 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.139 Test: blockdev comparev and writev ...passed 00:05:35.139 Test: blockdev nvme passthru rw ...passed 00:05:35.139 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.139 Test: blockdev nvme admin passthru ...passed 00:05:35.139 Test: blockdev copy ...passed 
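Each suite below runs the same test matrix against one bdev; which cases can actually assert anything depends on the supported_io_types block that bdev advertised in the JSON dump earlier (the raid volumes, for instance, report "abort": false). A hypothetical spot-check of one bdev's capabilities while the target is up, assuming the default /var/tmp/spdk.sock socket and the repo path used in this run:

    # Hypothetical: inspect the advertised capabilities of a single bdev.
    SPDK=/usr/home/vagrant/spdk_repo/spdk
    $SPDK/scripts/rpc.py bdev_get_bdevs -b Malloc2p5 \
        | python3 -c 'import json,sys; print(json.load(sys.stdin)[0]["supported_io_types"])'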
00:05:35.139 Suite: bdevio tests on: Malloc2p5 00:05:35.139 Test: blockdev write read block ...passed 00:05:35.139 Test: blockdev write zeroes read block ...passed 00:05:35.139 Test: blockdev write zeroes read no split ...passed 00:05:35.139 Test: blockdev write zeroes read split ...passed 00:05:35.139 Test: blockdev write zeroes read split partial ...passed 00:05:35.139 Test: blockdev reset ...passed 00:05:35.139 Test: blockdev write read 8 blocks ...passed 00:05:35.139 Test: blockdev write read size > 128k ...passed 00:05:35.139 Test: blockdev write read invalid size ...passed 00:05:35.139 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.139 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.139 Test: blockdev write read max offset ...passed 00:05:35.139 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.139 Test: blockdev writev readv 8 blocks ...passed 00:05:35.139 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.139 Test: blockdev writev readv block ...passed 00:05:35.139 Test: blockdev writev readv size > 128k ...passed 00:05:35.139 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.139 Test: blockdev comparev and writev ...passed 00:05:35.139 Test: blockdev nvme passthru rw ...passed 00:05:35.139 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.139 Test: blockdev nvme admin passthru ...passed 00:05:35.139 Test: blockdev copy ...passed 00:05:35.139 Suite: bdevio tests on: Malloc2p4 00:05:35.139 Test: blockdev write read block ...passed 00:05:35.139 Test: blockdev write zeroes read block ...passed 00:05:35.139 Test: blockdev write zeroes read no split ...passed 00:05:35.139 Test: blockdev write zeroes read split ...passed 00:05:35.139 Test: blockdev write zeroes read split partial ...passed 00:05:35.139 Test: blockdev reset ...passed 00:05:35.139 Test: blockdev write read 8 blocks ...passed 00:05:35.139 Test: blockdev write read size > 128k ...passed 00:05:35.139 Test: blockdev write read invalid size ...passed 00:05:35.139 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.139 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.139 Test: blockdev write read max offset ...passed 00:05:35.139 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.139 Test: blockdev writev readv 8 blocks ...passed 00:05:35.139 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.139 Test: blockdev writev readv block ...passed 00:05:35.139 Test: blockdev writev readv size > 128k ...passed 00:05:35.139 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.139 Test: blockdev comparev and writev ...passed 00:05:35.139 Test: blockdev nvme passthru rw ...passed 00:05:35.139 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.139 Test: blockdev nvme admin passthru ...passed 00:05:35.139 Test: blockdev copy ...passed 00:05:35.139 Suite: bdevio tests on: Malloc2p3 00:05:35.139 Test: blockdev write read block ...passed 00:05:35.139 Test: blockdev write zeroes read block ...passed 00:05:35.139 Test: blockdev write zeroes read no split ...passed 00:05:35.139 Test: blockdev write zeroes read split ...passed 00:05:35.139 Test: blockdev write zeroes read split partial ...passed 00:05:35.139 Test: blockdev reset ...passed 00:05:35.139 Test: blockdev write read 8 blocks ...passed 00:05:35.139 Test: blockdev write read size > 128k ...passed 00:05:35.139 Test: 
blockdev write read invalid size ...passed 00:05:35.139 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.139 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.139 Test: blockdev write read max offset ...passed 00:05:35.139 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.139 Test: blockdev writev readv 8 blocks ...passed 00:05:35.139 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.139 Test: blockdev writev readv block ...passed 00:05:35.139 Test: blockdev writev readv size > 128k ...passed 00:05:35.139 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.139 Test: blockdev comparev and writev ...passed 00:05:35.139 Test: blockdev nvme passthru rw ...passed 00:05:35.139 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.139 Test: blockdev nvme admin passthru ...passed 00:05:35.139 Test: blockdev copy ...passed 00:05:35.139 Suite: bdevio tests on: Malloc2p2 00:05:35.139 Test: blockdev write read block ...passed 00:05:35.139 Test: blockdev write zeroes read block ...passed 00:05:35.139 Test: blockdev write zeroes read no split ...passed 00:05:35.139 Test: blockdev write zeroes read split ...passed 00:05:35.139 Test: blockdev write zeroes read split partial ...passed 00:05:35.139 Test: blockdev reset ...passed 00:05:35.139 Test: blockdev write read 8 blocks ...passed 00:05:35.139 Test: blockdev write read size > 128k ...passed 00:05:35.139 Test: blockdev write read invalid size ...passed 00:05:35.139 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.139 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.139 Test: blockdev write read max offset ...passed 00:05:35.139 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.139 Test: blockdev writev readv 8 blocks ...passed 00:05:35.139 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.139 Test: blockdev writev readv block ...passed 00:05:35.139 Test: blockdev writev readv size > 128k ...passed 00:05:35.139 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.139 Test: blockdev comparev and writev ...passed 00:05:35.139 Test: blockdev nvme passthru rw ...passed 00:05:35.139 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.139 Test: blockdev nvme admin passthru ...passed 00:05:35.139 Test: blockdev copy ...passed 00:05:35.139 Suite: bdevio tests on: Malloc2p1 00:05:35.139 Test: blockdev write read block ...passed 00:05:35.139 Test: blockdev write zeroes read block ...passed 00:05:35.139 Test: blockdev write zeroes read no split ...passed 00:05:35.139 Test: blockdev write zeroes read split ...passed 00:05:35.139 Test: blockdev write zeroes read split partial ...passed 00:05:35.139 Test: blockdev reset ...passed 00:05:35.139 Test: blockdev write read 8 blocks ...passed 00:05:35.140 Test: blockdev write read size > 128k ...passed 00:05:35.140 Test: blockdev write read invalid size ...passed 00:05:35.140 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.140 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.140 Test: blockdev write read max offset ...passed 00:05:35.140 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.140 Test: blockdev writev readv 8 blocks ...passed 00:05:35.140 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.140 Test: blockdev writev readv block ...passed 
00:05:35.140 Test: blockdev writev readv size > 128k ...passed 00:05:35.140 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.140 Test: blockdev comparev and writev ...passed 00:05:35.140 Test: blockdev nvme passthru rw ...passed 00:05:35.140 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.140 Test: blockdev nvme admin passthru ...passed 00:05:35.140 Test: blockdev copy ...passed 00:05:35.140 Suite: bdevio tests on: Malloc2p0 00:05:35.140 Test: blockdev write read block ...passed 00:05:35.140 Test: blockdev write zeroes read block ...passed 00:05:35.140 Test: blockdev write zeroes read no split ...passed 00:05:35.140 Test: blockdev write zeroes read split ...passed 00:05:35.140 Test: blockdev write zeroes read split partial ...passed 00:05:35.140 Test: blockdev reset ...passed 00:05:35.140 Test: blockdev write read 8 blocks ...passed 00:05:35.140 Test: blockdev write read size > 128k ...passed 00:05:35.140 Test: blockdev write read invalid size ...passed 00:05:35.140 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.140 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.140 Test: blockdev write read max offset ...passed 00:05:35.140 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.140 Test: blockdev writev readv 8 blocks ...passed 00:05:35.140 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.140 Test: blockdev writev readv block ...passed 00:05:35.140 Test: blockdev writev readv size > 128k ...passed 00:05:35.140 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.140 Test: blockdev comparev and writev ...passed 00:05:35.140 Test: blockdev nvme passthru rw ...passed 00:05:35.140 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.140 Test: blockdev nvme admin passthru ...passed 00:05:35.140 Test: blockdev copy ...passed 00:05:35.140 Suite: bdevio tests on: Malloc1p1 00:05:35.140 Test: blockdev write read block ...passed 00:05:35.140 Test: blockdev write zeroes read block ...passed 00:05:35.140 Test: blockdev write zeroes read no split ...passed 00:05:35.140 Test: blockdev write zeroes read split ...passed 00:05:35.140 Test: blockdev write zeroes read split partial ...passed 00:05:35.140 Test: blockdev reset ...passed 00:05:35.140 Test: blockdev write read 8 blocks ...passed 00:05:35.140 Test: blockdev write read size > 128k ...passed 00:05:35.140 Test: blockdev write read invalid size ...passed 00:05:35.140 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.140 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.140 Test: blockdev write read max offset ...passed 00:05:35.140 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.140 Test: blockdev writev readv 8 blocks ...passed 00:05:35.140 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.140 Test: blockdev writev readv block ...passed 00:05:35.140 Test: blockdev writev readv size > 128k ...passed 00:05:35.140 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.140 Test: blockdev comparev and writev ...passed 00:05:35.140 Test: blockdev nvme passthru rw ...passed 00:05:35.140 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.140 Test: blockdev nvme admin passthru ...passed 00:05:35.140 Test: blockdev copy ...passed 00:05:35.140 Suite: bdevio tests on: Malloc1p0 00:05:35.140 Test: blockdev write read block ...passed 00:05:35.140 Test: blockdev 
write zeroes read block ...passed 00:05:35.140 Test: blockdev write zeroes read no split ...passed 00:05:35.140 Test: blockdev write zeroes read split ...passed 00:05:35.140 Test: blockdev write zeroes read split partial ...passed 00:05:35.140 Test: blockdev reset ...passed 00:05:35.140 Test: blockdev write read 8 blocks ...passed 00:05:35.140 Test: blockdev write read size > 128k ...passed 00:05:35.140 Test: blockdev write read invalid size ...passed 00:05:35.140 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.140 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.140 Test: blockdev write read max offset ...passed 00:05:35.140 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.140 Test: blockdev writev readv 8 blocks ...passed 00:05:35.140 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.140 Test: blockdev writev readv block ...passed 00:05:35.140 Test: blockdev writev readv size > 128k ...passed 00:05:35.140 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.140 Test: blockdev comparev and writev ...passed 00:05:35.140 Test: blockdev nvme passthru rw ...passed 00:05:35.140 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.140 Test: blockdev nvme admin passthru ...passed 00:05:35.140 Test: blockdev copy ...passed 00:05:35.140 Suite: bdevio tests on: Malloc0 00:05:35.140 Test: blockdev write read block ...passed 00:05:35.140 Test: blockdev write zeroes read block ...passed 00:05:35.140 Test: blockdev write zeroes read no split ...passed 00:05:35.140 Test: blockdev write zeroes read split ...passed 00:05:35.140 Test: blockdev write zeroes read split partial ...passed 00:05:35.140 Test: blockdev reset ...passed 00:05:35.140 Test: blockdev write read 8 blocks ...passed 00:05:35.140 Test: blockdev write read size > 128k ...passed 00:05:35.140 Test: blockdev write read invalid size ...passed 00:05:35.140 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:35.140 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:35.140 Test: blockdev write read max offset ...passed 00:05:35.140 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:35.140 Test: blockdev writev readv 8 blocks ...passed 00:05:35.140 Test: blockdev writev readv 30 x 1block ...passed 00:05:35.140 Test: blockdev writev readv block ...passed 00:05:35.140 Test: blockdev writev readv size > 128k ...passed 00:05:35.140 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:35.140 Test: blockdev comparev and writev ...passed 00:05:35.140 Test: blockdev nvme passthru rw ...passed 00:05:35.140 Test: blockdev nvme passthru vendor specific ...passed 00:05:35.140 Test: blockdev nvme admin passthru ...passed 00:05:35.140 Test: blockdev copy ...passed 00:05:35.140 00:05:35.140 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.140 suites 16 16 n/a 0 0 00:05:35.140 tests 368 368 368 0 0 00:05:35.140 asserts 2224 2224 2224 0 n/a 00:05:35.140 00:05:35.140 Elapsed time = 0.555 seconds 00:05:35.140 0 00:05:35.140 05:57:43 -- bdev/blockdev.sh@293 -- # killprocess 47032 00:05:35.140 05:57:43 -- common/autotest_common.sh@926 -- # '[' -z 47032 ']' 00:05:35.140 05:57:43 -- common/autotest_common.sh@930 -- # kill -0 47032 00:05:35.140 05:57:43 -- common/autotest_common.sh@931 -- # uname 00:05:35.140 05:57:43 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:05:35.140 05:57:43 -- common/autotest_common.sh@934 
-- # ps -c -o command 47032 00:05:35.140 05:57:43 -- common/autotest_common.sh@934 -- # tail -1 00:05:35.140 05:57:43 -- common/autotest_common.sh@934 -- # process_name=bdevio 00:05:35.140 05:57:43 -- common/autotest_common.sh@936 -- # '[' bdevio = sudo ']' 00:05:35.140 killing process with pid 47032 00:05:35.140 05:57:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47032' 00:05:35.140 05:57:43 -- common/autotest_common.sh@945 -- # kill 47032 00:05:35.140 05:57:43 -- common/autotest_common.sh@950 -- # wait 47032 00:05:35.400 05:57:43 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:05:35.400 00:05:35.400 real 0m1.450s 00:05:35.400 user 0m2.659s 00:05:35.400 sys 0m0.679s 00:05:35.400 05:57:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.400 05:57:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.400 ************************************ 00:05:35.400 END TEST bdev_bounds 00:05:35.400 ************************************ 00:05:35.400 05:57:43 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:35.400 05:57:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:35.400 05:57:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.400 05:57:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.400 ************************************ 00:05:35.400 START TEST bdev_nbd 00:05:35.400 ************************************ 00:05:35.400 05:57:43 -- common/autotest_common.sh@1104 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:35.400 05:57:43 -- bdev/blockdev.sh@298 -- # uname -s 00:05:35.400 05:57:43 -- bdev/blockdev.sh@298 -- # [[ FreeBSD == Linux ]] 00:05:35.400 05:57:43 -- bdev/blockdev.sh@298 -- # return 0 00:05:35.400 00:05:35.400 real 0m0.006s 00:05:35.400 user 0m0.001s 00:05:35.400 sys 0m0.007s 00:05:35.400 05:57:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.400 05:57:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.400 ************************************ 00:05:35.400 END TEST bdev_nbd 00:05:35.400 ************************************ 00:05:35.400 05:57:43 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:05:35.400 05:57:43 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:05:35.400 05:57:43 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:05:35.400 05:57:43 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:05:35.400 05:57:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:35.400 05:57:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.400 05:57:43 -- common/autotest_common.sh@10 -- # set +x 00:05:35.400 ************************************ 00:05:35.400 START TEST bdev_fio 00:05:35.400 ************************************ 00:05:35.400 05:57:43 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:05:35.400 05:57:43 -- bdev/blockdev.sh@329 -- # local env_context 00:05:35.400 05:57:43 -- bdev/blockdev.sh@333 -- # pushd /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:05:35.400 /usr/home/vagrant/spdk_repo/spdk/test/bdev /usr/home/vagrant/spdk_repo/spdk 00:05:35.400 05:57:43 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 
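fio_test_suite assembles test/bdev/bdev.fio from scratch: fio_config_gen writes the [global] verify workload, serialize_overlap=1 is appended once a fio 3.x binary is detected (fio-3.35 here), and the loop that follows adds one [job_*] section per bdev before the whole file is run through the spdk_bdev ioengine. A condensed sketch of that assembly, with all flags and section contents taken from the commands echoed below (the [global] options written by fio_config_gen are omitted here):

    SPDK=/usr/home/vagrant/spdk_repo/spdk
    FIO_FILE=$SPDK/test/bdev/bdev.fio
    echo serialize_overlap=1 >> "$FIO_FILE"    # emitted when fio 3.x is detected
    for b in Malloc0 Malloc1p0 Malloc1p1; do   # ...and likewise for all 16 bdevs
        printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$FIO_FILE"
    done
    LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$FIO_FILE" \
        --verify_state_save=0 --spdk_json_conf=$SPDK/test/bdev/bdev.json \
        --spdk_mem=2048 --aux-path=$SPDK/../output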
00:05:35.400 05:57:43 -- bdev/blockdev.sh@337 -- # echo '' 00:05:35.400 05:57:43 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:05:35.400 05:57:43 -- bdev/blockdev.sh@337 -- # env_context= 00:05:35.400 05:57:43 -- bdev/blockdev.sh@338 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:05:35.400 05:57:43 -- common/autotest_common.sh@1259 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:35.400 05:57:43 -- common/autotest_common.sh@1260 -- # local workload=verify 00:05:35.400 05:57:43 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:05:35.400 05:57:43 -- common/autotest_common.sh@1262 -- # local env_context= 00:05:35.400 05:57:43 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:05:35.400 05:57:43 -- common/autotest_common.sh@1265 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:35.400 05:57:43 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:05:35.400 05:57:43 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:05:35.400 05:57:43 -- common/autotest_common.sh@1278 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:35.400 05:57:43 -- common/autotest_common.sh@1280 -- # cat 00:05:35.400 05:57:43 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:05:35.400 05:57:43 -- common/autotest_common.sh@1293 -- # cat 00:05:35.400 05:57:43 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:05:35.400 05:57:43 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:05:35.967 05:57:44 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:05:35.967 05:57:44 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:05:35.967 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.967 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:05:35.967 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:05:35.967 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.967 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:05:35.967 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:05:35.967 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.967 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:05:35.967 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:05:35.967 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.967 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:05:35.967 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:05:35.967 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.967 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:05:35.967 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:05:35.967 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.967 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:05:35.967 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:05:35.967 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.967 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:05:35.967 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:05:35.967 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.967 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:05:35.967 05:57:44 -- bdev/blockdev.sh@341 -- # echo 
filename=Malloc2p4 00:05:35.967 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.967 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:05:35.968 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:05:35.968 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.968 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:05:35.968 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:05:35.968 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.968 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:05:35.968 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:05:35.968 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.968 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:05:35.968 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:05:35.968 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.968 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:05:35.968 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:05:35.968 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.968 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:05:35.968 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:05:35.968 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.968 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:05:35.968 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:05:35.968 05:57:44 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:05:35.968 05:57:44 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:05:35.968 05:57:44 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:05:35.968 05:57:44 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:05:35.968 05:57:44 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:35.968 05:57:44 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:05:35.968 05:57:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.968 05:57:44 -- common/autotest_common.sh@10 -- # set +x 00:05:35.968 ************************************ 00:05:35.968 START TEST bdev_fio_rw_verify 00:05:35.968 ************************************ 00:05:35.968 05:57:44 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:35.968 05:57:44 -- common/autotest_common.sh@1335 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:35.968 05:57:44 -- 
common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:05:35.968 05:57:44 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:35.968 05:57:44 -- common/autotest_common.sh@1318 -- # local sanitizers 00:05:35.968 05:57:44 -- common/autotest_common.sh@1319 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:35.968 05:57:44 -- common/autotest_common.sh@1320 -- # shift 00:05:35.968 05:57:44 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:05:35.968 05:57:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:05:35.968 05:57:44 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:35.968 05:57:44 -- common/autotest_common.sh@1324 -- # grep libasan 00:05:35.968 05:57:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:05:35.968 05:57:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:05:35.968 05:57:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:05:35.968 05:57:44 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:05:35.968 05:57:44 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:35.968 05:57:44 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:05:35.968 05:57:44 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:05:35.968 05:57:44 -- common/autotest_common.sh@1324 -- # asan_lib= 00:05:35.968 05:57:44 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:05:35.968 05:57:44 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:05:35.968 05:57:44 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output 00:05:35.968 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.968 fio-3.35 00:05:35.968 Starting 16 threads 00:05:36.536 EAL: TSC is not safe to use in SMP mode 00:05:36.536 EAL: TSC is not invariant 00:05:48.768 00:05:48.768 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=102663: Mon May 13 05:57:55 2024 00:05:48.768 read: IOPS=312k, BW=1220MiB/s (1280MB/s)(11.9GiB/10002msec) 00:05:48.768 slat (nsec): min=203, max=1165.6M, avg=3076.66, stdev=750878.80 00:05:48.768 clat (nsec): min=611, max=1167.3M, avg=40659.63, stdev=2199066.29 00:05:48.768 lat (nsec): min=1429, max=1167.3M, avg=43736.29, stdev=2323731.52 00:05:48.768 clat percentiles (usec): 00:05:48.768 | 50.000th=[ 7], 99.000th=[ 783], 99.900th=[ 865], 99.990th=[94897], 00:05:48.768 | 99.999th=[94897] 00:05:48.768 write: IOPS=525k, BW=2050MiB/s (2150MB/s)(20.0GiB/10002msec); 0 zone resets 00:05:48.768 slat (nsec): min=418, max=3900.6M, avg=16166.56, stdev=1850668.80 00:05:48.768 clat (nsec): min=609, max=3902.3M, avg=78860.08, stdev=4809352.02 00:05:48.768 lat (usec): min=9, max=3902.4k, avg=95.03, stdev=5153.26 00:05:48.768 clat percentiles (usec): 00:05:48.768 | 50.000th=[ 39], 99.000th=[ 277], 99.900th=[ 873], 00:05:48.768 | 99.990th=[ 94897], 99.999th=[117965] 00:05:48.768 bw ( MiB/s): min= 694, max= 3396, per=100.00%, avg=2065.72, stdev=54.94, samples=290 00:05:48.768 iops : min=177848, max=869513, avg=528818.95, stdev=14063.66, samples=290 00:05:48.768 lat (nsec) : 750=0.01%, 1000=0.01% 00:05:48.768 lat (usec) : 2=0.47%, 4=15.80%, 10=20.58%, 20=15.65%, 50=27.03% 00:05:48.768 lat (usec) : 100=19.00%, 250=0.30%, 500=0.04%, 750=0.10%, 1000=0.96% 00:05:48.768 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:05:48.768 lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%, 2000=0.01% 00:05:48.768 lat (msec) : >=2000=0.01% 00:05:48.768 cpu : usr=56.68%, sys=3.08%, ctx=1219984, majf=0, minf=709 00:05:48.768 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:05:48.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:48.768 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:48.768 issued rwts: total=3124897,5249708,0,0 short=0,0,0,0 dropped=0,0,0,0 00:05:48.768 latency : target=0, window=0, percentile=100.00%, depth=8 00:05:48.768 00:05:48.768 Run status group 0 (all jobs): 00:05:48.768 READ: bw=1220MiB/s (1280MB/s), 1220MiB/s-1220MiB/s (1280MB/s-1280MB/s), io=11.9GiB (12.8GB), run=10002-10002msec 00:05:48.768 WRITE: bw=2050MiB/s (2150MB/s), 2050MiB/s-2050MiB/s (2150MB/s-2150MB/s), io=20.0GiB (21.5GB), run=10002-10002msec 00:05:48.768 00:05:48.768 real 0m11.836s 00:05:48.768 user 1m34.011s 00:05:48.768 sys 0m8.344s 00:05:48.768 05:57:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.768 05:57:55 -- common/autotest_common.sh@10 -- # set +x 00:05:48.768 ************************************ 00:05:48.768 END TEST bdev_fio_rw_verify 00:05:48.768 ************************************ 00:05:48.768 05:57:55 -- 
bdev/blockdev.sh@348 -- # rm -f 00:05:48.768 05:57:55 -- bdev/blockdev.sh@349 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:48.768 05:57:55 -- bdev/blockdev.sh@352 -- # fio_config_gen /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:05:48.768 05:57:55 -- common/autotest_common.sh@1259 -- # local config_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:48.768 05:57:55 -- common/autotest_common.sh@1260 -- # local workload=trim 00:05:48.768 05:57:55 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:05:48.768 05:57:55 -- common/autotest_common.sh@1262 -- # local env_context= 00:05:48.768 05:57:55 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:05:48.768 05:57:55 -- common/autotest_common.sh@1265 -- # '[' -e /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:48.768 05:57:55 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:05:48.768 05:57:55 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:05:48.768 05:57:55 -- common/autotest_common.sh@1278 -- # touch /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:48.768 05:57:55 -- common/autotest_common.sh@1280 -- # cat 00:05:48.768 05:57:55 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:05:48.768 05:57:55 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:05:48.768 05:57:55 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:05:48.768 05:57:55 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:48.769 05:57:55 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "b52c0e99-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b52c0e99-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "3ab8d31d-e7de-d658-909a-fd2faf59b9aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3ab8d31d-e7de-d658-909a-fd2faf59b9aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "058c93da-8f46-c350-845a-e63d699eb66e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "058c93da-8f46-c350-845a-e63d699eb66e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "c39fdef8-60e1-775b-9aad-bd41a4062596"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c39fdef8-60e1-775b-9aad-bd41a4062596",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "b8ee273f-93bb-ac5f-8249-c263b3318f8d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b8ee273f-93bb-ac5f-8249-c263b3318f8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "863081fd-5228-c55d-a8f4-2cbe64bf5ad6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "863081fd-5228-c55d-a8f4-2cbe64bf5ad6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "4f658852-7401-f75f-aa05-6f749766442c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4f658852-7401-f75f-aa05-6f749766442c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "870171cc-4c2a-4754-b1c5-ae85b08fe8e4"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "870171cc-4c2a-4754-b1c5-ae85b08fe8e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "12ceba97-c8a9-0b57-bfe5-c9fbc3b1311d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "12ceba97-c8a9-0b57-bfe5-c9fbc3b1311d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0efd95e8-0ff0-8c57-8314-c060ffc12669"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0efd95e8-0ff0-8c57-8314-c060ffc12669",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "6df0fa5a-2390-a758-83a1-ba72b3ead5a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6df0fa5a-2390-a758-83a1-ba72b3ead5a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "2858958f-7e28-f85f-889b-10eb8f5bb8d0"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2858958f-7e28-f85f-889b-10eb8f5bb8d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "b5398414-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b5398414-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b5398414-10ed-11ef-ba60-3508ead7bdda",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "b530effe-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "b5322877-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "b53ab45d-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b53ab45d-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b53ab45d-10ed-11ef-ba60-3508ead7bdda",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b53360c3-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "b534993d-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b53bec79-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b53bec79-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b53bec79-10ed-11ef-ba60-3508ead7bdda",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "b535d1c5-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b5370a37-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b54514ea-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b54514ea-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:05:48.769 05:57:55 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:05:48.769 Malloc1p0 00:05:48.769 Malloc1p1 00:05:48.769 Malloc2p0 00:05:48.769 Malloc2p1 00:05:48.769 Malloc2p2 00:05:48.769 Malloc2p3 00:05:48.769 Malloc2p4 00:05:48.769 Malloc2p5 00:05:48.769 Malloc2p6 00:05:48.769 Malloc2p7 00:05:48.769 TestPT 00:05:48.769 raid0 00:05:48.769 concat0 ]] 00:05:48.769 05:57:55 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "b52c0e99-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b52c0e99-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "3ab8d31d-e7de-d658-909a-fd2faf59b9aa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3ab8d31d-e7de-d658-909a-fd2faf59b9aa",' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "058c93da-8f46-c350-845a-e63d699eb66e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "058c93da-8f46-c350-845a-e63d699eb66e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "c39fdef8-60e1-775b-9aad-bd41a4062596"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c39fdef8-60e1-775b-9aad-bd41a4062596",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "b8ee273f-93bb-ac5f-8249-c263b3318f8d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b8ee273f-93bb-ac5f-8249-c263b3318f8d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "863081fd-5228-c55d-a8f4-2cbe64bf5ad6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "863081fd-5228-c55d-a8f4-2cbe64bf5ad6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' 
}' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "4f658852-7401-f75f-aa05-6f749766442c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4f658852-7401-f75f-aa05-6f749766442c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "870171cc-4c2a-4754-b1c5-ae85b08fe8e4"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "870171cc-4c2a-4754-b1c5-ae85b08fe8e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "12ceba97-c8a9-0b57-bfe5-c9fbc3b1311d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "12ceba97-c8a9-0b57-bfe5-c9fbc3b1311d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0efd95e8-0ff0-8c57-8314-c060ffc12669"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0efd95e8-0ff0-8c57-8314-c060ffc12669",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "6df0fa5a-2390-a758-83a1-ba72b3ead5a3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6df0fa5a-2390-a758-83a1-ba72b3ead5a3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' 
"write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "2858958f-7e28-f85f-889b-10eb8f5bb8d0"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2858958f-7e28-f85f-889b-10eb8f5bb8d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "b5398414-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b5398414-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b5398414-10ed-11ef-ba60-3508ead7bdda",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "b530effe-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "b5322877-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "b53ab45d-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b53ab45d-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": 
"b53ab45d-10ed-11ef-ba60-3508ead7bdda",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b53360c3-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "b534993d-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b53bec79-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b53bec79-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b53bec79-10ed-11ef-ba60-3508ead7bdda",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "b535d1c5-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b5370a37-10ed-11ef-ba60-3508ead7bdda",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b54514ea-10ed-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b54514ea-10ed-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' 
"${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:05:48.770 05:57:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:48.770 05:57:55 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:05:48.770 05:57:55 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:05:48.770 05:57:55 -- bdev/blockdev.sh@365 -- 
00:05:48.770 05:57:55 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output
00:05:48.770 05:57:55 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']'
00:05:48.770 05:57:55 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:48.770 05:57:55 -- common/autotest_common.sh@10 -- # set +x
00:05:48.770 ************************************
00:05:48.770 START TEST bdev_fio_trim
00:05:48.770 ************************************
00:05:48.770 05:57:55 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output
00:05:48.771 05:57:55 -- common/autotest_common.sh@1335 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output
00:05:48.771 05:57:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio
00:05:48.771 05:57:55 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:05:48.771 05:57:55 -- common/autotest_common.sh@1318 -- # local sanitizers
00:05:48.771 05:57:55 -- common/autotest_common.sh@1319 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:48.771 05:57:55 -- common/autotest_common.sh@1320 -- # shift
00:05:48.771 05:57:55 -- common/autotest_common.sh@1322 -- # local asan_lib=
00:05:48.771 05:57:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:05:48.771 05:57:55 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:48.771 05:57:55 -- common/autotest_common.sh@1324 -- # grep libasan
00:05:48.771 05:57:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:05:48.771 05:57:56 -- common/autotest_common.sh@1324 -- # asan_lib=
00:05:48.771 05:57:56 -- common/autotest_common.sh@1325 -- # [[ -n '' ]]
00:05:48.771 05:57:56 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:05:48.771 05:57:56 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:48.771 05:57:56 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan
00:05:48.771 05:57:56 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:05:48.771 05:57:56 -- common/autotest_common.sh@1324 -- # asan_lib=
00:05:48.771 05:57:56 -- common/autotest_common.sh@1325 -- # [[ -n '' ]]
00:05:48.771 05:57:56 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:05:48.771 05:57:56 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/usr/home/vagrant/spdk_repo/spdk/../output
00:05:48.771 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:48.771 fio-3.35
00:05:48.771 Starting 14 threads
00:05:48.771 EAL: TSC is not safe to use in SMP mode
00:05:48.771 EAL: TSC is not invariant
00:06:00.993
00:06:00.993 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=102682: Mon May 13 05:58:08 2024
00:06:00.993 write: IOPS=2986k, BW=11.4GiB/s (12.2GB/s)(114GiB/10002msec); 0 zone resets
00:06:00.993 slat (nsec): min=200, max=874028k, avg=1115.62, stdev=251823.22
00:06:00.993 clat (nsec): min=1111, max=1726.9M, avg=12701.46, stdev=1327442.63
00:06:00.993 lat (nsec): min=1589, max=1726.9M, avg=13817.08, stdev=1351116.58
00:06:00.993 clat percentiles (usec):
00:06:00.993 | 50.000th=[ 6], 99.000th=[ 12], 99.900th=[ 947], 99.990th=[ 963],
00:06:00.993 | 99.999th=[94897]
00:06:00.993 bw ( MiB/s): min= 3904, max=18693, per=100.00%, avg=11968.21, stdev=372.43, samples=255
00:06:00.993 iops : min=999626, max=4785537, avg=3063859.06, stdev=95341.10, samples=255
00:06:00.993 trim: IOPS=2986k, BW=11.4GiB/s (12.2GB/s)(114GiB/10002msec); 0 zone resets
00:06:00.993 slat (nsec): min=418, max=221966k, avg=1315.19, stdev=189297.74
00:06:00.993 clat (nsec): min=304, max=1900.5M, avg=9067.57, stdev=789968.04
00:06:00.993 lat (nsec): min=1404, max=1900.5M, avg=10382.77, stdev=812335.29
00:06:00.993 clat percentiles (usec):
00:06:00.993 | 50.000th=[ 6], 99.000th=[ 13], 99.900th=[ 21], 99.990th=[ 33],
00:06:00.993 | 99.999th=[94897]
00:06:00.993 bw ( MiB/s): min= 3904, max=18693, per=100.00%, avg=11968.22, stdev=372.43, samples=255
00:06:00.993 iops : min=999642, max=4785539, avg=3063860.96, stdev=95341.11, samples=255
00:06:00.993 lat (nsec) : 500=0.02%, 750=0.01%, 1000=0.03%
00:06:00.993 lat (usec) : 2=4.64%, 4=27.44%, 10=62.98%, 20=4.63%, 50=0.09%
00:06:00.993 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.18%
00:06:00.994 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
00:06:00.994 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:06:00.994 lat (msec) : 2000=0.01%
00:06:00.994 cpu : usr=63.71%, sys=4.70%, ctx=1301521, majf=0, minf=0
00:06:00.994 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:06:00.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:06:00.994 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:06:00.994 issued rwts: total=0,29867792,29867798,0 short=0,0,0,0 dropped=0,0,0,0
00:06:00.994 latency : target=0, window=0, percentile=100.00%, depth=8
00:06:00.994
00:06:00.994 Run status group 0 (all jobs):
00:06:00.994 WRITE: bw=11.4GiB/s (12.2GB/s), 11.4GiB/s-11.4GiB/s (12.2GB/s-12.2GB/s), io=114GiB (122GB), run=10002-10002msec
00:06:00.994 TRIM: bw=11.4GiB/s (12.2GB/s), 11.4GiB/s-11.4GiB/s (12.2GB/s-12.2GB/s), io=114GiB (122GB), run=10002-10002msec
00:06:00.994
00:06:00.994 real 0m12.858s
00:06:00.994 user 1m35.376s
00:06:00.994 sys 0m9.657s
00:06:00.994 05:58:08 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:00.994 05:58:08 -- common/autotest_common.sh@10 -- # set +x
00:06:00.994 ************************************
00:06:00.994 END TEST bdev_fio_trim
00:06:00.994 ************************************
00:06:00.994 05:58:08 -- bdev/blockdev.sh@366 -- # rm -f
00:06:00.994 05:58:08 -- bdev/blockdev.sh@367 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio /usr/home/vagrant/spdk_repo/spdk
00:06:00.994 05:58:08 -- bdev/blockdev.sh@368 -- # popd
00:06:00.994 05:58:08 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT
00:06:00.994
00:06:00.994 real 0m25.263s
00:06:00.994 user 3m9.530s
00:06:00.994 sys 0m18.402s
00:06:00.994 05:58:08 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:00.994 05:58:08 -- common/autotest_common.sh@10 -- # set +x
00:06:00.994 ************************************
00:06:00.994 END TEST bdev_fio
00:06:00.994 ************************************
00:06:00.994 05:58:08 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:06:00.994 05:58:08 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:00.994 05:58:08 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:06:00.994 05:58:08 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:00.994 05:58:08 -- common/autotest_common.sh@10 -- # set +x
00:06:00.994 ************************************
00:06:00.994 START TEST bdev_verify
00:06:00.994 ************************************
00:06:00.994 05:58:08 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:00.994 [2024-05-13 05:58:08.975339] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
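For reference, the bdevperf invocation starting here decomposes as follows (a hedged annotation of the traced command, not additional log output; the path is shortened, and the -C reading is inferred from the duplicated per-core rows in the results below):

    # same flags as the traced command, spelled out (bash array so the comments stay attached)
    args=(
      --json bdev.json   # the bdev definitions dumped earlier in this log
      -q 128             # up to 128 outstanding I/Os per job
      -o 4096            # 4 KiB per I/O
      -w verify          # write a pattern, read it back, compare
      -t 5               # run each job for ~5 seconds
      -C                 # every core targets every bdev (hence one 0x1 and one 0x2 row per bdev)
      -m 0x3             # core mask: reactors on cores 0 and 1
    )
    ./build/examples/bdevperf "${args[@]}"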
00:06:00.994 [2024-05-13 05:58:08.975738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:06:01.565 EAL: TSC is not safe to use in SMP mode
00:06:01.565 EAL: TSC is not invariant
00:06:01.565 [2024-05-13 05:58:09.699166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:01.565 [2024-05-13 05:58:09.801245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.565 [2024-05-13 05:58:09.801234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:01.565 [2024-05-13 05:58:09.860278] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:06:01.565 [2024-05-13 05:58:09.860316] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:06:01.825 [2024-05-13 05:58:09.868263] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:06:01.825 [2024-05-13 05:58:09.868284] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:06:01.825 [2024-05-13 05:58:09.876277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:06:01.825 [2024-05-13 05:58:09.876295] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:06:01.825 [2024-05-13 05:58:09.876301] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:06:01.825 [2024-05-13 05:58:09.924277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:06:01.825 [2024-05-13 05:58:09.924334] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:01.825 [2024-05-13 05:58:09.924345] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cb80800
00:06:01.825 [2024-05-13 05:58:09.924351] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:01.825 [2024-05-13 05:58:09.924662] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:01.825 [2024-05-13 05:58:09.924682] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:06:01.825 Running I/O for 5 seconds...
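The vbdev_passthru notices above trace the claim handshake for TestPT: the module matches its configured base bdev Malloc3, opens and claims it, then registers the passthru bdev. In this run that comes from bdev.json; constructed at runtime it would look roughly like the following (assuming SPDK's standard passthru RPC names; hypothetical, not part of this run):

    # runtime equivalent of the JSON-configured TestPT device (assumed RPC names)
    scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT   # claim Malloc3, expose it as TestPT
    scripts/rpc.py bdev_passthru_delete TestPT                 # matching teardown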
00:06:07.132
00:06:07.132 Latency(us)
00:06:07.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:07.132 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.132 Verification LBA range: start 0x0 length 0x1000
00:06:07.132 Malloc0 : 5.02 11052.56 43.17 0.00 0.00 11567.29 175.83 21249.36
00:06:07.132 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.132 Verification LBA range: start 0x1000 length 0x1000
00:06:07.132 Malloc0 : 5.03 27.24 0.11 0.00 0.00 4688797.86 746.16 5030385.48
00:06:07.132 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.132 Verification LBA range: start 0x0 length 0x800
00:06:07.132 Malloc1p0 : 5.02 12456.83 48.66 0.00 0.00 10266.41 419.49 13252.29
00:06:07.132 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.132 Verification LBA range: start 0x800 length 0x800
00:06:07.132 Malloc1p0 : 5.01 16425.25 64.16 0.00 0.00 7784.86 351.66 10167.70
00:06:07.132 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.132 Verification LBA range: start 0x0 length 0x800
00:06:07.132 Malloc1p1 : 5.02 12456.44 48.66 0.00 0.00 10264.88 380.22 13080.92
00:06:07.132 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.132 Verification LBA range: start 0x800 length 0x800
00:06:07.132 Malloc1p1 : 5.01 16424.91 64.16 0.00 0.00 7783.88 317.74 9939.22
00:06:07.132 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.132 Verification LBA range: start 0x0 length 0x200
00:06:07.132 Malloc2p0 : 5.02 12456.16 48.66 0.00 0.00 10263.63 380.22 12909.56
00:06:07.132 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.132 Verification LBA range: start 0x200 length 0x200
00:06:07.132 Malloc2p0 : 5.01 16424.65 64.16 0.00 0.00 7782.95 330.24 9767.85
00:06:07.132 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.132 Verification LBA range: start 0x0 length 0x200
00:06:07.132 Malloc2p1 : 5.02 12455.87 48.66 0.00 0.00 10262.26 373.08 12566.82
00:06:07.132 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.132 Verification LBA range: start 0x200 length 0x200
00:06:07.132 Malloc2p1 : 5.01 16424.37 64.16 0.00 0.00 7782.08 315.96 9539.36
00:06:07.132 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.132 Verification LBA range: start 0x0 length 0x200
00:06:07.132 Malloc2p2 : 5.02 12455.55 48.65 0.00 0.00 10260.57 394.50 12281.21
00:06:07.132 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.132 Verification LBA range: start 0x200 length 0x200
00:06:07.132 Malloc2p2 : 5.01 16436.77 64.21 0.00 0.00 7777.82 312.39 9310.87
00:06:07.132 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x0 length 0x200
00:06:07.133 Malloc2p3 : 5.02 12455.22 48.65 0.00 0.00 10259.16 378.43 12109.85
00:06:07.133 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x200 length 0x200
00:06:07.133 Malloc2p3 : 5.01 16436.49 64.21 0.00 0.00 7776.98 315.96 9082.39
00:06:07.133 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x0 length 0x200
00:06:07.133 Malloc2p4 : 5.02 12454.88 48.65 0.00 0.00 10257.74 396.28 11824.24
00:06:07.133 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x200 length 0x200
00:06:07.133 Malloc2p4 : 5.01 16436.19 64.20 0.00 0.00 7776.02 323.10 9139.51
00:06:07.133 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x0 length 0x200
00:06:07.133 Malloc2p5 : 5.02 12454.42 48.65 0.00 0.00 10256.67 383.79 11481.51
00:06:07.133 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x200 length 0x200
00:06:07.133 Malloc2p5 : 5.01 16435.93 64.20 0.00 0.00 7775.10 326.67 9139.51
00:06:07.133 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x0 length 0x200
00:06:07.133 Malloc2p6 : 5.02 12454.11 48.65 0.00 0.00 10255.33 376.65 11367.26
00:06:07.133 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x200 length 0x200
00:06:07.133 Malloc2p6 : 5.01 16435.65 64.20 0.00 0.00 7774.26 328.45 9082.39
00:06:07.133 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x0 length 0x200
00:06:07.133 Malloc2p7 : 5.02 12453.75 48.65 0.00 0.00 10253.51 396.28 11024.53
00:06:07.133 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x200 length 0x200
00:06:07.133 Malloc2p7 : 5.01 16435.40 64.20 0.00 0.00 7773.35 326.67 9025.26
00:06:07.133 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x0 length 0x1000
00:06:07.133 TestPT : 5.02 12447.95 48.62 0.00 0.00 10254.05 369.51 10796.04
00:06:07.133 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x1000 length 0x1000
00:06:07.133 TestPT : 5.03 726.66 2.84 0.00 0.00 175773.74 410.56 603207.56
00:06:07.133 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x0 length 0x2000
00:06:07.133 raid0 : 5.02 12453.17 48.65 0.00 0.00 10251.04 433.77 10853.17
00:06:07.133 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x2000 length 0x2000
00:06:07.133 raid0 : 5.01 16435.14 64.20 0.00 0.00 7771.47 321.31 9025.26
00:06:07.133 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x0 length 0x2000
00:06:07.133 concat0 : 5.02 12467.60 48.70 0.00 0.00 10239.75 410.56 11081.65
00:06:07.133 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x2000 length 0x2000
00:06:07.133 concat0 : 5.01 16434.89 64.20 0.00 0.00 7770.74 326.67 9539.36
00:06:07.133 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x0 length 0x1000
00:06:07.133 raid1 : 5.02 12467.27 48.70 0.00 0.00 10238.18 401.64 11081.65
00:06:07.133 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x1000 length 0x1000
00:06:07.133 raid1 : 5.01 16434.56 64.20 0.00 0.00 7769.79 385.57 9653.61
00:06:07.133 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x0 length 0x4e2
AIO0 : 5.15 690.28 2.70 0.00 0.00 182953.73 4027.10 387515.16
00:06:07.133 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:07.133 Verification LBA range: start 0x4e2 length 0x4e2
00:06:07.133 AIO0 : 5.15 701.31 2.74 0.00 0.00 180068.58 6369.09 310743.29
00:06:07.133 ===================================================================================================================
00:06:07.133 Total : 401207.44 1567.22 0.00 0.00 10201.65 175.83 5030385.48
00:06:07.392
00:06:07.392 real 0m6.564s
00:06:07.392 user 0m11.053s
00:06:07.392 sys 0m0.864s
00:06:07.392 05:58:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:07.392 ************************************
00:06:07.392 END TEST bdev_verify
00:06:07.392 ************************************
00:06:07.392 05:58:15 -- common/autotest_common.sh@10 -- # set +x
00:06:07.392 05:58:15 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:07.392 05:58:15 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:06:07.392 05:58:15 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:07.392 05:58:15 -- common/autotest_common.sh@10 -- # set +x
00:06:07.392 ************************************
00:06:07.392 START TEST bdev_verify_big_io
00:06:07.392 ************************************
00:06:07.392 05:58:15 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:07.392 [2024-05-13 05:58:15.577694] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
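The queue-depth warnings that follow are simple arithmetic: with -o 65536, a verify job evidently cannot keep more I/Os in flight than the bdev has distinct 64 KiB regions, and each bdev is driven by two jobs here (core masks 0x1 and 0x2). A back-of-envelope check in bash against the num_blocks/block_size values from the JSON dump earlier (not log output):

    # Malloc2p*: 8192 blocks * 512 B = 4 MiB backing each split => 64 regions over 2 jobs
    echo $(( 8192 * 512 / 65536 / 2 ))    # = 32, the cap reported for Malloc2p0..Malloc2p7
    # AIO0: 5000 blocks * 2048 B ~= 9.8 MiB => 156 regions over 2 jobs
    echo $(( 5000 * 2048 / 65536 / 2 ))   # = 78, the cap reported for AIO0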
00:06:07.392 [2024-05-13 05:58:15.577985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ]
00:06:07.961 EAL: TSC is not safe to use in SMP mode
00:06:07.961 EAL: TSC is not invariant
00:06:07.961 [2024-05-13 05:58:16.012409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:07.961 [2024-05-13 05:58:16.104997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.961 [2024-05-13 05:58:16.104997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:07.961 [2024-05-13 05:58:16.159895] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:06:07.961 [2024-05-13 05:58:16.159928] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:06:07.961 [2024-05-13 05:58:16.167889] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:06:07.961 [2024-05-13 05:58:16.167910] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:06:07.961 [2024-05-13 05:58:16.175901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:06:07.961 [2024-05-13 05:58:16.175922] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:06:07.961 [2024-05-13 05:58:16.175928] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:06:07.961 [2024-05-13 05:58:16.223904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:06:07.961 [2024-05-13 05:58:16.223939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:07.961 [2024-05-13 05:58:16.223950] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82cbda800
00:06:07.961 [2024-05-13 05:58:16.223956] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:07.961 [2024-05-13 05:58:16.224229] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:07.961 [2024-05-13 05:58:16.224249] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:06:08.222 [2024-05-13 05:58:16.324667] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.324802] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.324863] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.324931] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.324999] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.325060] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.325147] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.325262] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.325349] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.325467] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.325592] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.325701] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.325792] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.325888] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.325991] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.326100] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:06:08.222 [2024-05-13 05:58:16.327194] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:06:08.222 [2024-05-13 05:58:16.327336] bdevperf.c:1810:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:06:08.222 Running I/O for 5 seconds...
00:06:13.501
00:06:13.501 Latency(us)
00:06:13.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:13.501 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x100
00:06:13.501 Malloc0 : 5.05 4829.68 301.86 0.00 0.00 26390.43 1827.90 61234.71
00:06:13.501 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x100 length 0x100
00:06:13.501 Malloc0 : 5.06 5331.93 333.25 0.00 0.00 23933.78 1820.76 83626.50
00:06:13.501 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x80
00:06:13.501 Malloc1p0 : 5.06 3172.04 198.25 0.00 0.00 40123.33 2941.78 88653.23
00:06:13.501 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x80 length 0x80
00:06:13.501 Malloc1p0 : 5.06 2668.69 166.79 0.00 0.00 47773.00 2798.97 78142.80
00:06:13.501 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x80
00:06:13.501 Malloc1p1 : 5.07 1226.18 76.64 0.00 0.00 103820.75 2570.49 185532.02
00:06:13.501 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x80 length 0x80
00:06:13.501 Malloc1p1 : 5.07 1359.78 84.99 0.00 0.00 93702.80 2556.21 168166.96
00:06:13.501 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x20
00:06:13.501 Malloc2p0 : 5.06 814.66 50.92 0.00 0.00 39067.26 767.58 53466.12
00:06:13.501 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x20 length 0x20
00:06:13.501 Malloc2p0 : 5.06 897.62 56.10 0.00 0.00 35460.56 767.58 47068.47
00:06:13.501 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x20
00:06:13.501 Malloc2p1 : 5.06 814.63 50.91 0.00 0.00 39057.79 771.15 52780.66
00:06:13.501 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x20 length 0x20
00:06:13.501 Malloc2p1 : 5.06 897.58 56.10 0.00 0.00 35450.80 785.43 47296.96
00:06:13.501 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x20
00:06:13.501 Malloc2p2 : 5.06 814.59 50.91 0.00 0.00 39045.57 778.29 52552.17
00:06:13.501 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x20 length 0x20
00:06:13.501 Malloc2p2 : 5.06 897.55 56.10 0.00 0.00 35440.25 781.86 47525.44
00:06:13.501 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x20
00:06:13.501 Malloc2p3 : 5.06 814.55 50.91 0.00 0.00 39035.35 767.58 52780.66
00:06:13.501 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x20 length 0x20
00:06:13.501 Malloc2p3 : 5.06 897.51 56.09 0.00 0.00 35430.82 767.58 47753.93
00:06:13.501 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x20
00:06:13.501 Malloc2p4 : 5.06 814.51 50.91 0.00 0.00 39025.16 778.29 53237.64
00:06:13.501 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x20 length 0x20
00:06:13.501 Malloc2p4 : 5.06 897.47 56.09 0.00 0.00 35418.65 781.86 47982.42
00:06:13.501 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x20
00:06:13.501 Malloc2p5 : 5.06 814.47 50.90 0.00 0.00 39012.87 767.58 53466.12
00:06:13.501 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x20 length 0x20
00:06:13.501 Malloc2p5 : 5.06 897.43 56.09 0.00 0.00 35409.24 764.01 48210.91
00:06:13.501 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x20
00:06:13.501 Malloc2p6 : 5.06 814.43 50.90 0.00 0.00 39004.25 813.99 53694.61
00:06:13.501 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x20 length 0x20
00:06:13.501 Malloc2p6 : 5.06 897.39 56.09 0.00 0.00 35401.20 792.57 48439.40
00:06:13.501 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x20
00:06:13.501 Malloc2p7 : 5.06 814.40 50.90 0.00 0.00 38990.61 771.15 54151.59
00:06:13.501 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x20 length 0x20
00:06:13.501 Malloc2p7 : 5.06 897.35 56.08 0.00 0.00 35389.42 771.15 48896.37
00:06:13.501 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x100
00:06:13.501 TestPT : 5.13 1004.40 62.78 0.00 0.00 125632.96 3341.63 237627.22
00:06:13.501 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x100 length 0x100
00:06:13.501 TestPT : 5.25 24.77 1.55 0.00 0.00 5084667.46 2813.25 5205864.04
00:06:13.501 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x200
00:06:13.501 raid0 : 5.07 1232.46 77.03 0.00 0.00 102899.55 2713.29 186445.97
00:06:13.501 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x200 length 0x200
00:06:13.501 raid0 : 5.07 1360.75 85.05 0.00 0.00 93244.08 2756.13 169080.91
00:06:13.501 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:13.501 Verification LBA range: start 0x0 length 0x200
00:06:13.501 concat0 : 5.07 1232.42 77.03 0.00 0.00 102767.22 2727.57 186445.97
00:06:13.502 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:13.502 Verification LBA range: start 0x200 length 0x200
00:06:13.502 concat0 : 5.07 1359.76 84.98 0.00 0.00 93172.58 2770.41
169080.91 00:06:13.502 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:13.502 Verification LBA range: start 0x0 length 0x100 00:06:13.502 raid1 : 5.07 1239.36 77.46 0.00 0.00 102116.40 3513.00 186445.97 00:06:13.502 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:13.502 Verification LBA range: start 0x100 length 0x100 00:06:13.502 raid1 : 5.07 1365.62 85.35 0.00 0.00 92687.12 2998.90 169080.91 00:06:13.502 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:06:13.502 Verification LBA range: start 0x0 length 0x4e 00:06:13.502 AIO0 : 5.07 1211.85 75.74 0.00 0.00 63542.27 4198.46 107846.20 00:06:13.502 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:06:13.502 Verification LBA range: start 0x4e length 0x4e 00:06:13.502 AIO0 : 5.07 1350.59 84.41 0.00 0.00 57038.43 1149.58 98249.72 00:06:13.502 =================================================================================================================== 00:06:13.502 Total : 43666.40 2729.15 0.00 0.00 56028.52 764.01 5205864.04 00:06:13.761 00:06:13.761 real 0m6.233s 00:06:13.761 user 0m11.229s 00:06:13.761 sys 0m0.585s 00:06:13.761 05:58:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.761 05:58:21 -- common/autotest_common.sh@10 -- # set +x 00:06:13.761 ************************************ 00:06:13.761 END TEST bdev_verify_big_io 00:06:13.761 ************************************ 00:06:13.761 05:58:21 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:13.761 05:58:21 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:13.761 05:58:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.761 05:58:21 -- common/autotest_common.sh@10 -- # set +x 00:06:13.761 ************************************ 00:06:13.761 START TEST bdev_write_zeroes 00:06:13.761 ************************************ 00:06:13.761 05:58:21 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:13.761 [2024-05-13 05:58:21.864449] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
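The verify pass that just finished illustrates the clamping rule behind the warning flood above: for a verify workload, bdevperf caps the effective queue depth at the number of requests the target bdev can hold in flight at once, which is why -q 128 is cut to 32 on every Malloc2p* split and to 78 on AIO0. A minimal sketch of re-running that job by hand, assuming the autotest paths and the test/bdev/bdev.json config used throughout this log:

# Sketch only; paths come from this log's FreeBSD autotest layout.
# -q 128 and -o 65536 match the "depth: 128, IO size: 65536" jobs tabulated above.
cd /usr/home/vagrant/spdk_repo/spdk
./build/examples/bdevperf \
    --json test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 ''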
00:06:13.761 [2024-05-13 05:58:21.864730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:14.020 EAL: TSC is not safe to use in SMP mode 00:06:14.020 EAL: TSC is not invariant 00:06:14.020 [2024-05-13 05:58:22.278694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.280 [2024-05-13 05:58:22.350666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.280 [2024-05-13 05:58:22.405693] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:14.280 [2024-05-13 05:58:22.405768] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:14.280 [2024-05-13 05:58:22.413687] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:14.280 [2024-05-13 05:58:22.413714] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:14.280 [2024-05-13 05:58:22.421699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:14.280 [2024-05-13 05:58:22.421720] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:14.280 [2024-05-13 05:58:22.421726] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:14.280 [2024-05-13 05:58:22.469698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:14.280 [2024-05-13 05:58:22.469763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:14.280 [2024-05-13 05:58:22.469774] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b1ed800 00:06:14.280 [2024-05-13 05:58:22.469780] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:14.280 [2024-05-13 05:58:22.470040] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:14.280 [2024-05-13 05:58:22.470081] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:14.540 Running I/O for 1 seconds... 
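The one-second write_zeroes pass now running issues zero-fill commands to every bdev in the config; its results follow below. The bdev_get_bdevs dumps later in this log show "write_zeroes": true under supported_io_types, and a hedged way to list which bdevs advertise that capability is to reuse the suite's own rpc and jq tooling (this assumes a live SPDK target reachable over its RPC socket, which a standalone bdevperf run like this one does not provide):

# Assumption: an SPDK app is up and scripts/rpc.py can reach its RPC socket.
scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.supported_io_types.write_zeroes) | .name'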
00:06:15.512 00:06:15.512 Latency(us) 00:06:15.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:15.512 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 Malloc0 : 1.01 38169.46 149.10 0.00 0.00 3353.09 132.09 5883.56 00:06:15.512 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 Malloc1p0 : 1.01 38156.83 149.05 0.00 0.00 3352.99 164.23 5797.88 00:06:15.512 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 Malloc1p1 : 1.01 38154.02 149.04 0.00 0.00 3352.08 157.98 5826.44 00:06:15.512 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 Malloc2p0 : 1.01 38151.33 149.03 0.00 0.00 3351.46 154.41 5740.75 00:06:15.512 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 Malloc2p1 : 1.01 38148.52 149.02 0.00 0.00 3350.44 160.66 5626.51 00:06:15.512 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 Malloc2p2 : 1.01 38145.91 149.01 0.00 0.00 3349.84 149.05 5569.39 00:06:15.512 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 Malloc2p3 : 1.01 38143.26 149.00 0.00 0.00 3348.97 149.05 5655.07 00:06:15.512 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 Malloc2p4 : 1.01 38136.40 148.97 0.00 0.00 3347.93 150.84 5855.00 00:06:15.512 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 Malloc2p5 : 1.01 38133.37 148.96 0.00 0.00 3347.73 148.16 5797.88 00:06:15.512 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 Malloc2p6 : 1.01 38130.80 148.95 0.00 0.00 3346.65 149.05 5883.56 00:06:15.512 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 Malloc2p7 : 1.01 38128.22 148.94 0.00 0.00 3345.85 148.16 5797.88 00:06:15.512 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 TestPT : 1.01 38125.50 148.93 0.00 0.00 3344.94 149.95 5683.63 00:06:15.512 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 raid0 : 1.01 38121.79 148.91 0.00 0.00 3344.16 196.36 5740.75 00:06:15.512 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 concat0 : 1.01 38118.15 148.90 0.00 0.00 3343.13 192.79 5826.44 00:06:15.512 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 raid1 : 1.01 38113.54 148.88 0.00 0.00 3342.10 364.15 5597.95 00:06:15.512 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:15.512 AIO0 : 1.09 1533.27 5.99 0.00 0.00 79758.66 549.80 218434.25 00:06:15.512 =================================================================================================================== 00:06:15.512 Total : 573610.36 2240.67 0.00 0.00 3568.99 132.09 218434.25 00:06:15.771 00:06:15.771 real 0m2.005s 00:06:15.771 user 0m1.402s 00:06:15.771 sys 0m0.476s 00:06:15.772 05:58:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.772 05:58:23 -- common/autotest_common.sh@10 -- # set +x 00:06:15.772 ************************************ 00:06:15.772 END TEST bdev_write_zeroes 00:06:15.772 ************************************ 00:06:15.772 05:58:23 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:15.772 05:58:23 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:15.772 05:58:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.772 05:58:23 -- common/autotest_common.sh@10 -- # set +x 00:06:15.772 ************************************ 00:06:15.772 START TEST bdev_json_nonenclosed 00:06:15.772 ************************************ 00:06:15.772 05:58:23 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:15.772 [2024-05-13 05:58:23.921211] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:15.772 [2024-05-13 05:58:23.921574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:16.339 EAL: TSC is not safe to use in SMP mode 00:06:16.339 EAL: TSC is not invariant 00:06:16.339 [2024-05-13 05:58:24.343621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.339 [2024-05-13 05:58:24.432532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.339 [2024-05-13 05:58:24.432648] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:16.339 [2024-05-13 05:58:24.432656] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.339 00:06:16.339 real 0m0.605s 00:06:16.339 user 0m0.139s 00:06:16.339 sys 0m0.456s 00:06:16.339 05:58:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.339 05:58:24 -- common/autotest_common.sh@10 -- # set +x 00:06:16.339 ************************************ 00:06:16.339 END TEST bdev_json_nonenclosed 00:06:16.339 ************************************ 00:06:16.339 05:58:24 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:16.340 05:58:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:16.340 05:58:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.340 05:58:24 -- common/autotest_common.sh@10 -- # set +x 00:06:16.340 ************************************ 00:06:16.340 START TEST bdev_json_nonarray 00:06:16.340 ************************************ 00:06:16.340 05:58:24 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:16.340 [2024-05-13 05:58:24.581332] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
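The two json_config checks here are deliberate negative tests. bdev_json_nonenclosed, which just passed, hands bdevperf a configuration that is a bare key/value pair instead of an object and gets rejected at json_config.c:595 ("not enclosed in {}"); bdev_json_nonarray, starting above, supplies a "subsystems" key that is not an array and is rejected at json_config.c:601. Both runs are expected to end with spdk_app_stop'd on non-zero. A sketch of equivalent malformed inputs; these file bodies are illustrative stand-ins, not the repo's nonenclosed.json and nonarray.json:

# A key/value pair with no enclosing object: "not enclosed in {}".
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
# Enclosed, but "subsystems" is an object where an array is required.
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": {} }
EOF
# Either file should make bdevperf fail during init and exit non-zero:
./build/examples/bdevperf --json /tmp/nonenclosed.json \
    -q 128 -o 4096 -w write_zeroes -t 1 '' || echo 'rejected as expected'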
00:06:16.340 [2024-05-13 05:58:24.581690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:17.277 EAL: TSC is not safe to use in SMP mode 00:06:17.277 EAL: TSC is not invariant 00:06:17.277 [2024-05-13 05:58:25.307270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.277 [2024-05-13 05:58:25.395888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.277 [2024-05-13 05:58:25.396018] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:06:17.277 [2024-05-13 05:58:25.396027] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.277 00:06:17.277 real 0m0.919s 00:06:17.277 user 0m0.153s 00:06:17.277 sys 0m0.764s 00:06:17.277 05:58:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.277 05:58:25 -- common/autotest_common.sh@10 -- # set +x 00:06:17.277 ************************************ 00:06:17.277 END TEST bdev_json_nonarray 00:06:17.277 ************************************ 00:06:17.277 05:58:25 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:06:17.277 05:58:25 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:06:17.277 05:58:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:17.277 05:58:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.277 05:58:25 -- common/autotest_common.sh@10 -- # set +x 00:06:17.277 ************************************ 00:06:17.277 START TEST bdev_qos 00:06:17.277 ************************************ 00:06:17.277 05:58:25 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:06:17.277 05:58:25 -- bdev/blockdev.sh@444 -- # QOS_PID=47289 00:06:17.277 05:58:25 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 47289' 00:06:17.277 Process qos testing pid: 47289 00:06:17.277 05:58:25 -- bdev/blockdev.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:06:17.277 05:58:25 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:06:17.277 05:58:25 -- bdev/blockdev.sh@447 -- # waitforlisten 47289 00:06:17.277 05:58:25 -- common/autotest_common.sh@819 -- # '[' -z 47289 ']' 00:06:17.277 05:58:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.277 05:58:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.277 05:58:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.277 05:58:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.277 05:58:25 -- common/autotest_common.sh@10 -- # set +x 00:06:17.277 [2024-05-13 05:58:25.558277] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
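The bdev_qos suite starting here follows a measure-then-throttle pattern: create a RAM-backed Malloc_0 and a Null_1, free-run a randread workload for 60 seconds, derive each limit from the observed rate, and then assert the throttled rate lands within the +/-10% window computed below (182700..223300 around 203000). The figures are consistent with the IOPS cap being a quarter of the measured value rounded down to a whole thousand (813764 measured, 203000 applied); treat that rounding as inferred from the numbers rather than quoted from blockdev.sh:

# Hedged reconstruction of the limit derivation visible below.
io_result=813764                                 # unthrottled IOPS from iostat.py
iops_limit=$(((io_result / 4) / 1000 * 1000))    # -> 203000
scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec "$iops_limit" Malloc_0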
00:06:17.277 [2024-05-13 05:58:25.558533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:18.214 EAL: TSC is not safe to use in SMP mode 00:06:18.214 EAL: TSC is not invariant 00:06:18.214 [2024-05-13 05:58:26.286361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.214 [2024-05-13 05:58:26.375526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.214 05:58:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.214 05:58:26 -- common/autotest_common.sh@852 -- # return 0 00:06:18.214 05:58:26 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:06:18.214 05:58:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:18.214 05:58:26 -- common/autotest_common.sh@10 -- # set +x 00:06:18.214 Malloc_0 00:06:18.214 05:58:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:18.214 05:58:26 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:06:18.214 05:58:26 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:06:18.214 05:58:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:18.214 05:58:26 -- common/autotest_common.sh@889 -- # local i 00:06:18.214 05:58:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:18.214 05:58:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:18.214 05:58:26 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:06:18.214 05:58:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:18.214 05:58:26 -- common/autotest_common.sh@10 -- # set +x 00:06:18.214 05:58:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:18.214 05:58:26 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:06:18.214 05:58:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:18.214 05:58:26 -- common/autotest_common.sh@10 -- # set +x 00:06:18.214 [ 00:06:18.214 { 00:06:18.214 "name": "Malloc_0", 00:06:18.214 "aliases": [ 00:06:18.214 "d0970dcd-10ed-11ef-ba60-3508ead7bdda" 00:06:18.214 ], 00:06:18.214 "product_name": "Malloc disk", 00:06:18.214 "block_size": 512, 00:06:18.214 "num_blocks": 262144, 00:06:18.214 "uuid": "d0970dcd-10ed-11ef-ba60-3508ead7bdda", 00:06:18.214 "assigned_rate_limits": { 00:06:18.214 "rw_ios_per_sec": 0, 00:06:18.214 "rw_mbytes_per_sec": 0, 00:06:18.214 "r_mbytes_per_sec": 0, 00:06:18.214 "w_mbytes_per_sec": 0 00:06:18.214 }, 00:06:18.214 "claimed": false, 00:06:18.214 "zoned": false, 00:06:18.214 "supported_io_types": { 00:06:18.214 "read": true, 00:06:18.214 "write": true, 00:06:18.214 "unmap": true, 00:06:18.214 "write_zeroes": true, 00:06:18.214 "flush": true, 00:06:18.214 "reset": true, 00:06:18.214 "compare": false, 00:06:18.214 "compare_and_write": false, 00:06:18.214 "abort": true, 00:06:18.214 "nvme_admin": false, 00:06:18.214 "nvme_io": false 00:06:18.214 }, 00:06:18.214 "memory_domains": [ 00:06:18.214 { 00:06:18.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:18.214 "dma_device_type": 2 00:06:18.214 } 00:06:18.214 ], 00:06:18.214 "driver_specific": {} 00:06:18.214 } 00:06:18.214 ] 00:06:18.214 05:58:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:18.214 05:58:26 -- common/autotest_common.sh@895 -- # return 0 00:06:18.214 05:58:26 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:06:18.214 05:58:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:18.214 05:58:26 -- common/autotest_common.sh@10 -- # 
set +x 00:06:18.214 Null_1 00:06:18.214 05:58:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:18.214 05:58:26 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:06:18.214 05:58:26 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:06:18.214 05:58:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:18.214 05:58:26 -- common/autotest_common.sh@889 -- # local i 00:06:18.214 05:58:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:18.214 05:58:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:18.214 05:58:26 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:06:18.214 05:58:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:18.214 05:58:26 -- common/autotest_common.sh@10 -- # set +x 00:06:18.474 05:58:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:18.474 05:58:26 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:06:18.474 05:58:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:18.474 05:58:26 -- common/autotest_common.sh@10 -- # set +x 00:06:18.474 [ 00:06:18.474 { 00:06:18.474 "name": "Null_1", 00:06:18.474 "aliases": [ 00:06:18.474 "d09d27ed-10ed-11ef-ba60-3508ead7bdda" 00:06:18.474 ], 00:06:18.474 "product_name": "Null disk", 00:06:18.474 "block_size": 512, 00:06:18.474 "num_blocks": 262144, 00:06:18.474 "uuid": "d09d27ed-10ed-11ef-ba60-3508ead7bdda", 00:06:18.474 "assigned_rate_limits": { 00:06:18.474 "rw_ios_per_sec": 0, 00:06:18.474 "rw_mbytes_per_sec": 0, 00:06:18.474 "r_mbytes_per_sec": 0, 00:06:18.474 "w_mbytes_per_sec": 0 00:06:18.474 }, 00:06:18.474 "claimed": false, 00:06:18.474 "zoned": false, 00:06:18.474 "supported_io_types": { 00:06:18.474 "read": true, 00:06:18.474 "write": true, 00:06:18.474 "unmap": false, 00:06:18.474 "write_zeroes": true, 00:06:18.474 "flush": false, 00:06:18.474 "reset": true, 00:06:18.474 "compare": false, 00:06:18.474 "compare_and_write": false, 00:06:18.474 "abort": true, 00:06:18.474 "nvme_admin": false, 00:06:18.474 "nvme_io": false 00:06:18.474 }, 00:06:18.474 "driver_specific": {} 00:06:18.474 } 00:06:18.474 ] 00:06:18.474 05:58:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:18.474 05:58:26 -- common/autotest_common.sh@895 -- # return 0 00:06:18.474 05:58:26 -- bdev/blockdev.sh@455 -- # qos_function_test 00:06:18.474 05:58:26 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:06:18.474 05:58:26 -- bdev/blockdev.sh@454 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:18.474 05:58:26 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:06:18.474 05:58:26 -- bdev/blockdev.sh@410 -- # local io_result=0 00:06:18.474 05:58:26 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:06:18.474 05:58:26 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:06:18.474 05:58:26 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:06:18.474 05:58:26 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:06:18.474 05:58:26 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:06:18.474 05:58:26 -- bdev/blockdev.sh@375 -- # local iostat_result 00:06:18.474 05:58:26 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:06:18.474 05:58:26 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:18.474 05:58:26 -- bdev/blockdev.sh@376 -- # tail -1 00:06:18.474 Running I/O for 60 seconds... 
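Every get_io_result step in this suite shells out to the same sampler: scripts/iostat.py polls once a second for five seconds, grep picks the device's rows, tail -1 keeps the final sample, and awk extracts column 2 for IOPS or column 6 for throughput. A minimal sketch of both probes as composed above and below; the KB-to-MB scaling at the end is inferred from the log's 2079744 -> 203 step, not quoted from the script:

# IOPS probe, as used for Malloc_0 right after this line:
scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1 | awk '{print $2}'
# Throughput probe for Null_1; strip the decimals, then scale to MB/s:
bw_kb=$(scripts/iostat.py -d -i 1 -t 5 | grep Null_1 | tail -1 | awk '{print $6}')
scripts/rpc.py bdev_set_qos_limit --rw_mbytes_per_sec $(( ${bw_kb%.*} / 1024 / 10 )) Null_1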
00:06:23.756 05:58:32 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 813764.88 3255059.52 0.00 0.00 3513344.00 0.00 0.00 ' 00:06:23.756 05:58:32 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:06:23.756 05:58:32 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:06:23.756 05:58:32 -- bdev/blockdev.sh@378 -- # iostat_result=813764.88 00:06:23.756 05:58:32 -- bdev/blockdev.sh@383 -- # echo 813764 00:06:23.757 05:58:32 -- bdev/blockdev.sh@414 -- # io_result=813764 00:06:23.757 05:58:32 -- bdev/blockdev.sh@416 -- # iops_limit=203000 00:06:23.757 05:58:32 -- bdev/blockdev.sh@417 -- # '[' 203000 -gt 1000 ']' 00:06:23.757 05:58:32 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 203000 Malloc_0 00:06:23.757 05:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:23.757 05:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:23.757 05:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:23.757 05:58:32 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 203000 IOPS Malloc_0 00:06:23.757 05:58:32 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:23.757 05:58:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.757 05:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:24.016 ************************************ 00:06:24.016 START TEST bdev_qos_iops 00:06:24.016 ************************************ 00:06:24.016 05:58:32 -- common/autotest_common.sh@1104 -- # run_qos_test 203000 IOPS Malloc_0 00:06:24.016 05:58:32 -- bdev/blockdev.sh@387 -- # local qos_limit=203000 00:06:24.016 05:58:32 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:06:24.016 05:58:32 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:06:24.017 05:58:32 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:06:24.017 05:58:32 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:06:24.017 05:58:32 -- bdev/blockdev.sh@375 -- # local iostat_result 00:06:24.017 05:58:32 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:24.017 05:58:32 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:06:24.017 05:58:32 -- bdev/blockdev.sh@376 -- # tail -1 00:06:29.311 05:58:37 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 202991.41 811965.65 0.00 0.00 869652.00 0.00 0.00 ' 00:06:29.311 05:58:37 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:06:29.311 05:58:37 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:06:29.311 05:58:37 -- bdev/blockdev.sh@378 -- # iostat_result=202991.41 00:06:29.311 05:58:37 -- bdev/blockdev.sh@383 -- # echo 202991 00:06:29.311 05:58:37 -- bdev/blockdev.sh@390 -- # qos_result=202991 00:06:29.312 05:58:37 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:06:29.312 05:58:37 -- bdev/blockdev.sh@394 -- # lower_limit=182700 00:06:29.312 05:58:37 -- bdev/blockdev.sh@395 -- # upper_limit=223300 00:06:29.312 05:58:37 -- bdev/blockdev.sh@398 -- # '[' 202991 -lt 182700 ']' 00:06:29.312 05:58:37 -- bdev/blockdev.sh@398 -- # '[' 202991 -gt 223300 ']' 00:06:29.312 00:06:29.312 real 0m5.496s 00:06:29.312 user 0m0.119s 00:06:29.312 sys 0m0.026s 00:06:29.312 05:58:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.312 05:58:37 -- common/autotest_common.sh@10 -- # set +x 00:06:29.312 ************************************ 00:06:29.312 END TEST bdev_qos_iops 00:06:29.312 ************************************ 00:06:29.312 05:58:37 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:06:29.312 05:58:37 -- bdev/blockdev.sh@373 -- # local 
limit_type=BANDWIDTH 00:06:29.312 05:58:37 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:06:29.312 05:58:37 -- bdev/blockdev.sh@375 -- # local iostat_result 00:06:29.312 05:58:37 -- bdev/blockdev.sh@376 -- # tail -1 00:06:29.312 05:58:37 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:29.312 05:58:37 -- bdev/blockdev.sh@376 -- # grep Null_1 00:06:35.879 05:58:43 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 482882.39 1931529.54 0.00 0.00 2079744.00 0.00 0.00 ' 00:06:35.879 05:58:43 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:06:35.879 05:58:43 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:35.879 05:58:43 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:06:35.879 05:58:43 -- bdev/blockdev.sh@380 -- # iostat_result=2079744.00 00:06:35.879 05:58:43 -- bdev/blockdev.sh@383 -- # echo 2079744 00:06:35.879 05:58:43 -- bdev/blockdev.sh@425 -- # bw_limit=2079744 00:06:35.879 05:58:43 -- bdev/blockdev.sh@426 -- # bw_limit=203 00:06:35.879 05:58:43 -- bdev/blockdev.sh@427 -- # '[' 203 -lt 2 ']' 00:06:35.879 05:58:43 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 203 Null_1 00:06:35.879 05:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:35.879 05:58:43 -- common/autotest_common.sh@10 -- # set +x 00:06:35.879 05:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:35.879 05:58:43 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 203 BANDWIDTH Null_1 00:06:35.879 05:58:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:35.879 05:58:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.879 05:58:43 -- common/autotest_common.sh@10 -- # set +x 00:06:35.879 ************************************ 00:06:35.879 START TEST bdev_qos_bw 00:06:35.879 ************************************ 00:06:35.879 05:58:43 -- common/autotest_common.sh@1104 -- # run_qos_test 203 BANDWIDTH Null_1 00:06:35.879 05:58:43 -- bdev/blockdev.sh@387 -- # local qos_limit=203 00:06:35.880 05:58:43 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:06:35.880 05:58:43 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:06:35.880 05:58:43 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:06:35.880 05:58:43 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:06:35.880 05:58:43 -- bdev/blockdev.sh@375 -- # local iostat_result 00:06:35.880 05:58:43 -- bdev/blockdev.sh@376 -- # grep Null_1 00:06:35.880 05:58:43 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:35.880 05:58:43 -- bdev/blockdev.sh@376 -- # tail -1 00:06:41.150 05:58:48 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 51972.06 207888.26 0.00 0.00 224708.00 0.00 0.00 ' 00:06:41.150 05:58:48 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:06:41.150 05:58:48 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:41.150 05:58:48 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:06:41.150 05:58:48 -- bdev/blockdev.sh@380 -- # iostat_result=224708.00 00:06:41.150 05:58:48 -- bdev/blockdev.sh@383 -- # echo 224708 00:06:41.150 05:58:48 -- bdev/blockdev.sh@390 -- # qos_result=224708 00:06:41.150 05:58:48 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:41.150 05:58:48 -- bdev/blockdev.sh@392 -- # qos_limit=207872 00:06:41.150 05:58:48 -- bdev/blockdev.sh@394 -- # lower_limit=187084 00:06:41.150 05:58:48 -- bdev/blockdev.sh@395 -- # upper_limit=228659 00:06:41.150 05:58:48 -- bdev/blockdev.sh@398 -- # '[' 
224708 -lt 187084 ']' 00:06:41.150 05:58:48 -- bdev/blockdev.sh@398 -- # '[' 224708 -gt 228659 ']' 00:06:41.150 00:06:41.150 real 0m5.525s 00:06:41.150 user 0m0.110s 00:06:41.150 sys 0m0.039s 00:06:41.150 05:58:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.150 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:06:41.150 ************************************ 00:06:41.150 END TEST bdev_qos_bw 00:06:41.150 ************************************ 00:06:41.150 05:58:48 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:06:41.150 05:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.150 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:06:41.150 05:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.150 05:58:48 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:06:41.150 05:58:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:41.150 05:58:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.150 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:06:41.150 ************************************ 00:06:41.150 START TEST bdev_qos_ro_bw 00:06:41.150 ************************************ 00:06:41.150 05:58:48 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:06:41.150 05:58:48 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:06:41.150 05:58:48 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:06:41.150 05:58:48 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:06:41.150 05:58:48 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:06:41.150 05:58:48 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:06:41.150 05:58:48 -- bdev/blockdev.sh@375 -- # local iostat_result 00:06:41.150 05:58:48 -- bdev/blockdev.sh@376 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:41.150 05:58:48 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:06:41.150 05:58:48 -- bdev/blockdev.sh@376 -- # tail -1 00:06:46.453 05:58:54 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 512.06 2048.26 0.00 0.00 2196.00 0.00 0.00 ' 00:06:46.453 05:58:54 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:06:46.453 05:58:54 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:46.453 05:58:54 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:06:46.453 05:58:54 -- bdev/blockdev.sh@380 -- # iostat_result=2196.00 00:06:46.453 05:58:54 -- bdev/blockdev.sh@383 -- # echo 2196 00:06:46.453 05:58:54 -- bdev/blockdev.sh@390 -- # qos_result=2196 00:06:46.453 05:58:54 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:46.453 05:58:54 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:06:46.453 05:58:54 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:06:46.453 05:58:54 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:06:46.453 05:58:54 -- bdev/blockdev.sh@398 -- # '[' 2196 -lt 1843 ']' 00:06:46.453 05:58:54 -- bdev/blockdev.sh@398 -- # '[' 2196 -gt 2252 ']' 00:06:46.453 00:06:46.453 real 0m5.386s 00:06:46.453 user 0m0.121s 00:06:46.453 sys 0m0.025s 00:06:46.453 05:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.453 05:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:46.453 ************************************ 00:06:46.453 END TEST bdev_qos_ro_bw 00:06:46.453 ************************************ 00:06:46.453 05:58:54 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:06:46.453 05:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:06:46.453 05:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:46.453 05:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.453 05:58:54 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:06:46.453 05:58:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.453 05:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:46.453 00:06:46.453 Latency(us) 00:06:46.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:46.453 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:46.453 Malloc_0 : 28.01 277226.04 1082.91 0.00 0.00 914.75 299.89 504500.87 00:06:46.453 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:46.453 Null_1 : 28.04 375288.68 1465.97 0.00 0.00 682.10 48.87 22848.77 00:06:46.453 =================================================================================================================== 00:06:46.453 Total : 652514.72 2548.89 0.00 0.00 780.88 48.87 504500.87 00:06:46.453 0 00:06:46.453 05:58:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.453 05:58:54 -- bdev/blockdev.sh@459 -- # killprocess 47289 00:06:46.453 05:58:54 -- common/autotest_common.sh@926 -- # '[' -z 47289 ']' 00:06:46.453 05:58:54 -- common/autotest_common.sh@930 -- # kill -0 47289 00:06:46.453 05:58:54 -- common/autotest_common.sh@931 -- # uname 00:06:46.453 05:58:54 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:46.453 05:58:54 -- common/autotest_common.sh@934 -- # ps -c -o command 47289 00:06:46.453 05:58:54 -- common/autotest_common.sh@934 -- # tail -1 00:06:46.453 05:58:54 -- common/autotest_common.sh@934 -- # process_name=bdevperf 00:06:46.453 05:58:54 -- common/autotest_common.sh@936 -- # '[' bdevperf = sudo ']' 00:06:46.453 killing process with pid 47289 00:06:46.453 05:58:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47289' 00:06:46.453 05:58:54 -- common/autotest_common.sh@945 -- # kill 47289 00:06:46.453 Received shutdown signal, test time was about 28.059187 seconds 00:06:46.453 00:06:46.453 Latency(us) 00:06:46.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:46.453 =================================================================================================================== 00:06:46.453 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:46.453 05:58:54 -- common/autotest_common.sh@950 -- # wait 47289 00:06:46.713 05:58:54 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:06:46.713 00:06:46.713 real 0m29.287s 00:06:46.713 user 0m29.614s 00:06:46.713 sys 0m1.052s 00:06:46.713 05:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.713 05:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:46.713 ************************************ 00:06:46.713 END TEST bdev_qos 00:06:46.713 ************************************ 00:06:46.713 05:58:54 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:06:46.713 05:58:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:46.713 05:58:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.713 05:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:46.713 ************************************ 00:06:46.713 START TEST bdev_qd_sampling 00:06:46.713 ************************************ 00:06:46.713 05:58:54 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:06:46.713 05:58:54 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:06:46.713 05:58:54 
-- bdev/blockdev.sh@539 -- # QD_PID=47402 00:06:46.713 05:58:54 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 47402' 00:06:46.713 Process bdev QD sampling period testing pid: 47402 00:06:46.713 05:58:54 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:06:46.713 05:58:54 -- bdev/blockdev.sh@538 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:06:46.713 05:58:54 -- bdev/blockdev.sh@542 -- # waitforlisten 47402 00:06:46.713 05:58:54 -- common/autotest_common.sh@819 -- # '[' -z 47402 ']' 00:06:46.713 05:58:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.713 05:58:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:46.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.713 05:58:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.713 05:58:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:46.713 05:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:46.713 [2024-05-13 05:58:54.899641] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:46.713 [2024-05-13 05:58:54.900007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:47.281 EAL: TSC is not safe to use in SMP mode 00:06:47.281 EAL: TSC is not invariant 00:06:47.281 [2024-05-13 05:58:55.327714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.281 [2024-05-13 05:58:55.414649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.281 [2024-05-13 05:58:55.414649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.540 05:58:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:47.540 05:58:55 -- common/autotest_common.sh@852 -- # return 0 00:06:47.540 05:58:55 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:06:47.540 05:58:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.540 05:58:55 -- common/autotest_common.sh@10 -- # set +x 00:06:47.540 Malloc_QD 00:06:47.540 05:58:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.540 05:58:55 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:06:47.540 05:58:55 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:06:47.540 05:58:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:47.540 05:58:55 -- common/autotest_common.sh@889 -- # local i 00:06:47.540 05:58:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:47.540 05:58:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:47.540 05:58:55 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:06:47.541 05:58:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.541 05:58:55 -- common/autotest_common.sh@10 -- # set +x 00:06:47.541 05:58:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.541 05:58:55 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:06:47.541 05:58:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.541 05:58:55 -- common/autotest_common.sh@10 -- # set +x 00:06:47.541 [ 00:06:47.541 { 00:06:47.541 "name": "Malloc_QD", 00:06:47.541 "aliases": [ 00:06:47.541 "e213d1cd-10ed-11ef-ba60-3508ead7bdda" 
00:06:47.541 ], 00:06:47.541 "product_name": "Malloc disk", 00:06:47.541 "block_size": 512, 00:06:47.541 "num_blocks": 262144, 00:06:47.541 "uuid": "e213d1cd-10ed-11ef-ba60-3508ead7bdda", 00:06:47.541 "assigned_rate_limits": { 00:06:47.541 "rw_ios_per_sec": 0, 00:06:47.541 "rw_mbytes_per_sec": 0, 00:06:47.541 "r_mbytes_per_sec": 0, 00:06:47.541 "w_mbytes_per_sec": 0 00:06:47.541 }, 00:06:47.541 "claimed": false, 00:06:47.541 "zoned": false, 00:06:47.541 "supported_io_types": { 00:06:47.541 "read": true, 00:06:47.541 "write": true, 00:06:47.541 "unmap": true, 00:06:47.541 "write_zeroes": true, 00:06:47.541 "flush": true, 00:06:47.541 "reset": true, 00:06:47.541 "compare": false, 00:06:47.541 "compare_and_write": false, 00:06:47.541 "abort": true, 00:06:47.541 "nvme_admin": false, 00:06:47.541 "nvme_io": false 00:06:47.541 }, 00:06:47.541 "memory_domains": [ 00:06:47.541 { 00:06:47.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.541 "dma_device_type": 2 00:06:47.541 } 00:06:47.541 ], 00:06:47.541 "driver_specific": {} 00:06:47.541 } 00:06:47.541 ] 00:06:47.541 05:58:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.541 05:58:55 -- common/autotest_common.sh@895 -- # return 0 00:06:47.800 05:58:55 -- bdev/blockdev.sh@548 -- # sleep 2 00:06:47.800 05:58:55 -- bdev/blockdev.sh@547 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:47.800 Running I/O for 5 seconds... 00:06:49.704 05:58:57 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:06:49.704 05:58:57 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:06:49.704 05:58:57 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:06:49.704 05:58:57 -- bdev/blockdev.sh@519 -- # local iostats 00:06:49.704 05:58:57 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:06:49.704 05:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:49.704 05:58:57 -- common/autotest_common.sh@10 -- # set +x 00:06:49.704 05:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:49.704 05:58:57 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:06:49.704 05:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:49.704 05:58:57 -- common/autotest_common.sh@10 -- # set +x 00:06:49.964 05:58:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:49.964 05:58:58 -- bdev/blockdev.sh@523 -- # iostats='{ 00:06:49.964 "tick_rate": 2294600415, 00:06:49.964 "ticks": 725644709194, 00:06:49.964 "bdevs": [ 00:06:49.965 { 00:06:49.965 "name": "Malloc_QD", 00:06:49.965 "bytes_read": 16213119488, 00:06:49.965 "num_read_ops": 3958275, 00:06:49.965 "bytes_written": 0, 00:06:49.965 "num_write_ops": 0, 00:06:49.965 "bytes_unmapped": 0, 00:06:49.965 "num_unmap_ops": 0, 00:06:49.965 "bytes_copied": 0, 00:06:49.965 "num_copy_ops": 0, 00:06:49.965 "read_latency_ticks": 2428109727928, 00:06:49.965 "max_read_latency_ticks": 873078, 00:06:49.965 "min_read_latency_ticks": 31194, 00:06:49.965 "write_latency_ticks": 0, 00:06:49.965 "max_write_latency_ticks": 0, 00:06:49.965 "min_write_latency_ticks": 0, 00:06:49.965 "unmap_latency_ticks": 0, 00:06:49.965 "max_unmap_latency_ticks": 0, 00:06:49.965 "min_unmap_latency_ticks": 0, 00:06:49.965 "copy_latency_ticks": 0, 00:06:49.965 "max_copy_latency_ticks": 0, 00:06:49.965 "min_copy_latency_ticks": 0, 00:06:49.965 "io_error": {}, 00:06:49.965 "queue_depth_polling_period": 10, 00:06:49.965 "queue_depth": 512, 00:06:49.965 "io_time": 450, 00:06:49.965 "weighted_io_time": 230400 
00:06:49.965 } 00:06:49.965 ] 00:06:49.965 }' 00:06:49.965 05:58:58 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:06:49.965 05:58:58 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:06:49.965 05:58:58 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:06:49.965 05:58:58 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:06:49.965 05:58:58 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:06:49.965 05:58:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:49.965 05:58:58 -- common/autotest_common.sh@10 -- # set +x 00:06:49.965 00:06:49.965 Latency(us) 00:06:49.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.965 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:06:49.965 Malloc_QD : 2.10 954684.34 3729.24 0.00 0.00 267.96 44.85 382.00 00:06:49.965 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:49.965 Malloc_QD : 2.10 958761.35 3745.16 0.00 0.00 266.83 40.61 380.22 00:06:49.965 =================================================================================================================== 00:06:49.965 Total : 1913445.69 7474.40 0.00 0.00 267.40 40.61 382.00 00:06:49.965 0 00:06:49.965 05:58:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:49.965 05:58:58 -- bdev/blockdev.sh@552 -- # killprocess 47402 00:06:49.965 05:58:58 -- common/autotest_common.sh@926 -- # '[' -z 47402 ']' 00:06:49.965 05:58:58 -- common/autotest_common.sh@930 -- # kill -0 47402 00:06:49.965 05:58:58 -- common/autotest_common.sh@931 -- # uname 00:06:49.965 05:58:58 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:49.965 05:58:58 -- common/autotest_common.sh@934 -- # ps -c -o command 47402 00:06:49.965 05:58:58 -- common/autotest_common.sh@934 -- # tail -1 00:06:49.965 05:58:58 -- common/autotest_common.sh@934 -- # process_name=bdevperf 00:06:49.965 05:58:58 -- common/autotest_common.sh@936 -- # '[' bdevperf = sudo ']' 00:06:49.965 killing process with pid 47402 00:06:49.965 05:58:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47402' 00:06:49.965 05:58:58 -- common/autotest_common.sh@945 -- # kill 47402 00:06:49.965 Received shutdown signal, test time was about 2.138003 seconds 00:06:49.965 00:06:49.965 Latency(us) 00:06:49.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.965 =================================================================================================================== 00:06:49.965 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:49.965 05:58:58 -- common/autotest_common.sh@950 -- # wait 47402 00:06:49.965 05:58:58 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:06:49.965 00:06:49.965 real 0m3.308s 00:06:49.965 user 0m6.017s 00:06:49.965 sys 0m0.518s 00:06:49.965 05:58:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.965 05:58:58 -- common/autotest_common.sh@10 -- # set +x 00:06:49.965 ************************************ 00:06:49.965 END TEST bdev_qd_sampling 00:06:49.965 ************************************ 00:06:49.965 05:58:58 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:06:49.965 05:58:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:49.965 05:58:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.965 05:58:58 -- common/autotest_common.sh@10 -- # set +x 00:06:49.965 ************************************ 00:06:49.965 START TEST bdev_error 00:06:49.965 
************************************ 00:06:49.965 05:58:58 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:06:49.965 05:58:58 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:06:49.965 05:58:58 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:06:49.965 05:58:58 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:06:49.965 05:58:58 -- bdev/blockdev.sh@470 -- # ERR_PID=47433 00:06:49.965 05:58:58 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 47433' 00:06:49.965 Process error testing pid: 47433 00:06:49.965 05:58:58 -- bdev/blockdev.sh@472 -- # waitforlisten 47433 00:06:49.965 05:58:58 -- bdev/blockdev.sh@469 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:06:49.965 05:58:58 -- common/autotest_common.sh@819 -- # '[' -z 47433 ']' 00:06:49.965 05:58:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.965 05:58:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:49.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.965 05:58:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.965 05:58:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:49.965 05:58:58 -- common/autotest_common.sh@10 -- # set +x 00:06:49.965 [2024-05-13 05:58:58.259395] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:49.965 [2024-05-13 05:58:58.259661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:50.904 EAL: TSC is not safe to use in SMP mode 00:06:50.904 EAL: TSC is not invariant 00:06:50.904 [2024-05-13 05:58:59.072877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.904 [2024-05-13 05:58:59.150974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.844 05:58:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:51.844 05:58:59 -- common/autotest_common.sh@852 -- # return 0 00:06:51.844 05:58:59 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:06:51.844 05:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.844 05:58:59 -- common/autotest_common.sh@10 -- # set +x 00:06:51.844 Dev_1 00:06:51.844 05:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.844 05:58:59 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:06:51.844 05:58:59 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:06:51.844 05:58:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:51.844 05:58:59 -- common/autotest_common.sh@889 -- # local i 00:06:51.844 05:58:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:51.844 05:58:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:51.844 05:58:59 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:06:51.844 05:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.844 05:58:59 -- common/autotest_common.sh@10 -- # set +x 00:06:51.844 05:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.844 05:58:59 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:06:51.844 05:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.844 05:58:59 -- common/autotest_common.sh@10 -- # set +x 00:06:51.844 [ 00:06:51.844 { 00:06:51.844 "name": "Dev_1", 00:06:51.844 "aliases": [ 
00:06:51.844 "e4830d9b-10ed-11ef-ba60-3508ead7bdda" 00:06:51.844 ], 00:06:51.844 "product_name": "Malloc disk", 00:06:51.844 "block_size": 512, 00:06:51.844 "num_blocks": 262144, 00:06:51.844 "uuid": "e4830d9b-10ed-11ef-ba60-3508ead7bdda", 00:06:51.844 "assigned_rate_limits": { 00:06:51.844 "rw_ios_per_sec": 0, 00:06:51.844 "rw_mbytes_per_sec": 0, 00:06:51.844 "r_mbytes_per_sec": 0, 00:06:51.844 "w_mbytes_per_sec": 0 00:06:51.844 }, 00:06:51.844 "claimed": false, 00:06:51.844 "zoned": false, 00:06:51.844 "supported_io_types": { 00:06:51.844 "read": true, 00:06:51.844 "write": true, 00:06:51.845 "unmap": true, 00:06:51.845 "write_zeroes": true, 00:06:51.845 "flush": true, 00:06:51.845 "reset": true, 00:06:51.845 "compare": false, 00:06:51.845 "compare_and_write": false, 00:06:51.845 "abort": true, 00:06:51.845 "nvme_admin": false, 00:06:51.845 "nvme_io": false 00:06:51.845 }, 00:06:51.845 "memory_domains": [ 00:06:51.845 { 00:06:51.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.845 "dma_device_type": 2 00:06:51.845 } 00:06:51.845 ], 00:06:51.845 "driver_specific": {} 00:06:51.845 } 00:06:51.845 ] 00:06:51.845 05:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.845 05:58:59 -- common/autotest_common.sh@895 -- # return 0 00:06:51.845 05:58:59 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:06:51.845 05:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.845 05:58:59 -- common/autotest_common.sh@10 -- # set +x 00:06:51.845 true 00:06:51.845 05:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.845 05:58:59 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:06:51.845 05:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.845 05:58:59 -- common/autotest_common.sh@10 -- # set +x 00:06:51.845 Dev_2 00:06:51.845 05:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.845 05:58:59 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:06:51.845 05:58:59 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:06:51.845 05:58:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:51.845 05:58:59 -- common/autotest_common.sh@889 -- # local i 00:06:51.845 05:58:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:51.845 05:58:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:51.845 05:58:59 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:06:51.845 05:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.845 05:58:59 -- common/autotest_common.sh@10 -- # set +x 00:06:51.845 05:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.845 05:58:59 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:06:51.845 05:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.845 05:58:59 -- common/autotest_common.sh@10 -- # set +x 00:06:51.845 [ 00:06:51.845 { 00:06:51.845 "name": "Dev_2", 00:06:51.845 "aliases": [ 00:06:51.845 "e48afc7f-10ed-11ef-ba60-3508ead7bdda" 00:06:51.845 ], 00:06:51.845 "product_name": "Malloc disk", 00:06:51.845 "block_size": 512, 00:06:51.845 "num_blocks": 262144, 00:06:51.845 "uuid": "e48afc7f-10ed-11ef-ba60-3508ead7bdda", 00:06:51.845 "assigned_rate_limits": { 00:06:51.845 "rw_ios_per_sec": 0, 00:06:51.845 "rw_mbytes_per_sec": 0, 00:06:51.845 "r_mbytes_per_sec": 0, 00:06:51.845 "w_mbytes_per_sec": 0 00:06:51.845 }, 00:06:51.845 "claimed": false, 00:06:51.845 "zoned": false, 00:06:51.845 "supported_io_types": { 00:06:51.845 "read": true, 
00:06:51.845 "write": true, 00:06:51.845 "unmap": true, 00:06:51.845 "write_zeroes": true, 00:06:51.845 "flush": true, 00:06:51.845 "reset": true, 00:06:51.845 "compare": false, 00:06:51.845 "compare_and_write": false, 00:06:51.845 "abort": true, 00:06:51.845 "nvme_admin": false, 00:06:51.845 "nvme_io": false 00:06:51.845 }, 00:06:51.845 "memory_domains": [ 00:06:51.845 { 00:06:51.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.845 "dma_device_type": 2 00:06:51.845 } 00:06:51.845 ], 00:06:51.845 "driver_specific": {} 00:06:51.845 } 00:06:51.845 ] 00:06:51.845 05:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.845 05:58:59 -- common/autotest_common.sh@895 -- # return 0 00:06:51.845 05:58:59 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:06:51.845 05:58:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.845 05:58:59 -- common/autotest_common.sh@10 -- # set +x 00:06:51.845 05:58:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.845 05:58:59 -- bdev/blockdev.sh@482 -- # sleep 1 00:06:51.845 05:58:59 -- bdev/blockdev.sh@481 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:06:51.845 Running I/O for 5 seconds... 00:06:52.783 Process is existed as continue on error is set. Pid: 47433 00:06:52.784 05:59:01 -- bdev/blockdev.sh@485 -- # kill -0 47433 00:06:52.784 05:59:01 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 47433' 00:06:52.784 05:59:01 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:06:52.784 05:59:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.784 05:59:01 -- common/autotest_common.sh@10 -- # set +x 00:06:52.784 05:59:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.784 05:59:01 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:06:52.784 05:59:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.784 05:59:01 -- common/autotest_common.sh@10 -- # set +x 00:06:52.784 05:59:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:52.784 05:59:01 -- bdev/blockdev.sh@495 -- # sleep 5 00:06:53.043 Timeout while waiting for response: 00:06:53.043 00:06:53.043 00:06:57.254 00:06:57.254 Latency(us) 00:06:57.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.254 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:57.254 EE_Dev_1 : 0.96 424409.94 1657.85 5.23 0.00 37.55 18.19 103.09 00:06:57.254 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:57.254 Dev_2 : 5.00 876685.37 3424.55 0.00 0.00 18.09 4.94 18279.02 00:06:57.254 =================================================================================================================== 00:06:57.254 Total : 1301095.31 5082.40 5.23 0.00 19.74 4.94 18279.02 00:06:57.824 05:59:06 -- bdev/blockdev.sh@497 -- # killprocess 47433 00:06:57.824 05:59:06 -- common/autotest_common.sh@926 -- # '[' -z 47433 ']' 00:06:57.824 05:59:06 -- common/autotest_common.sh@930 -- # kill -0 47433 00:06:57.824 05:59:06 -- common/autotest_common.sh@931 -- # uname 00:06:57.825 05:59:06 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:06:57.825 05:59:06 -- common/autotest_common.sh@934 -- # ps -c -o command 47433 00:06:57.825 05:59:06 -- common/autotest_common.sh@934 -- # tail -1 00:06:57.825 05:59:06 -- common/autotest_common.sh@934 -- # process_name=bdevperf 00:06:57.825 05:59:06 -- common/autotest_common.sh@936 
-- # '[' bdevperf = sudo ']' 00:06:57.825 killing process with pid 47433 00:06:57.825 05:59:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47433' 00:06:57.825 05:59:06 -- common/autotest_common.sh@945 -- # kill 47433 00:06:57.825 Received shutdown signal, test time was about 5.000000 seconds 00:06:57.825 00:06:57.825 Latency(us) 00:06:57.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.825 =================================================================================================================== 00:06:57.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:57.825 05:59:06 -- common/autotest_common.sh@950 -- # wait 47433 00:06:58.085 Process error testing pid: 47447 00:06:58.085 05:59:06 -- bdev/blockdev.sh@501 -- # ERR_PID=47447 00:06:58.085 05:59:06 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 47447' 00:06:58.085 05:59:06 -- bdev/blockdev.sh@500 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:06:58.085 05:59:06 -- bdev/blockdev.sh@503 -- # waitforlisten 47447 00:06:58.085 05:59:06 -- common/autotest_common.sh@819 -- # '[' -z 47447 ']' 00:06:58.085 05:59:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.085 05:59:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:58.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.085 05:59:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.085 05:59:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:58.085 05:59:06 -- common/autotest_common.sh@10 -- # set +x 00:06:58.085 [2024-05-13 05:59:06.257181] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:58.085 [2024-05-13 05:59:06.257544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:06:58.654 EAL: TSC is not safe to use in SMP mode 00:06:58.654 EAL: TSC is not invariant 00:06:58.654 [2024-05-13 05:59:06.689469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.654 [2024-05-13 05:59:06.777143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.914 05:59:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:58.914 05:59:07 -- common/autotest_common.sh@852 -- # return 0 00:06:58.914 05:59:07 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:06:58.914 05:59:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.914 05:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:58.914 Dev_1 00:06:58.914 05:59:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.914 05:59:07 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:06:58.914 05:59:07 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:06:58.914 05:59:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:58.914 05:59:07 -- common/autotest_common.sh@889 -- # local i 00:06:58.914 05:59:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:58.914 05:59:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:58.914 05:59:07 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:06:58.914 05:59:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.914 05:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:58.914 05:59:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.914 05:59:07 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:06:58.914 05:59:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.914 05:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:58.914 [ 00:06:58.914 { 00:06:58.914 "name": "Dev_1", 00:06:58.914 "aliases": [ 00:06:58.914 "e8da3959-10ed-11ef-ba60-3508ead7bdda" 00:06:58.914 ], 00:06:58.914 "product_name": "Malloc disk", 00:06:58.914 "block_size": 512, 00:06:58.914 "num_blocks": 262144, 00:06:58.914 "uuid": "e8da3959-10ed-11ef-ba60-3508ead7bdda", 00:06:58.914 "assigned_rate_limits": { 00:06:58.914 "rw_ios_per_sec": 0, 00:06:58.914 "rw_mbytes_per_sec": 0, 00:06:58.914 "r_mbytes_per_sec": 0, 00:06:58.914 "w_mbytes_per_sec": 0 00:06:58.914 }, 00:06:58.914 "claimed": false, 00:06:58.914 "zoned": false, 00:06:58.914 "supported_io_types": { 00:06:58.914 "read": true, 00:06:58.914 "write": true, 00:06:58.914 "unmap": true, 00:06:58.914 "write_zeroes": true, 00:06:58.914 "flush": true, 00:06:58.914 "reset": true, 00:06:58.914 "compare": false, 00:06:58.914 "compare_and_write": false, 00:06:58.914 "abort": true, 00:06:58.914 "nvme_admin": false, 00:06:58.914 "nvme_io": false 00:06:58.914 }, 00:06:58.914 "memory_domains": [ 00:06:58.914 { 00:06:58.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.914 "dma_device_type": 2 00:06:58.914 } 00:06:58.914 ], 00:06:58.914 "driver_specific": {} 00:06:58.914 } 00:06:58.914 ] 00:06:58.914 05:59:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.914 05:59:07 -- common/autotest_common.sh@895 -- # return 0 00:06:58.914 05:59:07 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:06:58.914 05:59:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.914 05:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:59.174 true 
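The xtrace above is the heart of the bdev_error suite: stack an error-injection bdev on a malloc base, arm it with a finite failure budget, then drive reads through it with bdevperf. A minimal sketch of the same RPC flow (paths relative to the SPDK checkout, assuming a bdevperf started with -z is listening on the default /var/tmp/spdk.sock):

  # 128 MiB malloc base bdev with 512-byte blocks
  scripts/rpc.py bdev_malloc_create -b Dev_1 128 512
  # error-injection bdev stacked on top; it is exposed as EE_Dev_1
  scripts/rpc.py bdev_error_create Dev_1
  # fail the next 5 I/Os of any type submitted to EE_Dev_1
  scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5
  # release the queued job in the waiting bdevperf process
  examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests

The two runs in this log differ only in bdevperf's -f switch: the first run (pid 47433) passes -f and rides through the injected failures, as its "continue on error is set" message above confirms, while this second run (pid 47447) omits -f, so the injected errors are expected to abort perform_tests with the JSON-RPC error that the NOT/wait wrapper below treats as a pass.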
00:06:59.174 05:59:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.174 05:59:07 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:06:59.174 05:59:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.174 05:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:59.174 Dev_2 00:06:59.174 05:59:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.174 05:59:07 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:06:59.174 05:59:07 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:06:59.174 05:59:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:06:59.174 05:59:07 -- common/autotest_common.sh@889 -- # local i 00:06:59.174 05:59:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:06:59.174 05:59:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:06:59.174 05:59:07 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:06:59.174 05:59:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.174 05:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:59.174 05:59:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.174 05:59:07 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:06:59.174 05:59:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.174 05:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:59.174 [ 00:06:59.174 { 00:06:59.174 "name": "Dev_2", 00:06:59.174 "aliases": [ 00:06:59.174 "e8e2287a-10ed-11ef-ba60-3508ead7bdda" 00:06:59.174 ], 00:06:59.174 "product_name": "Malloc disk", 00:06:59.174 "block_size": 512, 00:06:59.174 "num_blocks": 262144, 00:06:59.174 "uuid": "e8e2287a-10ed-11ef-ba60-3508ead7bdda", 00:06:59.174 "assigned_rate_limits": { 00:06:59.174 "rw_ios_per_sec": 0, 00:06:59.174 "rw_mbytes_per_sec": 0, 00:06:59.174 "r_mbytes_per_sec": 0, 00:06:59.174 "w_mbytes_per_sec": 0 00:06:59.174 }, 00:06:59.174 "claimed": false, 00:06:59.174 "zoned": false, 00:06:59.174 "supported_io_types": { 00:06:59.174 "read": true, 00:06:59.174 "write": true, 00:06:59.174 "unmap": true, 00:06:59.174 "write_zeroes": true, 00:06:59.174 "flush": true, 00:06:59.174 "reset": true, 00:06:59.174 "compare": false, 00:06:59.174 "compare_and_write": false, 00:06:59.174 "abort": true, 00:06:59.174 "nvme_admin": false, 00:06:59.174 "nvme_io": false 00:06:59.174 }, 00:06:59.174 "memory_domains": [ 00:06:59.174 { 00:06:59.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.174 "dma_device_type": 2 00:06:59.174 } 00:06:59.174 ], 00:06:59.174 "driver_specific": {} 00:06:59.174 } 00:06:59.174 ] 00:06:59.174 05:59:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.175 05:59:07 -- common/autotest_common.sh@895 -- # return 0 00:06:59.175 05:59:07 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:06:59.175 05:59:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.175 05:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:59.175 05:59:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.175 05:59:07 -- bdev/blockdev.sh@513 -- # NOT wait 47447 00:06:59.175 05:59:07 -- bdev/blockdev.sh@512 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:06:59.175 05:59:07 -- common/autotest_common.sh@640 -- # local es=0 00:06:59.175 05:59:07 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 47447 00:06:59.175 05:59:07 -- common/autotest_common.sh@628 -- # local arg=wait 00:06:59.175 05:59:07 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.175 05:59:07 -- common/autotest_common.sh@632 -- # type -t wait 00:06:59.175 05:59:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.175 05:59:07 -- common/autotest_common.sh@643 -- # wait 47447 00:06:59.175 Running I/O for 5 seconds... 00:06:59.175 task offset: 215288 on job bdev=EE_Dev_1 fails 00:06:59.175 00:06:59.175 Latency(us) 00:06:59.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.175 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:59.175 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:06:59.175 EE_Dev_1 : 0.00 255813.95 999.27 58139.53 0.00 43.08 16.85 79.44 00:06:59.175 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:59.175 Dev_2 : 0.00 301886.79 1179.25 0.00 0.00 25.95 17.29 39.27 00:06:59.175 =================================================================================================================== 00:06:59.175 Total : 557700.75 2178.52 58139.53 0.00 33.79 16.85 79.44 00:06:59.175 [2024-05-13 05:59:07.353722] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.175 request: 00:06:59.175 { 00:06:59.175 "method": "perform_tests", 00:06:59.175 "req_id": 1 00:06:59.175 } 00:06:59.175 Got JSON-RPC error response 00:06:59.175 response: 00:06:59.175 { 00:06:59.175 "code": -32603, 00:06:59.175 "message": "bdevperf failed with error Operation not permitted" 00:06:59.175 } 00:06:59.435 05:59:07 -- common/autotest_common.sh@643 -- # es=255 00:06:59.435 05:59:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:59.435 05:59:07 -- common/autotest_common.sh@652 -- # es=127 00:06:59.435 05:59:07 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:59.435 05:59:07 -- common/autotest_common.sh@660 -- # es=1 00:06:59.435 05:59:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:59.435 00:06:59.435 real 0m9.282s 00:06:59.435 user 0m9.073s 00:06:59.435 sys 0m1.348s 00:06:59.435 05:59:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.435 05:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:59.435 ************************************ 00:06:59.435 END TEST bdev_error 00:06:59.435 ************************************ 00:06:59.435 05:59:07 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:06:59.435 05:59:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:59.435 05:59:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.435 05:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:59.435 ************************************ 00:06:59.435 START TEST bdev_stat 00:06:59.435 ************************************ 00:06:59.435 05:59:07 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:06:59.435 05:59:07 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:06:59.435 05:59:07 -- bdev/blockdev.sh@594 -- # STAT_PID=47470 00:06:59.435 Process Bdev IO statistics testing pid: 47470 00:06:59.435 05:59:07 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 47470' 00:06:59.435 05:59:07 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:06:59.435 05:59:07 -- bdev/blockdev.sh@593 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:06:59.435 05:59:07 -- bdev/blockdev.sh@597 -- # waitforlisten 47470 00:06:59.435 05:59:07 -- common/autotest_common.sh@819 -- # 
'[' -z 47470 ']' 00:06:59.435 05:59:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.435 05:59:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:59.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.435 05:59:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.435 05:59:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:59.435 05:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:59.435 [2024-05-13 05:59:07.595119] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:59.435 [2024-05-13 05:59:07.595477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:00.005 EAL: TSC is not safe to use in SMP mode 00:07:00.005 EAL: TSC is not invariant 00:07:00.005 [2024-05-13 05:59:08.023050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.005 [2024-05-13 05:59:08.109698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.005 [2024-05-13 05:59:08.109695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.265 05:59:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:00.265 05:59:08 -- common/autotest_common.sh@852 -- # return 0 00:07:00.265 05:59:08 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:07:00.265 05:59:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:00.265 05:59:08 -- common/autotest_common.sh@10 -- # set +x 00:07:00.265 Malloc_STAT 00:07:00.265 05:59:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:00.265 05:59:08 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:07:00.265 05:59:08 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:07:00.265 05:59:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:00.265 05:59:08 -- common/autotest_common.sh@889 -- # local i 00:07:00.265 05:59:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:00.265 05:59:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:00.265 05:59:08 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:07:00.265 05:59:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:00.265 05:59:08 -- common/autotest_common.sh@10 -- # set +x 00:07:00.265 05:59:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:00.265 05:59:08 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:07:00.265 05:59:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:00.265 05:59:08 -- common/autotest_common.sh@10 -- # set +x 00:07:00.265 [ 00:07:00.265 { 00:07:00.265 "name": "Malloc_STAT", 00:07:00.265 "aliases": [ 00:07:00.265 "e9a622cc-10ed-11ef-ba60-3508ead7bdda" 00:07:00.265 ], 00:07:00.265 "product_name": "Malloc disk", 00:07:00.265 "block_size": 512, 00:07:00.265 "num_blocks": 262144, 00:07:00.265 "uuid": "e9a622cc-10ed-11ef-ba60-3508ead7bdda", 00:07:00.265 "assigned_rate_limits": { 00:07:00.265 "rw_ios_per_sec": 0, 00:07:00.265 "rw_mbytes_per_sec": 0, 00:07:00.265 "r_mbytes_per_sec": 0, 00:07:00.265 "w_mbytes_per_sec": 0 00:07:00.265 }, 00:07:00.265 "claimed": false, 00:07:00.265 "zoned": false, 00:07:00.265 "supported_io_types": { 00:07:00.265 "read": true, 00:07:00.265 "write": true, 00:07:00.265 "unmap": true, 00:07:00.265 "write_zeroes": true, 
00:07:00.265 "flush": true, 00:07:00.265 "reset": true, 00:07:00.265 "compare": false, 00:07:00.265 "compare_and_write": false, 00:07:00.265 "abort": true, 00:07:00.265 "nvme_admin": false, 00:07:00.265 "nvme_io": false 00:07:00.265 }, 00:07:00.265 "memory_domains": [ 00:07:00.265 { 00:07:00.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.265 "dma_device_type": 2 00:07:00.265 } 00:07:00.265 ], 00:07:00.265 "driver_specific": {} 00:07:00.265 } 00:07:00.265 ] 00:07:00.265 05:59:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:00.265 05:59:08 -- common/autotest_common.sh@895 -- # return 0 00:07:00.265 05:59:08 -- bdev/blockdev.sh@603 -- # sleep 2 00:07:00.265 05:59:08 -- bdev/blockdev.sh@602 -- # /usr/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:00.526 Running I/O for 10 seconds... 00:07:02.433 05:59:10 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:07:02.433 05:59:10 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:07:02.433 05:59:10 -- bdev/blockdev.sh@558 -- # local iostats 00:07:02.433 05:59:10 -- bdev/blockdev.sh@559 -- # local io_count1 00:07:02.433 05:59:10 -- bdev/blockdev.sh@560 -- # local io_count2 00:07:02.433 05:59:10 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:07:02.433 05:59:10 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:07:02.433 05:59:10 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:07:02.433 05:59:10 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:07:02.433 05:59:10 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:07:02.433 05:59:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:02.433 05:59:10 -- common/autotest_common.sh@10 -- # set +x 00:07:02.433 05:59:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:02.433 05:59:10 -- bdev/blockdev.sh@566 -- # iostats='{ 00:07:02.433 "tick_rate": 2294600415, 00:07:02.433 "ticks": 754764277874, 00:07:02.433 "bdevs": [ 00:07:02.433 { 00:07:02.433 "name": "Malloc_STAT", 00:07:02.433 "bytes_read": 16366211584, 00:07:02.433 "num_read_ops": 3995651, 00:07:02.433 "bytes_written": 0, 00:07:02.433 "num_write_ops": 0, 00:07:02.433 "bytes_unmapped": 0, 00:07:02.433 "num_unmap_ops": 0, 00:07:02.433 "bytes_copied": 0, 00:07:02.433 "num_copy_ops": 0, 00:07:02.433 "read_latency_ticks": 2404284739482, 00:07:02.433 "max_read_latency_ticks": 821446, 00:07:02.433 "min_read_latency_ticks": 29148, 00:07:02.433 "write_latency_ticks": 0, 00:07:02.433 "max_write_latency_ticks": 0, 00:07:02.433 "min_write_latency_ticks": 0, 00:07:02.433 "unmap_latency_ticks": 0, 00:07:02.433 "max_unmap_latency_ticks": 0, 00:07:02.433 "min_unmap_latency_ticks": 0, 00:07:02.433 "copy_latency_ticks": 0, 00:07:02.433 "max_copy_latency_ticks": 0, 00:07:02.433 "min_copy_latency_ticks": 0, 00:07:02.433 "io_error": {} 00:07:02.433 } 00:07:02.433 ] 00:07:02.433 }' 00:07:02.433 05:59:10 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:07:02.433 05:59:10 -- bdev/blockdev.sh@567 -- # io_count1=3995651 00:07:02.433 05:59:10 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:07:02.433 05:59:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:02.433 05:59:10 -- common/autotest_common.sh@10 -- # set +x 00:07:02.693 05:59:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:02.693 05:59:10 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:07:02.693 "tick_rate": 2294600415, 00:07:02.693 "ticks": 754840498580, 00:07:02.693 "name": "Malloc_STAT", 
00:07:02.693 "channels": [ 00:07:02.693 { 00:07:02.693 "thread_id": 2, 00:07:02.693 "bytes_read": 8304721920, 00:07:02.693 "num_read_ops": 2027520, 00:07:02.693 "bytes_written": 0, 00:07:02.693 "num_write_ops": 0, 00:07:02.693 "bytes_unmapped": 0, 00:07:02.693 "num_unmap_ops": 0, 00:07:02.693 "bytes_copied": 0, 00:07:02.693 "num_copy_ops": 0, 00:07:02.693 "read_latency_ticks": 1221561518972, 00:07:02.693 "max_read_latency_ticks": 806780, 00:07:02.693 "min_read_latency_ticks": 569888, 00:07:02.693 "write_latency_ticks": 0, 00:07:02.693 "max_write_latency_ticks": 0, 00:07:02.693 "min_write_latency_ticks": 0, 00:07:02.693 "unmap_latency_ticks": 0, 00:07:02.693 "max_unmap_latency_ticks": 0, 00:07:02.693 "min_unmap_latency_ticks": 0, 00:07:02.693 "copy_latency_ticks": 0, 00:07:02.693 "max_copy_latency_ticks": 0, 00:07:02.693 "min_copy_latency_ticks": 0 00:07:02.693 }, 00:07:02.693 { 00:07:02.693 "thread_id": 3, 00:07:02.693 "bytes_read": 8315207680, 00:07:02.693 "num_read_ops": 2030080, 00:07:02.693 "bytes_written": 0, 00:07:02.693 "num_write_ops": 0, 00:07:02.693 "bytes_unmapped": 0, 00:07:02.693 "num_unmap_ops": 0, 00:07:02.693 "bytes_copied": 0, 00:07:02.693 "num_copy_ops": 0, 00:07:02.693 "read_latency_ticks": 1221632376302, 00:07:02.693 "max_read_latency_ticks": 821446, 00:07:02.693 "min_read_latency_ticks": 568858, 00:07:02.693 "write_latency_ticks": 0, 00:07:02.693 "max_write_latency_ticks": 0, 00:07:02.693 "min_write_latency_ticks": 0, 00:07:02.693 "unmap_latency_ticks": 0, 00:07:02.693 "max_unmap_latency_ticks": 0, 00:07:02.693 "min_unmap_latency_ticks": 0, 00:07:02.693 "copy_latency_ticks": 0, 00:07:02.693 "max_copy_latency_ticks": 0, 00:07:02.693 "min_copy_latency_ticks": 0 00:07:02.693 } 00:07:02.693 ] 00:07:02.693 }' 00:07:02.693 05:59:10 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:07:02.693 05:59:10 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=2027520 00:07:02.693 05:59:10 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=2027520 00:07:02.693 05:59:10 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:07:02.693 05:59:10 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=2030080 00:07:02.693 05:59:10 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=4057600 00:07:02.693 05:59:10 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:07:02.693 05:59:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:02.693 05:59:10 -- common/autotest_common.sh@10 -- # set +x 00:07:02.693 05:59:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:02.693 05:59:10 -- bdev/blockdev.sh@575 -- # iostats='{ 00:07:02.693 "tick_rate": 2294600415, 00:07:02.693 "ticks": 754952646856, 00:07:02.693 "bdevs": [ 00:07:02.693 { 00:07:02.693 "name": "Malloc_STAT", 00:07:02.693 "bytes_read": 17000600064, 00:07:02.693 "num_read_ops": 4150531, 00:07:02.693 "bytes_written": 0, 00:07:02.693 "num_write_ops": 0, 00:07:02.693 "bytes_unmapped": 0, 00:07:02.693 "num_unmap_ops": 0, 00:07:02.693 "bytes_copied": 0, 00:07:02.693 "num_copy_ops": 0, 00:07:02.693 "read_latency_ticks": 2500634262394, 00:07:02.693 "max_read_latency_ticks": 821446, 00:07:02.693 "min_read_latency_ticks": 29148, 00:07:02.693 "write_latency_ticks": 0, 00:07:02.693 "max_write_latency_ticks": 0, 00:07:02.693 "min_write_latency_ticks": 0, 00:07:02.693 "unmap_latency_ticks": 0, 00:07:02.693 "max_unmap_latency_ticks": 0, 00:07:02.693 "min_unmap_latency_ticks": 0, 00:07:02.693 "copy_latency_ticks": 0, 00:07:02.693 "max_copy_latency_ticks": 0, 00:07:02.693 
"min_copy_latency_ticks": 0, 00:07:02.693 "io_error": {} 00:07:02.693 } 00:07:02.693 ] 00:07:02.693 }' 00:07:02.693 05:59:10 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:07:02.693 05:59:10 -- bdev/blockdev.sh@576 -- # io_count2=4150531 00:07:02.693 05:59:10 -- bdev/blockdev.sh@581 -- # '[' 4057600 -lt 3995651 ']' 00:07:02.693 05:59:10 -- bdev/blockdev.sh@581 -- # '[' 4057600 -gt 4150531 ']' 00:07:02.693 05:59:10 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:07:02.693 05:59:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:02.693 05:59:10 -- common/autotest_common.sh@10 -- # set +x 00:07:02.693 00:07:02.693 Latency(us) 00:07:02.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.693 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:07:02.693 Malloc_STAT : 2.16 973435.34 3802.48 0.00 0.00 262.80 43.73 351.66 00:07:02.693 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:07:02.693 Malloc_STAT : 2.16 974705.81 3807.44 0.00 0.00 262.45 48.87 358.80 00:07:02.693 =================================================================================================================== 00:07:02.693 Total : 1948141.14 7609.93 0.00 0.00 262.63 43.73 358.80 00:07:02.693 0 00:07:02.693 05:59:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:02.693 05:59:10 -- bdev/blockdev.sh@607 -- # killprocess 47470 00:07:02.693 05:59:10 -- common/autotest_common.sh@926 -- # '[' -z 47470 ']' 00:07:02.693 05:59:10 -- common/autotest_common.sh@930 -- # kill -0 47470 00:07:02.693 05:59:10 -- common/autotest_common.sh@931 -- # uname 00:07:02.693 05:59:10 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:02.693 05:59:10 -- common/autotest_common.sh@934 -- # ps -c -o command 47470 00:07:02.693 05:59:10 -- common/autotest_common.sh@934 -- # tail -1 00:07:02.693 05:59:10 -- common/autotest_common.sh@934 -- # process_name=bdevperf 00:07:02.693 05:59:10 -- common/autotest_common.sh@936 -- # '[' bdevperf = sudo ']' 00:07:02.693 killing process with pid 47470 00:07:02.693 05:59:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47470' 00:07:02.693 05:59:10 -- common/autotest_common.sh@945 -- # kill 47470 00:07:02.693 Received shutdown signal, test time was about 2.199964 seconds 00:07:02.693 00:07:02.693 Latency(us) 00:07:02.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.693 =================================================================================================================== 00:07:02.693 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:02.693 05:59:10 -- common/autotest_common.sh@950 -- # wait 47470 00:07:02.693 05:59:10 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:07:02.693 00:07:02.693 real 0m3.388s 00:07:02.693 user 0m6.185s 00:07:02.693 sys 0m0.575s 00:07:02.693 05:59:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.693 05:59:10 -- common/autotest_common.sh@10 -- # set +x 00:07:02.693 ************************************ 00:07:02.693 END TEST bdev_stat 00:07:02.693 ************************************ 00:07:02.952 05:59:11 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:07:02.952 05:59:11 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:07:02.952 05:59:11 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:07:02.952 05:59:11 -- bdev/blockdev.sh@809 -- # cleanup 00:07:02.952 05:59:11 -- bdev/blockdev.sh@21 -- # rm -f 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:02.952 05:59:11 -- bdev/blockdev.sh@22 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:02.952 05:59:11 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:07:02.952 05:59:11 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:07:02.952 05:59:11 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:07:02.952 05:59:11 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:07:02.952 00:07:02.952 real 1m31.718s 00:07:02.952 user 4m29.379s 00:07:02.952 sys 0m27.402s 00:07:02.952 05:59:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.952 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:07:02.952 ************************************ 00:07:02.952 END TEST blockdev_general 00:07:02.952 ************************************ 00:07:02.952 05:59:11 -- spdk/autotest.sh@196 -- # run_test bdev_raid /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:02.952 05:59:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:02.952 05:59:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.952 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:07:02.952 ************************************ 00:07:02.952 START TEST bdev_raid 00:07:02.952 ************************************ 00:07:02.952 05:59:11 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:03.211 * Looking for test storage... 00:07:03.212 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@12 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:03.212 05:59:11 -- bdev/nbd_common.sh@6 -- # set -e 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@14 -- # rpc_py='/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@716 -- # uname -s 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@716 -- # '[' FreeBSD = Linux ']' 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:07:03.212 05:59:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:03.212 05:59:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.212 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:07:03.212 ************************************ 00:07:03.212 START TEST raid0_resize_test 00:07:03.212 ************************************ 00:07:03.212 05:59:11 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@301 -- # raid_pid=47557 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 47557' 00:07:03.212 Process raid pid: 47557 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@300 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:03.212 05:59:11 -- bdev/bdev_raid.sh@303 -- # waitforlisten 47557 /var/tmp/spdk-raid.sock 00:07:03.212 05:59:11 -- common/autotest_common.sh@819 -- # '[' -z 47557 ']' 
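Before the raid suites get going, it is worth unpacking the bdev_stat checks that just passed above: the suite snapshots device-wide counters, then per-channel counters, and requires the per-channel read total to land between the two device-wide snapshots, since I/O keeps flowing between probes. Roughly, against the same default RPC socket:

  # device-wide counters for the bdev
  scripts/rpc.py bdev_get_iostat -b Malloc_STAT
  # per-channel counters; one entry per thread submitting I/O
  scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c
  # summing the channels in one jq call is equivalent to the suite's
  # per-channel .channels[0]/.channels[1] reads added up in shell
  scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c | jq '[.channels[].num_read_ops] | add'

That is exactly what the bracket tests '[' 4057600 -lt 3995651 ']' and '[' 4057600 -gt 4150531 ']' above verify: the 4057600 per-channel reads fall between the 3995651 ops of the first snapshot and the 4150531 ops of the second, so both comparisons come out false, which is the passing outcome.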
00:07:03.212 05:59:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:03.212 05:59:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:03.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:03.212 05:59:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:03.212 05:59:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:03.212 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:07:03.212 [2024-05-13 05:59:11.325551] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:03.212 [2024-05-13 05:59:11.325898] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:03.471 EAL: TSC is not safe to use in SMP mode 00:07:03.471 EAL: TSC is not invariant 00:07:03.471 [2024-05-13 05:59:11.755791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.730 [2024-05-13 05:59:11.843699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.730 [2024-05-13 05:59:11.844134] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.730 [2024-05-13 05:59:11.844143] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.989 05:59:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:03.989 05:59:12 -- common/autotest_common.sh@852 -- # return 0 00:07:03.989 05:59:12 -- bdev/bdev_raid.sh@305 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:07:04.248 Base_1 00:07:04.248 05:59:12 -- bdev/bdev_raid.sh@306 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:07:04.508 Base_2 00:07:04.508 05:59:12 -- bdev/bdev_raid.sh@308 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:07:04.508 [2024-05-13 05:59:12.747030] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:04.508 [2024-05-13 05:59:12.747387] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:04.508 [2024-05-13 05:59:12.747412] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c9a8a00 00:07:04.508 [2024-05-13 05:59:12.747415] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:04.508 [2024-05-13 05:59:12.747441] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ca0be20 00:07:04.508 [2024-05-13 05:59:12.747479] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c9a8a00 00:07:04.508 [2024-05-13 05:59:12.747482] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x82c9a8a00 00:07:04.508 [2024-05-13 05:59:12.747504] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.508 05:59:12 -- bdev/bdev_raid.sh@311 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:07:04.767 [2024-05-13 05:59:12.943015] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:04.767 [2024-05-13 05:59:12.943029] bdev_raid.c:2083:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:04.767 true 
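The geometry behind this resize sequence is plain arithmetic: two 32 MiB null bdevs striped as raid0 give a 64 MiB array, i.e. 64 * 1024 * 1024 / 512 = 131072 blocks, and because a raid0's capacity tracks its smallest leg, growing Base_1 alone must leave num_blocks untouched; only once Base_2 is grown too may the array report 262144 blocks (128 MiB). The exchange, condensed to its RPCs against the test's dedicated socket:

  # two 32 MiB null bdevs, 512-byte blocks, striped as raid0
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
  # grow one leg: the array must still report 131072 blocks (64 MiB)
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid | jq '.[].num_blocks'
  # grow the other leg too: now 262144 blocks (128 MiB) is expected
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid | jq '.[].num_blocks'

The jq checks traced below are exactly these two num_blocks reads.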
00:07:04.767 05:59:12 -- bdev/bdev_raid.sh@314 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:04.767 05:59:12 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:07:05.026 [2024-05-13 05:59:13.131025] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.026 05:59:13 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:07:05.026 05:59:13 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:07:05.026 05:59:13 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:07:05.026 05:59:13 -- bdev/bdev_raid.sh@322 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:07:05.285 [2024-05-13 05:59:13.323023] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:05.285 [2024-05-13 05:59:13.323041] bdev_raid.c:2083:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:05.285 [2024-05-13 05:59:13.323062] raid0.c: 405:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:07:05.285 [2024-05-13 05:59:13.323070] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:05.285 true 00:07:05.285 05:59:13 -- bdev/bdev_raid.sh@325 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:05.285 05:59:13 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:07:05.285 [2024-05-13 05:59:13.507023] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.285 05:59:13 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:07:05.285 05:59:13 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:07:05.285 05:59:13 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:07:05.285 05:59:13 -- bdev/bdev_raid.sh@332 -- # killprocess 47557 00:07:05.285 05:59:13 -- common/autotest_common.sh@926 -- # '[' -z 47557 ']' 00:07:05.285 05:59:13 -- common/autotest_common.sh@930 -- # kill -0 47557 00:07:05.285 05:59:13 -- common/autotest_common.sh@931 -- # uname 00:07:05.285 05:59:13 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:05.285 05:59:13 -- common/autotest_common.sh@934 -- # ps -c -o command 47557 00:07:05.285 05:59:13 -- common/autotest_common.sh@934 -- # tail -1 00:07:05.285 05:59:13 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:05.285 05:59:13 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:05.285 killing process with pid 47557 00:07:05.285 05:59:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47557' 00:07:05.285 05:59:13 -- common/autotest_common.sh@945 -- # kill 47557 00:07:05.285 [2024-05-13 05:59:13.540099] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:05.285 [2024-05-13 05:59:13.540112] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.285 [2024-05-13 05:59:13.540130] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:05.285 [2024-05-13 05:59:13.540133] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c9a8a00 name Raid, state offline 00:07:05.285 [2024-05-13 05:59:13.540250] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.285 05:59:13 -- common/autotest_common.sh@950 -- # wait 47557 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@334 -- # return 0 00:07:05.545 00:07:05.545 real 0m2.370s 00:07:05.545 user 0m3.390s 00:07:05.545 sys 
0m0.640s 00:07:05.545 05:59:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.545 05:59:13 -- common/autotest_common.sh@10 -- # set +x 00:07:05.545 ************************************ 00:07:05.545 END TEST raid0_resize_test 00:07:05.545 ************************************ 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:05.545 05:59:13 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:05.545 05:59:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.545 05:59:13 -- common/autotest_common.sh@10 -- # set +x 00:07:05.545 ************************************ 00:07:05.545 START TEST raid_state_function_test 00:07:05.545 ************************************ 00:07:05.545 05:59:13 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@226 -- # raid_pid=47595 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 47595' 00:07:05.545 Process raid pid: 47595 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@228 -- # waitforlisten 47595 /var/tmp/spdk-raid.sock 00:07:05.545 05:59:13 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:05.545 05:59:13 -- common/autotest_common.sh@819 -- # '[' -z 47595 ']' 00:07:05.545 05:59:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:05.545 05:59:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:05.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
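The state-function suite starting here exercises the raid bdev state machine rather than I/O: a raid0 requested over base bdevs that do not exist yet must sit in "configuring", it may only go "online" once every base is claimed, and since raid0 carries no redundancy, losing a single base must drop it straight to "offline". Read back over the dedicated socket, the probe looks roughly like:

  # request a raid0 before its bases exist; the array waits in "configuring"
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
  # after bdev_malloc_create supplies both bases the same query reports
  # "online"; deleting one base afterwards should flip a raid0 to "offline"

The verify_raid_bdev_state helper traced below performs these bdev_raid_get_bdevs/jq reads at each step and compares the state, raid_level, strip_size_kb and base-bdev counts against the expected values.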
00:07:05.545 05:59:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:05.545 05:59:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:05.545 05:59:13 -- common/autotest_common.sh@10 -- # set +x 00:07:05.545 [2024-05-13 05:59:13.754457] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:05.545 [2024-05-13 05:59:13.754762] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:06.484 EAL: TSC is not safe to use in SMP mode 00:07:06.484 EAL: TSC is not invariant 00:07:06.484 [2024-05-13 05:59:14.485409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.484 [2024-05-13 05:59:14.580388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.484 [2024-05-13 05:59:14.580799] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.484 [2024-05-13 05:59:14.580808] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.484 05:59:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:06.484 05:59:14 -- common/autotest_common.sh@852 -- # return 0 00:07:06.484 05:59:14 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:06.743 [2024-05-13 05:59:14.799750] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:06.743 [2024-05-13 05:59:14.799786] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:06.743 [2024-05-13 05:59:14.799789] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:06.743 [2024-05-13 05:59:14.799795] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:06.743 "name": "Existed_Raid", 00:07:06.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.743 "strip_size_kb": 64, 00:07:06.743 "state": "configuring", 00:07:06.743 "raid_level": "raid0", 00:07:06.743 "superblock": false, 00:07:06.743 "num_base_bdevs": 2, 00:07:06.743 "num_base_bdevs_discovered": 0, 00:07:06.743 "num_base_bdevs_operational": 2, 00:07:06.743 "base_bdevs_list": [ 00:07:06.743 { 00:07:06.743 "name": 
"BaseBdev1", 00:07:06.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.743 "is_configured": false, 00:07:06.743 "data_offset": 0, 00:07:06.743 "data_size": 0 00:07:06.743 }, 00:07:06.743 { 00:07:06.743 "name": "BaseBdev2", 00:07:06.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.743 "is_configured": false, 00:07:06.743 "data_offset": 0, 00:07:06.743 "data_size": 0 00:07:06.743 } 00:07:06.743 ] 00:07:06.743 }' 00:07:06.743 05:59:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:06.743 05:59:14 -- common/autotest_common.sh@10 -- # set +x 00:07:07.003 05:59:15 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:07.266 [2024-05-13 05:59:15.427750] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:07.266 [2024-05-13 05:59:15.427767] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ae09500 name Existed_Raid, state configuring 00:07:07.266 05:59:15 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:07.534 [2024-05-13 05:59:15.611752] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:07.534 [2024-05-13 05:59:15.611779] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:07.534 [2024-05-13 05:59:15.611782] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:07.534 [2024-05-13 05:59:15.611787] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:07.534 05:59:15 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:07.534 [2024-05-13 05:59:15.796523] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:07.534 BaseBdev1 00:07:07.534 05:59:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:07.534 05:59:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:07.534 05:59:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:07.534 05:59:15 -- common/autotest_common.sh@889 -- # local i 00:07:07.534 05:59:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:07.534 05:59:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:07.534 05:59:15 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:07.801 05:59:15 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:08.061 [ 00:07:08.061 { 00:07:08.061 "name": "BaseBdev1", 00:07:08.061 "aliases": [ 00:07:08.061 "edfea81f-10ed-11ef-ba60-3508ead7bdda" 00:07:08.061 ], 00:07:08.061 "product_name": "Malloc disk", 00:07:08.061 "block_size": 512, 00:07:08.061 "num_blocks": 65536, 00:07:08.061 "uuid": "edfea81f-10ed-11ef-ba60-3508ead7bdda", 00:07:08.061 "assigned_rate_limits": { 00:07:08.061 "rw_ios_per_sec": 0, 00:07:08.061 "rw_mbytes_per_sec": 0, 00:07:08.061 "r_mbytes_per_sec": 0, 00:07:08.061 "w_mbytes_per_sec": 0 00:07:08.061 }, 00:07:08.061 "claimed": true, 00:07:08.061 "claim_type": "exclusive_write", 00:07:08.061 "zoned": false, 00:07:08.061 "supported_io_types": { 00:07:08.061 "read": true, 00:07:08.061 "write": true, 
00:07:08.061 "unmap": true, 00:07:08.061 "write_zeroes": true, 00:07:08.061 "flush": true, 00:07:08.061 "reset": true, 00:07:08.061 "compare": false, 00:07:08.061 "compare_and_write": false, 00:07:08.061 "abort": true, 00:07:08.061 "nvme_admin": false, 00:07:08.061 "nvme_io": false 00:07:08.061 }, 00:07:08.061 "memory_domains": [ 00:07:08.061 { 00:07:08.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.061 "dma_device_type": 2 00:07:08.061 } 00:07:08.061 ], 00:07:08.061 "driver_specific": {} 00:07:08.061 } 00:07:08.061 ] 00:07:08.061 05:59:16 -- common/autotest_common.sh@895 -- # return 0 00:07:08.061 05:59:16 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:08.061 05:59:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:08.061 05:59:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:08.061 05:59:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:08.061 05:59:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:08.061 05:59:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:08.061 05:59:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:08.061 05:59:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:08.061 05:59:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:08.061 05:59:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:08.061 05:59:16 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:08.061 05:59:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.321 05:59:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:08.321 "name": "Existed_Raid", 00:07:08.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.321 "strip_size_kb": 64, 00:07:08.321 "state": "configuring", 00:07:08.321 "raid_level": "raid0", 00:07:08.321 "superblock": false, 00:07:08.321 "num_base_bdevs": 2, 00:07:08.321 "num_base_bdevs_discovered": 1, 00:07:08.321 "num_base_bdevs_operational": 2, 00:07:08.321 "base_bdevs_list": [ 00:07:08.321 { 00:07:08.321 "name": "BaseBdev1", 00:07:08.321 "uuid": "edfea81f-10ed-11ef-ba60-3508ead7bdda", 00:07:08.321 "is_configured": true, 00:07:08.321 "data_offset": 0, 00:07:08.321 "data_size": 65536 00:07:08.321 }, 00:07:08.321 { 00:07:08.321 "name": "BaseBdev2", 00:07:08.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.321 "is_configured": false, 00:07:08.321 "data_offset": 0, 00:07:08.321 "data_size": 0 00:07:08.321 } 00:07:08.321 ] 00:07:08.321 }' 00:07:08.321 05:59:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:08.321 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:07:08.581 05:59:16 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:08.581 [2024-05-13 05:59:16.823767] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:08.581 [2024-05-13 05:59:16.823784] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ae09500 name Existed_Raid, state configuring 00:07:08.581 05:59:16 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:07:08.581 05:59:16 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:08.841 [2024-05-13 05:59:16.995778] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:07:08.841 [2024-05-13 05:59:16.996366] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:08.841 [2024-05-13 05:59:16.996400] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:08.841 05:59:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.101 05:59:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:09.101 "name": "Existed_Raid", 00:07:09.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.101 "strip_size_kb": 64, 00:07:09.101 "state": "configuring", 00:07:09.101 "raid_level": "raid0", 00:07:09.101 "superblock": false, 00:07:09.101 "num_base_bdevs": 2, 00:07:09.101 "num_base_bdevs_discovered": 1, 00:07:09.101 "num_base_bdevs_operational": 2, 00:07:09.101 "base_bdevs_list": [ 00:07:09.101 { 00:07:09.101 "name": "BaseBdev1", 00:07:09.101 "uuid": "edfea81f-10ed-11ef-ba60-3508ead7bdda", 00:07:09.101 "is_configured": true, 00:07:09.101 "data_offset": 0, 00:07:09.101 "data_size": 65536 00:07:09.101 }, 00:07:09.101 { 00:07:09.101 "name": "BaseBdev2", 00:07:09.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.101 "is_configured": false, 00:07:09.101 "data_offset": 0, 00:07:09.101 "data_size": 0 00:07:09.101 } 00:07:09.101 ] 00:07:09.101 }' 00:07:09.101 05:59:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:09.101 05:59:17 -- common/autotest_common.sh@10 -- # set +x 00:07:09.361 05:59:17 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:09.361 [2024-05-13 05:59:17.627886] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:09.361 [2024-05-13 05:59:17.627901] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ae09a00 00:07:09.361 [2024-05-13 05:59:17.627904] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:09.361 [2024-05-13 05:59:17.627919] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ae6cec0 00:07:09.361 [2024-05-13 05:59:17.627990] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ae09a00 00:07:09.361 [2024-05-13 05:59:17.627993] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ae09a00 00:07:09.361 [2024-05-13 05:59:17.628034] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.361 BaseBdev2 00:07:09.361 05:59:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:09.361 05:59:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:09.361 05:59:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:09.361 05:59:17 -- common/autotest_common.sh@889 -- # local i 00:07:09.361 05:59:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:09.361 05:59:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:09.361 05:59:17 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:09.620 05:59:17 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:09.880 [ 00:07:09.880 { 00:07:09.880 "name": "BaseBdev2", 00:07:09.880 "aliases": [ 00:07:09.880 "ef1633bb-10ed-11ef-ba60-3508ead7bdda" 00:07:09.880 ], 00:07:09.880 "product_name": "Malloc disk", 00:07:09.880 "block_size": 512, 00:07:09.880 "num_blocks": 65536, 00:07:09.880 "uuid": "ef1633bb-10ed-11ef-ba60-3508ead7bdda", 00:07:09.880 "assigned_rate_limits": { 00:07:09.880 "rw_ios_per_sec": 0, 00:07:09.880 "rw_mbytes_per_sec": 0, 00:07:09.880 "r_mbytes_per_sec": 0, 00:07:09.880 "w_mbytes_per_sec": 0 00:07:09.880 }, 00:07:09.880 "claimed": true, 00:07:09.880 "claim_type": "exclusive_write", 00:07:09.880 "zoned": false, 00:07:09.880 "supported_io_types": { 00:07:09.880 "read": true, 00:07:09.880 "write": true, 00:07:09.880 "unmap": true, 00:07:09.880 "write_zeroes": true, 00:07:09.880 "flush": true, 00:07:09.880 "reset": true, 00:07:09.880 "compare": false, 00:07:09.880 "compare_and_write": false, 00:07:09.880 "abort": true, 00:07:09.880 "nvme_admin": false, 00:07:09.880 "nvme_io": false 00:07:09.880 }, 00:07:09.880 "memory_domains": [ 00:07:09.880 { 00:07:09.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.880 "dma_device_type": 2 00:07:09.880 } 00:07:09.880 ], 00:07:09.880 "driver_specific": {} 00:07:09.880 } 00:07:09.880 ] 00:07:09.880 05:59:18 -- common/autotest_common.sh@895 -- # return 0 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:09.880 05:59:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.140 05:59:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:10.140 "name": "Existed_Raid", 00:07:10.140 "uuid": "ef163830-10ed-11ef-ba60-3508ead7bdda", 
00:07:10.140 "strip_size_kb": 64, 00:07:10.140 "state": "online", 00:07:10.140 "raid_level": "raid0", 00:07:10.140 "superblock": false, 00:07:10.140 "num_base_bdevs": 2, 00:07:10.140 "num_base_bdevs_discovered": 2, 00:07:10.140 "num_base_bdevs_operational": 2, 00:07:10.140 "base_bdevs_list": [ 00:07:10.140 { 00:07:10.140 "name": "BaseBdev1", 00:07:10.140 "uuid": "edfea81f-10ed-11ef-ba60-3508ead7bdda", 00:07:10.140 "is_configured": true, 00:07:10.140 "data_offset": 0, 00:07:10.140 "data_size": 65536 00:07:10.140 }, 00:07:10.140 { 00:07:10.140 "name": "BaseBdev2", 00:07:10.140 "uuid": "ef1633bb-10ed-11ef-ba60-3508ead7bdda", 00:07:10.140 "is_configured": true, 00:07:10.140 "data_offset": 0, 00:07:10.140 "data_size": 65536 00:07:10.140 } 00:07:10.140 ] 00:07:10.140 }' 00:07:10.140 05:59:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:10.140 05:59:18 -- common/autotest_common.sh@10 -- # set +x 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:10.400 [2024-05-13 05:59:18.627813] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:10.400 [2024-05-13 05:59:18.627829] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:10.400 [2024-05-13 05:59:18.627838] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:10.400 05:59:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.660 05:59:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:10.660 "name": "Existed_Raid", 00:07:10.660 "uuid": "ef163830-10ed-11ef-ba60-3508ead7bdda", 00:07:10.660 "strip_size_kb": 64, 00:07:10.660 "state": "offline", 00:07:10.660 "raid_level": "raid0", 00:07:10.660 "superblock": false, 00:07:10.660 "num_base_bdevs": 2, 00:07:10.660 "num_base_bdevs_discovered": 1, 00:07:10.660 "num_base_bdevs_operational": 1, 00:07:10.660 "base_bdevs_list": [ 00:07:10.660 { 00:07:10.660 "name": null, 00:07:10.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.660 "is_configured": false, 00:07:10.660 "data_offset": 0, 00:07:10.660 "data_size": 65536 00:07:10.660 }, 00:07:10.660 { 00:07:10.660 "name": "BaseBdev2", 
00:07:10.660 "uuid": "ef1633bb-10ed-11ef-ba60-3508ead7bdda", 00:07:10.660 "is_configured": true, 00:07:10.660 "data_offset": 0, 00:07:10.660 "data_size": 65536 00:07:10.660 } 00:07:10.660 ] 00:07:10.660 }' 00:07:10.660 05:59:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:10.660 05:59:18 -- common/autotest_common.sh@10 -- # set +x 00:07:10.920 05:59:19 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:10.920 05:59:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:10.920 05:59:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:10.920 05:59:19 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:11.180 05:59:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:11.180 05:59:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:11.180 05:59:19 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:11.180 [2024-05-13 05:59:19.456477] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:11.180 [2024-05-13 05:59:19.456493] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ae09a00 name Existed_Raid, state offline 00:07:11.180 05:59:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:11.180 05:59:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:11.180 05:59:19 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:11.180 05:59:19 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:11.440 05:59:19 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:11.440 05:59:19 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:11.440 05:59:19 -- bdev/bdev_raid.sh@287 -- # killprocess 47595 00:07:11.440 05:59:19 -- common/autotest_common.sh@926 -- # '[' -z 47595 ']' 00:07:11.440 05:59:19 -- common/autotest_common.sh@930 -- # kill -0 47595 00:07:11.440 05:59:19 -- common/autotest_common.sh@931 -- # uname 00:07:11.440 05:59:19 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:11.440 05:59:19 -- common/autotest_common.sh@934 -- # ps -c -o command 47595 00:07:11.440 05:59:19 -- common/autotest_common.sh@934 -- # tail -1 00:07:11.440 05:59:19 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:11.440 05:59:19 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:11.440 killing process with pid 47595 00:07:11.440 05:59:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47595' 00:07:11.440 05:59:19 -- common/autotest_common.sh@945 -- # kill 47595 00:07:11.440 [2024-05-13 05:59:19.679738] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.440 [2024-05-13 05:59:19.679768] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.440 05:59:19 -- common/autotest_common.sh@950 -- # wait 47595 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:11.700 00:07:11.700 real 0m6.088s 00:07:11.700 user 0m9.876s 00:07:11.700 sys 0m1.633s 00:07:11.700 05:59:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.700 05:59:19 -- common/autotest_common.sh@10 -- # set +x 00:07:11.700 ************************************ 00:07:11.700 END TEST raid_state_function_test 00:07:11.700 ************************************ 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:11.700 
05:59:19 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:11.700 05:59:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.700 05:59:19 -- common/autotest_common.sh@10 -- # set +x 00:07:11.700 ************************************ 00:07:11.700 START TEST raid_state_function_test_sb 00:07:11.700 ************************************ 00:07:11.700 05:59:19 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@226 -- # raid_pid=47791 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 47791' 00:07:11.700 Process raid pid: 47791 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:11.700 05:59:19 -- bdev/bdev_raid.sh@228 -- # waitforlisten 47791 /var/tmp/spdk-raid.sock 00:07:11.700 05:59:19 -- common/autotest_common.sh@819 -- # '[' -z 47791 ']' 00:07:11.700 05:59:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:11.700 05:59:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:11.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:11.700 05:59:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:11.700 05:59:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:11.700 05:59:19 -- common/autotest_common.sh@10 -- # set +x 00:07:11.700 [2024-05-13 05:59:19.899887] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:11.700 [2024-05-13 05:59:19.900132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:12.270 EAL: TSC is not safe to use in SMP mode 00:07:12.270 EAL: TSC is not invariant 00:07:12.270 [2024-05-13 05:59:20.490555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.530 [2024-05-13 05:59:20.605949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.530 [2024-05-13 05:59:20.606366] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.530 [2024-05-13 05:59:20.606375] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.530 05:59:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:12.530 05:59:20 -- common/autotest_common.sh@852 -- # return 0 00:07:12.530 05:59:20 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:12.789 [2024-05-13 05:59:20.928261] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:12.789 [2024-05-13 05:59:20.928321] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:12.789 [2024-05-13 05:59:20.928325] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:12.789 [2024-05-13 05:59:20.928332] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.789 05:59:20 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:12.789 05:59:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:12.789 05:59:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:12.789 05:59:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:12.789 05:59:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:12.789 05:59:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:12.789 05:59:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:12.789 05:59:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:12.789 05:59:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:12.789 05:59:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:12.789 05:59:20 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:12.789 05:59:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.048 05:59:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:13.048 "name": "Existed_Raid", 00:07:13.048 "uuid": "f10dcff4-10ed-11ef-ba60-3508ead7bdda", 00:07:13.048 "strip_size_kb": 64, 00:07:13.048 "state": "configuring", 00:07:13.048 "raid_level": "raid0", 00:07:13.048 "superblock": true, 00:07:13.048 "num_base_bdevs": 2, 00:07:13.048 "num_base_bdevs_discovered": 0, 00:07:13.048 "num_base_bdevs_operational": 2, 00:07:13.048 "base_bdevs_list": [ 00:07:13.048 { 00:07:13.048 "name": "BaseBdev1", 00:07:13.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.048 "is_configured": false, 00:07:13.048 "data_offset": 0, 00:07:13.048 "data_size": 0 00:07:13.048 }, 00:07:13.048 { 00:07:13.048 "name": "BaseBdev2", 00:07:13.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.048 "is_configured": false, 00:07:13.048 "data_offset": 0, 00:07:13.048 "data_size": 0 00:07:13.048 } 00:07:13.048 ] 
00:07:13.048 }' 00:07:13.048 05:59:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:13.048 05:59:21 -- common/autotest_common.sh@10 -- # set +x 00:07:13.308 05:59:21 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:13.308 [2024-05-13 05:59:21.568278] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:13.308 [2024-05-13 05:59:21.568307] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b652500 name Existed_Raid, state configuring 00:07:13.308 05:59:21 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:13.568 [2024-05-13 05:59:21.752311] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:13.568 [2024-05-13 05:59:21.752384] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:13.568 [2024-05-13 05:59:21.752388] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.568 [2024-05-13 05:59:21.752395] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.568 05:59:21 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:13.828 [2024-05-13 05:59:21.933418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:13.828 BaseBdev1 00:07:13.828 05:59:21 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:13.828 05:59:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:13.828 05:59:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:13.828 05:59:21 -- common/autotest_common.sh@889 -- # local i 00:07:13.828 05:59:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:13.828 05:59:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:13.828 05:59:21 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:14.088 05:59:22 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:14.088 [ 00:07:14.088 { 00:07:14.088 "name": "BaseBdev1", 00:07:14.088 "aliases": [ 00:07:14.088 "f1a70421-10ed-11ef-ba60-3508ead7bdda" 00:07:14.088 ], 00:07:14.088 "product_name": "Malloc disk", 00:07:14.088 "block_size": 512, 00:07:14.088 "num_blocks": 65536, 00:07:14.088 "uuid": "f1a70421-10ed-11ef-ba60-3508ead7bdda", 00:07:14.088 "assigned_rate_limits": { 00:07:14.088 "rw_ios_per_sec": 0, 00:07:14.088 "rw_mbytes_per_sec": 0, 00:07:14.088 "r_mbytes_per_sec": 0, 00:07:14.088 "w_mbytes_per_sec": 0 00:07:14.088 }, 00:07:14.088 "claimed": true, 00:07:14.088 "claim_type": "exclusive_write", 00:07:14.088 "zoned": false, 00:07:14.088 "supported_io_types": { 00:07:14.088 "read": true, 00:07:14.088 "write": true, 00:07:14.088 "unmap": true, 00:07:14.088 "write_zeroes": true, 00:07:14.088 "flush": true, 00:07:14.088 "reset": true, 00:07:14.088 "compare": false, 00:07:14.088 "compare_and_write": false, 00:07:14.088 "abort": true, 00:07:14.088 "nvme_admin": false, 00:07:14.088 "nvme_io": false 00:07:14.088 }, 00:07:14.088 "memory_domains": [ 00:07:14.088 { 00:07:14.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.088 
"dma_device_type": 2 00:07:14.088 } 00:07:14.088 ], 00:07:14.088 "driver_specific": {} 00:07:14.088 } 00:07:14.088 ] 00:07:14.088 05:59:22 -- common/autotest_common.sh@895 -- # return 0 00:07:14.088 05:59:22 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:14.088 05:59:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:14.088 05:59:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:14.088 05:59:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:14.088 05:59:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:14.088 05:59:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:14.088 05:59:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:14.088 05:59:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:14.088 05:59:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:14.088 05:59:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:14.088 05:59:22 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:14.088 05:59:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.347 05:59:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:14.347 "name": "Existed_Raid", 00:07:14.347 "uuid": "f18b8d59-10ed-11ef-ba60-3508ead7bdda", 00:07:14.347 "strip_size_kb": 64, 00:07:14.347 "state": "configuring", 00:07:14.347 "raid_level": "raid0", 00:07:14.347 "superblock": true, 00:07:14.347 "num_base_bdevs": 2, 00:07:14.347 "num_base_bdevs_discovered": 1, 00:07:14.347 "num_base_bdevs_operational": 2, 00:07:14.347 "base_bdevs_list": [ 00:07:14.347 { 00:07:14.347 "name": "BaseBdev1", 00:07:14.347 "uuid": "f1a70421-10ed-11ef-ba60-3508ead7bdda", 00:07:14.347 "is_configured": true, 00:07:14.347 "data_offset": 2048, 00:07:14.347 "data_size": 63488 00:07:14.347 }, 00:07:14.347 { 00:07:14.347 "name": "BaseBdev2", 00:07:14.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.347 "is_configured": false, 00:07:14.347 "data_offset": 0, 00:07:14.347 "data_size": 0 00:07:14.347 } 00:07:14.347 ] 00:07:14.347 }' 00:07:14.347 05:59:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:14.347 05:59:22 -- common/autotest_common.sh@10 -- # set +x 00:07:14.606 05:59:22 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:14.866 [2024-05-13 05:59:22.952279] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:14.866 [2024-05-13 05:59:22.952311] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b652500 name Existed_Raid, state configuring 00:07:14.866 05:59:22 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:07:14.866 05:59:22 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:15.133 05:59:23 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:15.133 BaseBdev1 00:07:15.133 05:59:23 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:07:15.133 05:59:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:15.133 05:59:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:15.133 05:59:23 -- common/autotest_common.sh@889 -- # local i 00:07:15.133 05:59:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 
00:07:15.133 05:59:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:15.133 05:59:23 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:15.400 05:59:23 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:15.659 [ 00:07:15.659 { 00:07:15.659 "name": "BaseBdev1", 00:07:15.659 "aliases": [ 00:07:15.659 "f27c392f-10ed-11ef-ba60-3508ead7bdda" 00:07:15.659 ], 00:07:15.659 "product_name": "Malloc disk", 00:07:15.659 "block_size": 512, 00:07:15.659 "num_blocks": 65536, 00:07:15.659 "uuid": "f27c392f-10ed-11ef-ba60-3508ead7bdda", 00:07:15.659 "assigned_rate_limits": { 00:07:15.659 "rw_ios_per_sec": 0, 00:07:15.659 "rw_mbytes_per_sec": 0, 00:07:15.659 "r_mbytes_per_sec": 0, 00:07:15.659 "w_mbytes_per_sec": 0 00:07:15.659 }, 00:07:15.659 "claimed": false, 00:07:15.659 "zoned": false, 00:07:15.659 "supported_io_types": { 00:07:15.659 "read": true, 00:07:15.659 "write": true, 00:07:15.659 "unmap": true, 00:07:15.659 "write_zeroes": true, 00:07:15.659 "flush": true, 00:07:15.659 "reset": true, 00:07:15.659 "compare": false, 00:07:15.659 "compare_and_write": false, 00:07:15.659 "abort": true, 00:07:15.659 "nvme_admin": false, 00:07:15.659 "nvme_io": false 00:07:15.659 }, 00:07:15.659 "memory_domains": [ 00:07:15.659 { 00:07:15.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.659 "dma_device_type": 2 00:07:15.659 } 00:07:15.659 ], 00:07:15.659 "driver_specific": {} 00:07:15.659 } 00:07:15.659 ] 00:07:15.659 05:59:23 -- common/autotest_common.sh@895 -- # return 0 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:15.659 [2024-05-13 05:59:23.889624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:15.659 [2024-05-13 05:59:23.890337] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:15.659 [2024-05-13 05:59:23.890382] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:15.659 05:59:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.918 05:59:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:07:15.918 "name": "Existed_Raid", 00:07:15.918 "uuid": "f2d1ae1f-10ed-11ef-ba60-3508ead7bdda", 00:07:15.918 "strip_size_kb": 64, 00:07:15.918 "state": "configuring", 00:07:15.918 "raid_level": "raid0", 00:07:15.918 "superblock": true, 00:07:15.918 "num_base_bdevs": 2, 00:07:15.918 "num_base_bdevs_discovered": 1, 00:07:15.918 "num_base_bdevs_operational": 2, 00:07:15.918 "base_bdevs_list": [ 00:07:15.918 { 00:07:15.918 "name": "BaseBdev1", 00:07:15.918 "uuid": "f27c392f-10ed-11ef-ba60-3508ead7bdda", 00:07:15.918 "is_configured": true, 00:07:15.918 "data_offset": 2048, 00:07:15.918 "data_size": 63488 00:07:15.918 }, 00:07:15.918 { 00:07:15.918 "name": "BaseBdev2", 00:07:15.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:15.918 "is_configured": false, 00:07:15.918 "data_offset": 0, 00:07:15.918 "data_size": 0 00:07:15.918 } 00:07:15.918 ] 00:07:15.918 }' 00:07:15.918 05:59:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:15.918 05:59:24 -- common/autotest_common.sh@10 -- # set +x 00:07:16.177 05:59:24 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:16.438 [2024-05-13 05:59:24.517820] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:16.438 [2024-05-13 05:59:24.517922] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b652a00 00:07:16.438 [2024-05-13 05:59:24.517927] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:16.438 [2024-05-13 05:59:24.517945] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b6b5ec0 00:07:16.438 [2024-05-13 05:59:24.517977] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b652a00 00:07:16.438 [2024-05-13 05:59:24.517981] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b652a00 00:07:16.438 [2024-05-13 05:59:24.517996] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.438 BaseBdev2 00:07:16.438 05:59:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:16.438 05:59:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:16.438 05:59:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:16.438 05:59:24 -- common/autotest_common.sh@889 -- # local i 00:07:16.438 05:59:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:16.438 05:59:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:16.438 05:59:24 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:16.438 05:59:24 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:16.698 [ 00:07:16.698 { 00:07:16.698 "name": "BaseBdev2", 00:07:16.698 "aliases": [ 00:07:16.698 "f33183c2-10ed-11ef-ba60-3508ead7bdda" 00:07:16.698 ], 00:07:16.698 "product_name": "Malloc disk", 00:07:16.698 "block_size": 512, 00:07:16.698 "num_blocks": 65536, 00:07:16.698 "uuid": "f33183c2-10ed-11ef-ba60-3508ead7bdda", 00:07:16.698 "assigned_rate_limits": { 00:07:16.698 "rw_ios_per_sec": 0, 00:07:16.698 "rw_mbytes_per_sec": 0, 00:07:16.698 "r_mbytes_per_sec": 0, 00:07:16.698 "w_mbytes_per_sec": 0 00:07:16.698 }, 00:07:16.698 "claimed": true, 00:07:16.698 "claim_type": "exclusive_write", 00:07:16.698 "zoned": false, 00:07:16.698 "supported_io_types": { 
00:07:16.698 "read": true, 00:07:16.698 "write": true, 00:07:16.698 "unmap": true, 00:07:16.698 "write_zeroes": true, 00:07:16.698 "flush": true, 00:07:16.698 "reset": true, 00:07:16.698 "compare": false, 00:07:16.698 "compare_and_write": false, 00:07:16.698 "abort": true, 00:07:16.698 "nvme_admin": false, 00:07:16.698 "nvme_io": false 00:07:16.698 }, 00:07:16.698 "memory_domains": [ 00:07:16.698 { 00:07:16.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.698 "dma_device_type": 2 00:07:16.698 } 00:07:16.698 ], 00:07:16.698 "driver_specific": {} 00:07:16.698 } 00:07:16.698 ] 00:07:16.698 05:59:24 -- common/autotest_common.sh@895 -- # return 0 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:16.698 05:59:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.957 05:59:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:16.957 "name": "Existed_Raid", 00:07:16.957 "uuid": "f2d1ae1f-10ed-11ef-ba60-3508ead7bdda", 00:07:16.957 "strip_size_kb": 64, 00:07:16.957 "state": "online", 00:07:16.957 "raid_level": "raid0", 00:07:16.957 "superblock": true, 00:07:16.957 "num_base_bdevs": 2, 00:07:16.957 "num_base_bdevs_discovered": 2, 00:07:16.957 "num_base_bdevs_operational": 2, 00:07:16.957 "base_bdevs_list": [ 00:07:16.957 { 00:07:16.957 "name": "BaseBdev1", 00:07:16.957 "uuid": "f27c392f-10ed-11ef-ba60-3508ead7bdda", 00:07:16.957 "is_configured": true, 00:07:16.957 "data_offset": 2048, 00:07:16.957 "data_size": 63488 00:07:16.957 }, 00:07:16.957 { 00:07:16.957 "name": "BaseBdev2", 00:07:16.957 "uuid": "f33183c2-10ed-11ef-ba60-3508ead7bdda", 00:07:16.957 "is_configured": true, 00:07:16.957 "data_offset": 2048, 00:07:16.957 "data_size": 63488 00:07:16.957 } 00:07:16.957 ] 00:07:16.957 }' 00:07:16.957 05:59:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:16.957 05:59:25 -- common/autotest_common.sh@10 -- # set +x 00:07:17.217 05:59:25 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:17.476 [2024-05-13 05:59:25.549662] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:17.476 [2024-05-13 05:59:25.549685] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.476 [2024-05-13 05:59:25.549698] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@264 
-- # has_redundancy raid0 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:17.476 "name": "Existed_Raid", 00:07:17.476 "uuid": "f2d1ae1f-10ed-11ef-ba60-3508ead7bdda", 00:07:17.476 "strip_size_kb": 64, 00:07:17.476 "state": "offline", 00:07:17.476 "raid_level": "raid0", 00:07:17.476 "superblock": true, 00:07:17.476 "num_base_bdevs": 2, 00:07:17.476 "num_base_bdevs_discovered": 1, 00:07:17.476 "num_base_bdevs_operational": 1, 00:07:17.476 "base_bdevs_list": [ 00:07:17.476 { 00:07:17.476 "name": null, 00:07:17.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.476 "is_configured": false, 00:07:17.476 "data_offset": 2048, 00:07:17.476 "data_size": 63488 00:07:17.476 }, 00:07:17.476 { 00:07:17.476 "name": "BaseBdev2", 00:07:17.476 "uuid": "f33183c2-10ed-11ef-ba60-3508ead7bdda", 00:07:17.476 "is_configured": true, 00:07:17.476 "data_offset": 2048, 00:07:17.476 "data_size": 63488 00:07:17.476 } 00:07:17.476 ] 00:07:17.476 }' 00:07:17.476 05:59:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:17.476 05:59:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.045 05:59:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:18.045 05:59:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:18.045 05:59:26 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:18.045 05:59:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:18.046 05:59:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:18.046 05:59:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:18.046 05:59:26 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:18.305 [2024-05-13 05:59:26.426819] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:18.305 [2024-05-13 05:59:26.426886] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b652a00 name Existed_Raid, state offline 00:07:18.305 05:59:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:18.305 05:59:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:18.305 05:59:26 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:18.305 05:59:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:18.565 05:59:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:18.565 05:59:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:18.565 05:59:26 -- bdev/bdev_raid.sh@287 -- # killprocess 47791 00:07:18.565 05:59:26 -- common/autotest_common.sh@926 -- # '[' -z 47791 ']' 00:07:18.565 05:59:26 -- common/autotest_common.sh@930 -- # kill -0 47791 00:07:18.565 05:59:26 -- common/autotest_common.sh@931 -- # uname 00:07:18.565 05:59:26 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:18.565 05:59:26 -- common/autotest_common.sh@934 -- # ps -c -o command 47791 00:07:18.565 05:59:26 -- common/autotest_common.sh@934 -- # tail -1 00:07:18.565 05:59:26 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:18.565 05:59:26 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:18.565 killing process with pid 47791 00:07:18.565 05:59:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47791' 00:07:18.565 05:59:26 -- common/autotest_common.sh@945 -- # kill 47791 00:07:18.565 [2024-05-13 05:59:26.636691] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.565 [2024-05-13 05:59:26.636748] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.565 05:59:26 -- common/autotest_common.sh@950 -- # wait 47791 00:07:18.565 05:59:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:18.565 00:07:18.565 real 0m6.976s 00:07:18.565 user 0m11.612s 00:07:18.565 sys 0m1.548s 00:07:18.565 05:59:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.565 05:59:26 -- common/autotest_common.sh@10 -- # set +x 00:07:18.565 ************************************ 00:07:18.565 END TEST raid_state_function_test_sb 00:07:18.565 ************************************ 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:18.824 05:59:26 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:18.824 05:59:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.824 05:59:26 -- common/autotest_common.sh@10 -- # set +x 00:07:18.824 ************************************ 00:07:18.824 START TEST raid_superblock_test 00:07:18.824 ************************************ 00:07:18.824 05:59:26 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@350 -- # 
strip_size=64 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@357 -- # raid_pid=47990 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@358 -- # waitforlisten 47990 /var/tmp/spdk-raid.sock 00:07:18.824 05:59:26 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:18.824 05:59:26 -- common/autotest_common.sh@819 -- # '[' -z 47990 ']' 00:07:18.824 05:59:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:18.824 05:59:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:18.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:18.824 05:59:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:18.824 05:59:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:18.824 05:59:26 -- common/autotest_common.sh@10 -- # set +x 00:07:18.824 [2024-05-13 05:59:26.919685] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:18.824 [2024-05-13 05:59:26.920041] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:19.084 EAL: TSC is not safe to use in SMP mode 00:07:19.084 EAL: TSC is not invariant 00:07:19.084 [2024-05-13 05:59:27.350105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.344 [2024-05-13 05:59:27.465916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.344 [2024-05-13 05:59:27.466341] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.344 [2024-05-13 05:59:27.466351] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.603 05:59:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:19.603 05:59:27 -- common/autotest_common.sh@852 -- # return 0 00:07:19.603 05:59:27 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:07:19.603 05:59:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:19.603 05:59:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:07:19.603 05:59:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:07:19.603 05:59:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:19.603 05:59:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:19.603 05:59:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:19.603 05:59:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:19.603 05:59:27 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:19.863 malloc1 00:07:19.863 05:59:28 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:20.123 [2024-05-13 05:59:28.167942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:20.123 [2024-05-13 05:59:28.167996] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.123 [2024-05-13 05:59:28.168523] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b4f0780 00:07:20.123 [2024-05-13 05:59:28.168547] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.123 [2024-05-13 05:59:28.169353] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.123 [2024-05-13 05:59:28.169383] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:20.123 pt1 00:07:20.123 05:59:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:20.123 05:59:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:20.123 05:59:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:07:20.123 05:59:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:07:20.123 05:59:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:20.123 05:59:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:20.123 05:59:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:20.123 05:59:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:20.123 05:59:28 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:20.123 malloc2 00:07:20.123 05:59:28 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:20.383 [2024-05-13 05:59:28.531939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:20.383 [2024-05-13 05:59:28.531973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.383 [2024-05-13 05:59:28.531993] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b4f0c80 00:07:20.383 [2024-05-13 05:59:28.531999] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.383 [2024-05-13 05:59:28.532300] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.383 [2024-05-13 05:59:28.532319] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:20.383 pt2 00:07:20.383 05:59:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:20.383 05:59:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:20.383 05:59:28 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:07:20.643 [2024-05-13 05:59:28.699972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:20.643 [2024-05-13 05:59:28.700644] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:20.643 [2024-05-13 05:59:28.700713] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b4f0f00 00:07:20.643 [2024-05-13 05:59:28.700719] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:20.643 [2024-05-13 05:59:28.700751] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b553e20 00:07:20.643 [2024-05-13 05:59:28.700827] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b4f0f00 00:07:20.643 [2024-05-13 05:59:28.700830] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b4f0f00 00:07:20.643 [2024-05-13 05:59:28.700850] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.643 05:59:28 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:20.643 05:59:28 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:20.643 05:59:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:20.643 05:59:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:20.643 05:59:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:20.643 05:59:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:20.643 05:59:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:20.643 05:59:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:20.643 05:59:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:20.643 05:59:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:20.643 05:59:28 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:20.643 05:59:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.643 05:59:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:20.643 "name": "raid_bdev1", 00:07:20.643 "uuid": "f5afae5d-10ed-11ef-ba60-3508ead7bdda", 00:07:20.643 "strip_size_kb": 64, 00:07:20.643 "state": "online", 00:07:20.643 "raid_level": "raid0", 00:07:20.643 "superblock": true, 00:07:20.643 "num_base_bdevs": 2, 00:07:20.643 "num_base_bdevs_discovered": 2, 00:07:20.643 "num_base_bdevs_operational": 2, 00:07:20.643 "base_bdevs_list": [ 00:07:20.643 { 00:07:20.643 "name": "pt1", 00:07:20.643 "uuid": "abcc12ed-74d0-3559-a1c4-fb7cb6ea7690", 00:07:20.643 "is_configured": true, 00:07:20.643 "data_offset": 2048, 00:07:20.643 "data_size": 63488 00:07:20.643 }, 00:07:20.643 { 00:07:20.643 "name": "pt2", 00:07:20.643 "uuid": "aabd24cd-155c-1256-ba22-109a3a01152f", 00:07:20.643 "is_configured": true, 00:07:20.643 "data_offset": 2048, 00:07:20.643 "data_size": 63488 00:07:20.643 } 00:07:20.643 ] 00:07:20.643 }' 00:07:20.643 05:59:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:20.643 05:59:28 -- common/autotest_common.sh@10 -- # set +x 00:07:20.903 05:59:29 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:20.903 05:59:29 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:07:21.162 [2024-05-13 05:59:29.316019] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.162 05:59:29 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f5afae5d-10ed-11ef-ba60-3508ead7bdda 00:07:21.162 05:59:29 -- bdev/bdev_raid.sh@380 -- # '[' -z f5afae5d-10ed-11ef-ba60-3508ead7bdda ']' 00:07:21.162 05:59:29 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:21.422 [2024-05-13 05:59:29.499953] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:21.422 [2024-05-13 05:59:29.499970] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.422 [2024-05-13 05:59:29.499989] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.422 [2024-05-13 05:59:29.500002] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.422 [2024-05-13 05:59:29.500005] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b4f0f00 name raid_bdev1, state offline 00:07:21.422 05:59:29 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:21.422 05:59:29 -- 
bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:07:21.422 05:59:29 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:07:21.422 05:59:29 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:07:21.422 05:59:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:21.422 05:59:29 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:21.681 05:59:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:21.681 05:59:29 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:21.940 05:59:30 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:21.940 05:59:30 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:22.200 05:59:30 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:07:22.200 05:59:30 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:22.200 05:59:30 -- common/autotest_common.sh@640 -- # local es=0 00:07:22.200 05:59:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:22.200 05:59:30 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.200 05:59:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:22.200 05:59:30 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.200 05:59:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:22.200 05:59:30 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.200 05:59:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:22.200 05:59:30 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.200 05:59:30 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:22.200 05:59:30 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:22.200 [2024-05-13 05:59:30.423974] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:22.200 [2024-05-13 05:59:30.424696] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:22.200 [2024-05-13 05:59:30.424721] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:07:22.200 [2024-05-13 05:59:30.424756] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:07:22.200 [2024-05-13 05:59:30.424763] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:22.200 [2024-05-13 05:59:30.424767] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b4f0c80 name raid_bdev1, state configuring 00:07:22.200 request: 00:07:22.200 { 00:07:22.200 "name": "raid_bdev1", 00:07:22.200 "raid_level": "raid0", 00:07:22.200 "base_bdevs": [ 00:07:22.200 "malloc1", 00:07:22.200 "malloc2" 00:07:22.200 ], 00:07:22.200 "superblock": 
false, 00:07:22.200 "strip_size_kb": 64, 00:07:22.200 "method": "bdev_raid_create", 00:07:22.200 "req_id": 1 00:07:22.200 } 00:07:22.200 Got JSON-RPC error response 00:07:22.200 response: 00:07:22.200 { 00:07:22.200 "code": -17, 00:07:22.200 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:22.200 } 00:07:22.200 05:59:30 -- common/autotest_common.sh@643 -- # es=1 00:07:22.200 05:59:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:22.200 05:59:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:22.200 05:59:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:22.200 05:59:30 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:07:22.200 05:59:30 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:22.460 05:59:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:07:22.460 05:59:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:07:22.460 05:59:30 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:22.719 [2024-05-13 05:59:30.799967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:22.719 [2024-05-13 05:59:30.800004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.719 [2024-05-13 05:59:30.800033] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b4f0780 00:07:22.719 [2024-05-13 05:59:30.800039] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.719 [2024-05-13 05:59:30.800431] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.719 [2024-05-13 05:59:30.800455] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:22.719 [2024-05-13 05:59:30.800472] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:07:22.719 [2024-05-13 05:59:30.800480] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:22.719 pt1 00:07:22.719 05:59:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:22.719 05:59:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:22.719 05:59:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:22.719 05:59:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:22.719 05:59:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:22.719 05:59:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:22.719 05:59:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:22.720 05:59:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:22.720 05:59:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:22.720 05:59:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:22.720 05:59:30 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:22.720 05:59:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.979 05:59:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:22.979 "name": "raid_bdev1", 00:07:22.979 "uuid": "f5afae5d-10ed-11ef-ba60-3508ead7bdda", 00:07:22.979 "strip_size_kb": 64, 00:07:22.979 "state": "configuring", 00:07:22.979 "raid_level": "raid0", 00:07:22.979 "superblock": true, 00:07:22.980 "num_base_bdevs": 2, 00:07:22.980 
"num_base_bdevs_discovered": 1, 00:07:22.980 "num_base_bdevs_operational": 2, 00:07:22.980 "base_bdevs_list": [ 00:07:22.980 { 00:07:22.980 "name": "pt1", 00:07:22.980 "uuid": "abcc12ed-74d0-3559-a1c4-fb7cb6ea7690", 00:07:22.980 "is_configured": true, 00:07:22.980 "data_offset": 2048, 00:07:22.980 "data_size": 63488 00:07:22.980 }, 00:07:22.980 { 00:07:22.980 "name": null, 00:07:22.980 "uuid": "aabd24cd-155c-1256-ba22-109a3a01152f", 00:07:22.980 "is_configured": false, 00:07:22.980 "data_offset": 2048, 00:07:22.980 "data_size": 63488 00:07:22.980 } 00:07:22.980 ] 00:07:22.980 }' 00:07:22.980 05:59:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:22.980 05:59:31 -- common/autotest_common.sh@10 -- # set +x 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:23.240 [2024-05-13 05:59:31.468023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:23.240 [2024-05-13 05:59:31.468084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.240 [2024-05-13 05:59:31.468118] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82b4f0f00 00:07:23.240 [2024-05-13 05:59:31.468124] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.240 [2024-05-13 05:59:31.468252] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.240 [2024-05-13 05:59:31.468265] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:23.240 [2024-05-13 05:59:31.468287] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:07:23.240 [2024-05-13 05:59:31.468293] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:23.240 [2024-05-13 05:59:31.468317] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b4f1180 00:07:23.240 [2024-05-13 05:59:31.468320] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.240 [2024-05-13 05:59:31.468335] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b553e20 00:07:23.240 [2024-05-13 05:59:31.468378] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b4f1180 00:07:23.240 [2024-05-13 05:59:31.468382] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82b4f1180 00:07:23.240 [2024-05-13 05:59:31.468398] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.240 pt2 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:23.240 05:59:31 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:23.240 05:59:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.500 05:59:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:23.500 "name": "raid_bdev1", 00:07:23.500 "uuid": "f5afae5d-10ed-11ef-ba60-3508ead7bdda", 00:07:23.500 "strip_size_kb": 64, 00:07:23.500 "state": "online", 00:07:23.500 "raid_level": "raid0", 00:07:23.500 "superblock": true, 00:07:23.500 "num_base_bdevs": 2, 00:07:23.500 "num_base_bdevs_discovered": 2, 00:07:23.500 "num_base_bdevs_operational": 2, 00:07:23.500 "base_bdevs_list": [ 00:07:23.500 { 00:07:23.500 "name": "pt1", 00:07:23.500 "uuid": "abcc12ed-74d0-3559-a1c4-fb7cb6ea7690", 00:07:23.500 "is_configured": true, 00:07:23.500 "data_offset": 2048, 00:07:23.500 "data_size": 63488 00:07:23.500 }, 00:07:23.500 { 00:07:23.500 "name": "pt2", 00:07:23.500 "uuid": "aabd24cd-155c-1256-ba22-109a3a01152f", 00:07:23.500 "is_configured": true, 00:07:23.500 "data_offset": 2048, 00:07:23.500 "data_size": 63488 00:07:23.500 } 00:07:23.500 ] 00:07:23.500 }' 00:07:23.500 05:59:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:23.500 05:59:31 -- common/autotest_common.sh@10 -- # set +x 00:07:23.760 05:59:31 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:23.760 05:59:31 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:07:24.031 [2024-05-13 05:59:32.108059] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:24.031 05:59:32 -- bdev/bdev_raid.sh@430 -- # '[' f5afae5d-10ed-11ef-ba60-3508ead7bdda '!=' f5afae5d-10ed-11ef-ba60-3508ead7bdda ']' 00:07:24.031 05:59:32 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:07:24.031 05:59:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:24.031 05:59:32 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:24.031 05:59:32 -- bdev/bdev_raid.sh@511 -- # killprocess 47990 00:07:24.031 05:59:32 -- common/autotest_common.sh@926 -- # '[' -z 47990 ']' 00:07:24.031 05:59:32 -- common/autotest_common.sh@930 -- # kill -0 47990 00:07:24.031 05:59:32 -- common/autotest_common.sh@931 -- # uname 00:07:24.031 05:59:32 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:24.031 05:59:32 -- common/autotest_common.sh@934 -- # tail -1 00:07:24.031 05:59:32 -- common/autotest_common.sh@934 -- # ps -c -o command 47990 00:07:24.031 05:59:32 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:24.031 05:59:32 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:24.031 killing process with pid 47990 00:07:24.031 05:59:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47990' 00:07:24.031 05:59:32 -- common/autotest_common.sh@945 -- # kill 47990 00:07:24.031 [2024-05-13 05:59:32.137974] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.031 [2024-05-13 05:59:32.138021] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:24.031 [2024-05-13 05:59:32.138037] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
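The teardown above closes out the superblock negative test: after pt1/pt2 were rebuilt from malloc1/malloc2, a second bdev_raid_create over the same malloc bdevs failed with JSON-RPC error -17 ("File exists") because both bases still carried the raid superblock written by the first assembly. A minimal manual reproduction of that check — assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock, that malloc1/malloc2 hold a stale superblock, and with the rpc shorthand being an invention of this sketch — might look like:

# Try to assemble a fresh raid0 over bdevs that hold a stale raid superblock.
# Per the trace above this should fail with JSON-RPC error -17, "File exists".
rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
if $rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1; then
    echo "unexpected success: stale superblock was not detected" >&2
fi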
00:07:24.031 [2024-05-13 05:59:32.138042] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b4f1180 name raid_bdev1, state offline 00:07:24.031 05:59:32 -- common/autotest_common.sh@950 -- # wait 47990 00:07:24.031 [2024-05-13 05:59:32.156486] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@513 -- # return 0 00:07:24.304 00:07:24.304 real 0m5.472s 00:07:24.304 user 0m9.075s 00:07:24.304 sys 0m1.090s 00:07:24.304 05:59:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.304 05:59:32 -- common/autotest_common.sh@10 -- # set +x 00:07:24.304 ************************************ 00:07:24.304 END TEST raid_superblock_test 00:07:24.304 ************************************ 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:24.304 05:59:32 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:24.304 05:59:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.304 05:59:32 -- common/autotest_common.sh@10 -- # set +x 00:07:24.304 ************************************ 00:07:24.304 START TEST raid_state_function_test 00:07:24.304 ************************************ 00:07:24.304 05:59:32 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@226 -- # raid_pid=48135 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48135' 00:07:24.304 Process raid pid: 48135 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:24.304 05:59:32 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48135 
/var/tmp/spdk-raid.sock 00:07:24.304 05:59:32 -- common/autotest_common.sh@819 -- # '[' -z 48135 ']' 00:07:24.304 05:59:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:24.304 05:59:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:24.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:24.304 05:59:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:24.304 05:59:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:24.304 05:59:32 -- common/autotest_common.sh@10 -- # set +x 00:07:24.304 [2024-05-13 05:59:32.449046] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:24.304 [2024-05-13 05:59:32.449378] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:24.874 EAL: TSC is not safe to use in SMP mode 00:07:24.874 EAL: TSC is not invariant 00:07:24.874 [2024-05-13 05:59:32.873499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.874 [2024-05-13 05:59:32.987278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.874 [2024-05-13 05:59:32.987702] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.874 [2024-05-13 05:59:32.987711] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.134 05:59:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:25.134 05:59:33 -- common/autotest_common.sh@852 -- # return 0 00:07:25.134 05:59:33 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:25.396 [2024-05-13 05:59:33.505253] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:25.396 [2024-05-13 05:59:33.505303] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.396 [2024-05-13 05:59:33.505307] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.396 [2024-05-13 05:59:33.505314] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:25.396 "name": "Existed_Raid", 00:07:25.396 
"uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.396 "strip_size_kb": 64, 00:07:25.396 "state": "configuring", 00:07:25.396 "raid_level": "concat", 00:07:25.396 "superblock": false, 00:07:25.396 "num_base_bdevs": 2, 00:07:25.396 "num_base_bdevs_discovered": 0, 00:07:25.396 "num_base_bdevs_operational": 2, 00:07:25.396 "base_bdevs_list": [ 00:07:25.396 { 00:07:25.396 "name": "BaseBdev1", 00:07:25.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.396 "is_configured": false, 00:07:25.396 "data_offset": 0, 00:07:25.396 "data_size": 0 00:07:25.396 }, 00:07:25.396 { 00:07:25.396 "name": "BaseBdev2", 00:07:25.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.396 "is_configured": false, 00:07:25.396 "data_offset": 0, 00:07:25.396 "data_size": 0 00:07:25.396 } 00:07:25.396 ] 00:07:25.396 }' 00:07:25.396 05:59:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:25.396 05:59:33 -- common/autotest_common.sh@10 -- # set +x 00:07:25.964 05:59:33 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:25.964 [2024-05-13 05:59:34.121261] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:25.964 [2024-05-13 05:59:34.121288] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c827500 name Existed_Raid, state configuring 00:07:25.964 05:59:34 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:26.224 [2024-05-13 05:59:34.305287] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:26.224 [2024-05-13 05:59:34.305344] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.224 [2024-05-13 05:59:34.305348] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.224 [2024-05-13 05:59:34.305355] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.224 05:59:34 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:26.224 [2024-05-13 05:59:34.490377] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.224 BaseBdev1 00:07:26.224 05:59:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:26.224 05:59:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:26.224 05:59:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:26.224 05:59:34 -- common/autotest_common.sh@889 -- # local i 00:07:26.224 05:59:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:26.224 05:59:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:26.224 05:59:34 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:26.483 05:59:34 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:26.743 [ 00:07:26.743 { 00:07:26.743 "name": "BaseBdev1", 00:07:26.743 "aliases": [ 00:07:26.743 "f9230fc5-10ed-11ef-ba60-3508ead7bdda" 00:07:26.743 ], 00:07:26.743 "product_name": "Malloc disk", 00:07:26.743 "block_size": 512, 00:07:26.743 "num_blocks": 65536, 00:07:26.743 "uuid": "f9230fc5-10ed-11ef-ba60-3508ead7bdda", 00:07:26.743 
"assigned_rate_limits": { 00:07:26.743 "rw_ios_per_sec": 0, 00:07:26.743 "rw_mbytes_per_sec": 0, 00:07:26.743 "r_mbytes_per_sec": 0, 00:07:26.743 "w_mbytes_per_sec": 0 00:07:26.743 }, 00:07:26.743 "claimed": true, 00:07:26.743 "claim_type": "exclusive_write", 00:07:26.743 "zoned": false, 00:07:26.743 "supported_io_types": { 00:07:26.743 "read": true, 00:07:26.743 "write": true, 00:07:26.743 "unmap": true, 00:07:26.743 "write_zeroes": true, 00:07:26.743 "flush": true, 00:07:26.743 "reset": true, 00:07:26.743 "compare": false, 00:07:26.743 "compare_and_write": false, 00:07:26.743 "abort": true, 00:07:26.743 "nvme_admin": false, 00:07:26.743 "nvme_io": false 00:07:26.743 }, 00:07:26.743 "memory_domains": [ 00:07:26.743 { 00:07:26.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.743 "dma_device_type": 2 00:07:26.743 } 00:07:26.743 ], 00:07:26.743 "driver_specific": {} 00:07:26.743 } 00:07:26.743 ] 00:07:26.743 05:59:34 -- common/autotest_common.sh@895 -- # return 0 00:07:26.743 05:59:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:26.743 05:59:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:26.743 05:59:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:26.743 05:59:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:26.743 05:59:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:26.743 05:59:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:26.743 05:59:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:26.743 05:59:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:26.743 05:59:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:26.743 05:59:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:26.743 05:59:34 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:26.743 05:59:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.743 05:59:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:26.743 "name": "Existed_Raid", 00:07:26.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.743 "strip_size_kb": 64, 00:07:26.743 "state": "configuring", 00:07:26.743 "raid_level": "concat", 00:07:26.743 "superblock": false, 00:07:26.743 "num_base_bdevs": 2, 00:07:26.743 "num_base_bdevs_discovered": 1, 00:07:26.743 "num_base_bdevs_operational": 2, 00:07:26.743 "base_bdevs_list": [ 00:07:26.743 { 00:07:26.743 "name": "BaseBdev1", 00:07:26.743 "uuid": "f9230fc5-10ed-11ef-ba60-3508ead7bdda", 00:07:26.743 "is_configured": true, 00:07:26.743 "data_offset": 0, 00:07:26.743 "data_size": 65536 00:07:26.743 }, 00:07:26.743 { 00:07:26.743 "name": "BaseBdev2", 00:07:26.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.743 "is_configured": false, 00:07:26.743 "data_offset": 0, 00:07:26.743 "data_size": 0 00:07:26.743 } 00:07:26.743 ] 00:07:26.743 }' 00:07:26.743 05:59:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:26.743 05:59:35 -- common/autotest_common.sh@10 -- # set +x 00:07:27.313 05:59:35 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:27.313 [2024-05-13 05:59:35.465301] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.313 [2024-05-13 05:59:35.465337] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c827500 name Existed_Raid, state configuring 
00:07:27.313 05:59:35 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:07:27.313 05:59:35 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:27.572 [2024-05-13 05:59:35.649292] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:27.572 [2024-05-13 05:59:35.650225] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.572 [2024-05-13 05:59:35.650267] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:27.572 "name": "Existed_Raid", 00:07:27.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.572 "strip_size_kb": 64, 00:07:27.572 "state": "configuring", 00:07:27.572 "raid_level": "concat", 00:07:27.572 "superblock": false, 00:07:27.572 "num_base_bdevs": 2, 00:07:27.572 "num_base_bdevs_discovered": 1, 00:07:27.572 "num_base_bdevs_operational": 2, 00:07:27.572 "base_bdevs_list": [ 00:07:27.572 { 00:07:27.572 "name": "BaseBdev1", 00:07:27.572 "uuid": "f9230fc5-10ed-11ef-ba60-3508ead7bdda", 00:07:27.572 "is_configured": true, 00:07:27.572 "data_offset": 0, 00:07:27.572 "data_size": 65536 00:07:27.572 }, 00:07:27.572 { 00:07:27.572 "name": "BaseBdev2", 00:07:27.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.572 "is_configured": false, 00:07:27.572 "data_offset": 0, 00:07:27.572 "data_size": 0 00:07:27.572 } 00:07:27.572 ] 00:07:27.572 }' 00:07:27.572 05:59:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:27.572 05:59:35 -- common/autotest_common.sh@10 -- # set +x 00:07:27.832 05:59:36 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:28.091 [2024-05-13 05:59:36.293423] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.091 [2024-05-13 05:59:36.293443] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c827a00 00:07:28.091 [2024-05-13 05:59:36.293447] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:28.091 [2024-05-13 05:59:36.293465] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c88aec0 00:07:28.091 [2024-05-13 05:59:36.293557] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c827a00 00:07:28.091 [2024-05-13 05:59:36.293560] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c827a00 00:07:28.091 [2024-05-13 05:59:36.293590] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.091 BaseBdev2 00:07:28.091 05:59:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:28.091 05:59:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:28.091 05:59:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:28.091 05:59:36 -- common/autotest_common.sh@889 -- # local i 00:07:28.091 05:59:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:28.091 05:59:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:28.091 05:59:36 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:28.351 05:59:36 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:28.610 [ 00:07:28.610 { 00:07:28.610 "name": "BaseBdev2", 00:07:28.610 "aliases": [ 00:07:28.611 "fa3654ac-10ed-11ef-ba60-3508ead7bdda" 00:07:28.611 ], 00:07:28.611 "product_name": "Malloc disk", 00:07:28.611 "block_size": 512, 00:07:28.611 "num_blocks": 65536, 00:07:28.611 "uuid": "fa3654ac-10ed-11ef-ba60-3508ead7bdda", 00:07:28.611 "assigned_rate_limits": { 00:07:28.611 "rw_ios_per_sec": 0, 00:07:28.611 "rw_mbytes_per_sec": 0, 00:07:28.611 "r_mbytes_per_sec": 0, 00:07:28.611 "w_mbytes_per_sec": 0 00:07:28.611 }, 00:07:28.611 "claimed": true, 00:07:28.611 "claim_type": "exclusive_write", 00:07:28.611 "zoned": false, 00:07:28.611 "supported_io_types": { 00:07:28.611 "read": true, 00:07:28.611 "write": true, 00:07:28.611 "unmap": true, 00:07:28.611 "write_zeroes": true, 00:07:28.611 "flush": true, 00:07:28.611 "reset": true, 00:07:28.611 "compare": false, 00:07:28.611 "compare_and_write": false, 00:07:28.611 "abort": true, 00:07:28.611 "nvme_admin": false, 00:07:28.611 "nvme_io": false 00:07:28.611 }, 00:07:28.611 "memory_domains": [ 00:07:28.611 { 00:07:28.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.611 "dma_device_type": 2 00:07:28.611 } 00:07:28.611 ], 00:07:28.611 "driver_specific": {} 00:07:28.611 } 00:07:28.611 ] 00:07:28.611 05:59:36 -- common/autotest_common.sh@895 -- # return 0 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:28.611 
05:59:36 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:28.611 "name": "Existed_Raid", 00:07:28.611 "uuid": "fa365afb-10ed-11ef-ba60-3508ead7bdda", 00:07:28.611 "strip_size_kb": 64, 00:07:28.611 "state": "online", 00:07:28.611 "raid_level": "concat", 00:07:28.611 "superblock": false, 00:07:28.611 "num_base_bdevs": 2, 00:07:28.611 "num_base_bdevs_discovered": 2, 00:07:28.611 "num_base_bdevs_operational": 2, 00:07:28.611 "base_bdevs_list": [ 00:07:28.611 { 00:07:28.611 "name": "BaseBdev1", 00:07:28.611 "uuid": "f9230fc5-10ed-11ef-ba60-3508ead7bdda", 00:07:28.611 "is_configured": true, 00:07:28.611 "data_offset": 0, 00:07:28.611 "data_size": 65536 00:07:28.611 }, 00:07:28.611 { 00:07:28.611 "name": "BaseBdev2", 00:07:28.611 "uuid": "fa3654ac-10ed-11ef-ba60-3508ead7bdda", 00:07:28.611 "is_configured": true, 00:07:28.611 "data_offset": 0, 00:07:28.611 "data_size": 65536 00:07:28.611 } 00:07:28.611 ] 00:07:28.611 }' 00:07:28.611 05:59:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:28.611 05:59:36 -- common/autotest_common.sh@10 -- # set +x 00:07:29.180 05:59:37 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:29.180 [2024-05-13 05:59:37.357354] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:29.180 [2024-05-13 05:59:37.357393] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.180 [2024-05-13 05:59:37.357415] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.180 05:59:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:29.180 05:59:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:07:29.180 05:59:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:29.181 05:59:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.440 05:59:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:29.440 "name": "Existed_Raid", 00:07:29.440 "uuid": "fa365afb-10ed-11ef-ba60-3508ead7bdda", 00:07:29.440 "strip_size_kb": 64, 00:07:29.440 "state": "offline", 00:07:29.440 "raid_level": "concat", 00:07:29.440 "superblock": false, 00:07:29.440 
"num_base_bdevs": 2, 00:07:29.440 "num_base_bdevs_discovered": 1, 00:07:29.441 "num_base_bdevs_operational": 1, 00:07:29.441 "base_bdevs_list": [ 00:07:29.441 { 00:07:29.441 "name": null, 00:07:29.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.441 "is_configured": false, 00:07:29.441 "data_offset": 0, 00:07:29.441 "data_size": 65536 00:07:29.441 }, 00:07:29.441 { 00:07:29.441 "name": "BaseBdev2", 00:07:29.441 "uuid": "fa3654ac-10ed-11ef-ba60-3508ead7bdda", 00:07:29.441 "is_configured": true, 00:07:29.441 "data_offset": 0, 00:07:29.441 "data_size": 65536 00:07:29.441 } 00:07:29.441 ] 00:07:29.441 }' 00:07:29.441 05:59:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:29.441 05:59:37 -- common/autotest_common.sh@10 -- # set +x 00:07:29.700 05:59:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:29.700 05:59:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:29.700 05:59:37 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:29.700 05:59:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:29.960 05:59:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:29.960 05:59:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:29.960 05:59:38 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:29.960 [2024-05-13 05:59:38.186938] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:29.960 [2024-05-13 05:59:38.186972] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c827a00 name Existed_Raid, state offline 00:07:29.960 05:59:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:29.960 05:59:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:29.960 05:59:38 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:29.960 05:59:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:30.219 05:59:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:30.219 05:59:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:30.219 05:59:38 -- bdev/bdev_raid.sh@287 -- # killprocess 48135 00:07:30.220 05:59:38 -- common/autotest_common.sh@926 -- # '[' -z 48135 ']' 00:07:30.220 05:59:38 -- common/autotest_common.sh@930 -- # kill -0 48135 00:07:30.220 05:59:38 -- common/autotest_common.sh@931 -- # uname 00:07:30.220 05:59:38 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:30.220 05:59:38 -- common/autotest_common.sh@934 -- # ps -c -o command 48135 00:07:30.220 05:59:38 -- common/autotest_common.sh@934 -- # tail -1 00:07:30.220 05:59:38 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:30.220 05:59:38 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:30.220 killing process with pid 48135 00:07:30.220 05:59:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48135' 00:07:30.220 05:59:38 -- common/autotest_common.sh@945 -- # kill 48135 00:07:30.220 [2024-05-13 05:59:38.397014] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.220 [2024-05-13 05:59:38.397071] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.220 05:59:38 -- common/autotest_common.sh@950 -- # wait 48135 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:30.479 00:07:30.479 real 0m6.189s 00:07:30.479 user 0m10.361s 00:07:30.479 sys 0m1.260s 00:07:30.479 
05:59:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.479 05:59:38 -- common/autotest_common.sh@10 -- # set +x 00:07:30.479 ************************************ 00:07:30.479 END TEST raid_state_function_test 00:07:30.479 ************************************ 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:30.479 05:59:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:30.479 05:59:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:30.479 05:59:38 -- common/autotest_common.sh@10 -- # set +x 00:07:30.479 ************************************ 00:07:30.479 START TEST raid_state_function_test_sb 00:07:30.479 ************************************ 00:07:30.479 05:59:38 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=48331 00:07:30.479 Process raid pid: 48331 00:07:30.479 05:59:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48331' 00:07:30.480 05:59:38 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:30.480 05:59:38 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48331 /var/tmp/spdk-raid.sock 00:07:30.480 05:59:38 -- common/autotest_common.sh@819 -- # '[' -z 48331 ']' 00:07:30.480 05:59:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:30.480 05:59:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:30.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:30.480 05:59:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:07:30.480 05:59:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:30.480 05:59:38 -- common/autotest_common.sh@10 -- # set +x 00:07:30.480 [2024-05-13 05:59:38.695106] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:30.480 [2024-05-13 05:59:38.695444] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:31.049 EAL: TSC is not safe to use in SMP mode 00:07:31.049 EAL: TSC is not invariant 00:07:31.049 [2024-05-13 05:59:39.120670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.049 [2024-05-13 05:59:39.235787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.049 [2024-05-13 05:59:39.236217] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.049 [2024-05-13 05:59:39.236226] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.310 05:59:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:31.310 05:59:39 -- common/autotest_common.sh@852 -- # return 0 00:07:31.310 05:59:39 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:31.569 [2024-05-13 05:59:39.725894] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:31.569 [2024-05-13 05:59:39.725948] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.569 [2024-05-13 05:59:39.725952] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.570 [2024-05-13 05:59:39.725958] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:31.570 05:59:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:31.570 05:59:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:31.570 05:59:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:31.570 05:59:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:31.570 05:59:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:31.570 05:59:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:31.570 05:59:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:31.570 05:59:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:31.570 05:59:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:31.570 05:59:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:31.570 05:59:39 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:31.570 05:59:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.829 05:59:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:31.829 "name": "Existed_Raid", 00:07:31.829 "uuid": "fc421a6d-10ed-11ef-ba60-3508ead7bdda", 00:07:31.829 "strip_size_kb": 64, 00:07:31.829 "state": "configuring", 00:07:31.829 "raid_level": "concat", 00:07:31.829 "superblock": true, 00:07:31.829 "num_base_bdevs": 2, 00:07:31.829 "num_base_bdevs_discovered": 0, 00:07:31.829 "num_base_bdevs_operational": 2, 00:07:31.829 "base_bdevs_list": [ 00:07:31.829 { 00:07:31.829 "name": "BaseBdev1", 00:07:31.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.829 "is_configured": false, 00:07:31.829 "data_offset": 0, 00:07:31.829 
"data_size": 0 00:07:31.829 }, 00:07:31.829 { 00:07:31.829 "name": "BaseBdev2", 00:07:31.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.829 "is_configured": false, 00:07:31.829 "data_offset": 0, 00:07:31.829 "data_size": 0 00:07:31.829 } 00:07:31.829 ] 00:07:31.829 }' 00:07:31.829 05:59:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:31.829 05:59:39 -- common/autotest_common.sh@10 -- # set +x 00:07:32.088 05:59:40 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:32.347 [2024-05-13 05:59:40.409877] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:32.347 [2024-05-13 05:59:40.409905] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ec4d500 name Existed_Raid, state configuring 00:07:32.347 05:59:40 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:32.347 [2024-05-13 05:59:40.585914] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:32.347 [2024-05-13 05:59:40.585984] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:32.347 [2024-05-13 05:59:40.585988] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:32.347 [2024-05-13 05:59:40.585994] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:32.347 05:59:40 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:32.607 [2024-05-13 05:59:40.755035] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:32.607 BaseBdev1 00:07:32.607 05:59:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:32.607 05:59:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:32.607 05:59:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:32.607 05:59:40 -- common/autotest_common.sh@889 -- # local i 00:07:32.607 05:59:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:32.607 05:59:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:32.607 05:59:40 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:32.867 05:59:40 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:32.867 [ 00:07:32.867 { 00:07:32.867 "name": "BaseBdev1", 00:07:32.867 "aliases": [ 00:07:32.867 "fcdef728-10ed-11ef-ba60-3508ead7bdda" 00:07:32.867 ], 00:07:32.867 "product_name": "Malloc disk", 00:07:32.867 "block_size": 512, 00:07:32.867 "num_blocks": 65536, 00:07:32.867 "uuid": "fcdef728-10ed-11ef-ba60-3508ead7bdda", 00:07:32.867 "assigned_rate_limits": { 00:07:32.867 "rw_ios_per_sec": 0, 00:07:32.867 "rw_mbytes_per_sec": 0, 00:07:32.867 "r_mbytes_per_sec": 0, 00:07:32.867 "w_mbytes_per_sec": 0 00:07:32.867 }, 00:07:32.867 "claimed": true, 00:07:32.867 "claim_type": "exclusive_write", 00:07:32.867 "zoned": false, 00:07:32.867 "supported_io_types": { 00:07:32.867 "read": true, 00:07:32.867 "write": true, 00:07:32.867 "unmap": true, 00:07:32.867 "write_zeroes": true, 00:07:32.867 "flush": true, 00:07:32.867 "reset": true, 00:07:32.867 "compare": false, 
00:07:32.867 "compare_and_write": false, 00:07:32.867 "abort": true, 00:07:32.867 "nvme_admin": false, 00:07:32.867 "nvme_io": false 00:07:32.867 }, 00:07:32.867 "memory_domains": [ 00:07:32.867 { 00:07:32.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.867 "dma_device_type": 2 00:07:32.867 } 00:07:32.867 ], 00:07:32.867 "driver_specific": {} 00:07:32.867 } 00:07:32.867 ] 00:07:32.867 05:59:41 -- common/autotest_common.sh@895 -- # return 0 00:07:32.867 05:59:41 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:32.867 05:59:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:32.867 05:59:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:32.867 05:59:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:32.867 05:59:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:32.867 05:59:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:32.867 05:59:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:32.867 05:59:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:32.867 05:59:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:32.867 05:59:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:32.867 05:59:41 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:32.867 05:59:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.126 05:59:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:33.126 "name": "Existed_Raid", 00:07:33.126 "uuid": "fcc554f0-10ed-11ef-ba60-3508ead7bdda", 00:07:33.126 "strip_size_kb": 64, 00:07:33.126 "state": "configuring", 00:07:33.126 "raid_level": "concat", 00:07:33.126 "superblock": true, 00:07:33.126 "num_base_bdevs": 2, 00:07:33.126 "num_base_bdevs_discovered": 1, 00:07:33.126 "num_base_bdevs_operational": 2, 00:07:33.126 "base_bdevs_list": [ 00:07:33.126 { 00:07:33.126 "name": "BaseBdev1", 00:07:33.126 "uuid": "fcdef728-10ed-11ef-ba60-3508ead7bdda", 00:07:33.126 "is_configured": true, 00:07:33.126 "data_offset": 2048, 00:07:33.126 "data_size": 63488 00:07:33.126 }, 00:07:33.126 { 00:07:33.126 "name": "BaseBdev2", 00:07:33.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.126 "is_configured": false, 00:07:33.126 "data_offset": 0, 00:07:33.126 "data_size": 0 00:07:33.126 } 00:07:33.126 ] 00:07:33.126 }' 00:07:33.126 05:59:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:33.126 05:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:33.401 05:59:41 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:33.665 [2024-05-13 05:59:41.737951] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:33.666 [2024-05-13 05:59:41.737993] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ec4d500 name Existed_Raid, state configuring 00:07:33.666 05:59:41 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:07:33.666 05:59:41 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:33.666 05:59:41 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:33.924 BaseBdev1 00:07:33.924 05:59:42 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:07:33.924 05:59:42 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:33.924 05:59:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:33.924 05:59:42 -- common/autotest_common.sh@889 -- # local i 00:07:33.924 05:59:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:33.924 05:59:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:33.925 05:59:42 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:34.184 05:59:42 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:34.184 [ 00:07:34.184 { 00:07:34.184 "name": "BaseBdev1", 00:07:34.184 "aliases": [ 00:07:34.184 "fdad6bf3-10ed-11ef-ba60-3508ead7bdda" 00:07:34.184 ], 00:07:34.184 "product_name": "Malloc disk", 00:07:34.184 "block_size": 512, 00:07:34.184 "num_blocks": 65536, 00:07:34.184 "uuid": "fdad6bf3-10ed-11ef-ba60-3508ead7bdda", 00:07:34.184 "assigned_rate_limits": { 00:07:34.184 "rw_ios_per_sec": 0, 00:07:34.184 "rw_mbytes_per_sec": 0, 00:07:34.184 "r_mbytes_per_sec": 0, 00:07:34.184 "w_mbytes_per_sec": 0 00:07:34.184 }, 00:07:34.184 "claimed": false, 00:07:34.184 "zoned": false, 00:07:34.184 "supported_io_types": { 00:07:34.184 "read": true, 00:07:34.184 "write": true, 00:07:34.184 "unmap": true, 00:07:34.184 "write_zeroes": true, 00:07:34.184 "flush": true, 00:07:34.184 "reset": true, 00:07:34.184 "compare": false, 00:07:34.184 "compare_and_write": false, 00:07:34.184 "abort": true, 00:07:34.184 "nvme_admin": false, 00:07:34.184 "nvme_io": false 00:07:34.184 }, 00:07:34.184 "memory_domains": [ 00:07:34.184 { 00:07:34.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.184 "dma_device_type": 2 00:07:34.184 } 00:07:34.184 ], 00:07:34.184 "driver_specific": {} 00:07:34.184 } 00:07:34.184 ] 00:07:34.184 05:59:42 -- common/autotest_common.sh@895 -- # return 0 00:07:34.184 05:59:42 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:34.443 [2024-05-13 05:59:42.626989] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:34.443 [2024-05-13 05:59:42.627724] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:34.443 [2024-05-13 05:59:42.627771] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@127 
-- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:34.443 05:59:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.702 05:59:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:34.702 "name": "Existed_Raid", 00:07:34.702 "uuid": "fdfcc63c-10ed-11ef-ba60-3508ead7bdda", 00:07:34.702 "strip_size_kb": 64, 00:07:34.702 "state": "configuring", 00:07:34.702 "raid_level": "concat", 00:07:34.702 "superblock": true, 00:07:34.702 "num_base_bdevs": 2, 00:07:34.702 "num_base_bdevs_discovered": 1, 00:07:34.702 "num_base_bdevs_operational": 2, 00:07:34.702 "base_bdevs_list": [ 00:07:34.702 { 00:07:34.702 "name": "BaseBdev1", 00:07:34.702 "uuid": "fdad6bf3-10ed-11ef-ba60-3508ead7bdda", 00:07:34.702 "is_configured": true, 00:07:34.702 "data_offset": 2048, 00:07:34.702 "data_size": 63488 00:07:34.702 }, 00:07:34.702 { 00:07:34.702 "name": "BaseBdev2", 00:07:34.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.702 "is_configured": false, 00:07:34.702 "data_offset": 0, 00:07:34.702 "data_size": 0 00:07:34.702 } 00:07:34.702 ] 00:07:34.702 }' 00:07:34.702 05:59:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:34.702 05:59:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.961 05:59:43 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:35.220 [2024-05-13 05:59:43.267108] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.220 [2024-05-13 05:59:43.267175] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82ec4da00 00:07:35.220 [2024-05-13 05:59:43.267179] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.220 [2024-05-13 05:59:43.267195] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ecb0ec0 00:07:35.220 [2024-05-13 05:59:43.267226] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82ec4da00 00:07:35.220 [2024-05-13 05:59:43.267229] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82ec4da00 00:07:35.220 [2024-05-13 05:59:43.267243] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.220 BaseBdev2 00:07:35.220 05:59:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:35.220 05:59:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:35.220 05:59:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:35.220 05:59:43 -- common/autotest_common.sh@889 -- # local i 00:07:35.220 05:59:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:35.220 05:59:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:35.220 05:59:43 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:35.220 05:59:43 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:35.479 [ 00:07:35.479 { 00:07:35.479 "name": "BaseBdev2", 00:07:35.479 "aliases": [ 00:07:35.479 "fe5e6e00-10ed-11ef-ba60-3508ead7bdda" 00:07:35.479 ], 00:07:35.479 "product_name": "Malloc disk", 00:07:35.479 "block_size": 512, 00:07:35.479 "num_blocks": 65536, 00:07:35.479 "uuid": "fe5e6e00-10ed-11ef-ba60-3508ead7bdda", 00:07:35.479 "assigned_rate_limits": { 00:07:35.479 "rw_ios_per_sec": 0, 
00:07:35.479 "rw_mbytes_per_sec": 0, 00:07:35.479 "r_mbytes_per_sec": 0, 00:07:35.479 "w_mbytes_per_sec": 0 00:07:35.479 }, 00:07:35.479 "claimed": true, 00:07:35.479 "claim_type": "exclusive_write", 00:07:35.479 "zoned": false, 00:07:35.479 "supported_io_types": { 00:07:35.479 "read": true, 00:07:35.479 "write": true, 00:07:35.479 "unmap": true, 00:07:35.479 "write_zeroes": true, 00:07:35.479 "flush": true, 00:07:35.479 "reset": true, 00:07:35.479 "compare": false, 00:07:35.479 "compare_and_write": false, 00:07:35.479 "abort": true, 00:07:35.479 "nvme_admin": false, 00:07:35.479 "nvme_io": false 00:07:35.479 }, 00:07:35.479 "memory_domains": [ 00:07:35.479 { 00:07:35.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.479 "dma_device_type": 2 00:07:35.479 } 00:07:35.479 ], 00:07:35.479 "driver_specific": {} 00:07:35.479 } 00:07:35.479 ] 00:07:35.479 05:59:43 -- common/autotest_common.sh@895 -- # return 0 00:07:35.479 05:59:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:35.479 05:59:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:35.480 05:59:43 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:35.480 05:59:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:35.480 05:59:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:35.480 05:59:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:35.480 05:59:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:35.480 05:59:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:35.480 05:59:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:35.480 05:59:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:35.480 05:59:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:35.480 05:59:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:35.480 05:59:43 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:35.480 05:59:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.738 05:59:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:35.738 "name": "Existed_Raid", 00:07:35.738 "uuid": "fdfcc63c-10ed-11ef-ba60-3508ead7bdda", 00:07:35.738 "strip_size_kb": 64, 00:07:35.738 "state": "online", 00:07:35.738 "raid_level": "concat", 00:07:35.738 "superblock": true, 00:07:35.738 "num_base_bdevs": 2, 00:07:35.738 "num_base_bdevs_discovered": 2, 00:07:35.738 "num_base_bdevs_operational": 2, 00:07:35.738 "base_bdevs_list": [ 00:07:35.738 { 00:07:35.738 "name": "BaseBdev1", 00:07:35.738 "uuid": "fdad6bf3-10ed-11ef-ba60-3508ead7bdda", 00:07:35.738 "is_configured": true, 00:07:35.738 "data_offset": 2048, 00:07:35.738 "data_size": 63488 00:07:35.738 }, 00:07:35.738 { 00:07:35.738 "name": "BaseBdev2", 00:07:35.738 "uuid": "fe5e6e00-10ed-11ef-ba60-3508ead7bdda", 00:07:35.738 "is_configured": true, 00:07:35.738 "data_offset": 2048, 00:07:35.738 "data_size": 63488 00:07:35.738 } 00:07:35.738 ] 00:07:35.738 }' 00:07:35.738 05:59:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:35.738 05:59:43 -- common/autotest_common.sh@10 -- # set +x 00:07:35.997 05:59:44 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:36.257 [2024-05-13 05:59:44.306960] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:36.257 [2024-05-13 05:59:44.306983] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:07:36.257 [2024-05-13 05:59:44.306994] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.257 05:59:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:36.257 "name": "Existed_Raid", 00:07:36.257 "uuid": "fdfcc63c-10ed-11ef-ba60-3508ead7bdda", 00:07:36.258 "strip_size_kb": 64, 00:07:36.258 "state": "offline", 00:07:36.258 "raid_level": "concat", 00:07:36.258 "superblock": true, 00:07:36.258 "num_base_bdevs": 2, 00:07:36.258 "num_base_bdevs_discovered": 1, 00:07:36.258 "num_base_bdevs_operational": 1, 00:07:36.258 "base_bdevs_list": [ 00:07:36.258 { 00:07:36.258 "name": null, 00:07:36.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.258 "is_configured": false, 00:07:36.258 "data_offset": 2048, 00:07:36.258 "data_size": 63488 00:07:36.258 }, 00:07:36.258 { 00:07:36.258 "name": "BaseBdev2", 00:07:36.258 "uuid": "fe5e6e00-10ed-11ef-ba60-3508ead7bdda", 00:07:36.258 "is_configured": true, 00:07:36.258 "data_offset": 2048, 00:07:36.258 "data_size": 63488 00:07:36.258 } 00:07:36.258 ] 00:07:36.258 }' 00:07:36.258 05:59:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:36.258 05:59:44 -- common/autotest_common.sh@10 -- # set +x 00:07:36.518 05:59:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:36.518 05:59:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:36.518 05:59:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:36.518 05:59:44 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:36.778 05:59:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:36.778 05:59:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:36.778 05:59:44 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:37.038 [2024-05-13 05:59:45.188208] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:37.038 [2024-05-13 05:59:45.188254] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82ec4da00 name 
Existed_Raid, state offline 00:07:37.038 05:59:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:37.038 05:59:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:37.038 05:59:45 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:37.038 05:59:45 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:37.298 05:59:45 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:37.298 05:59:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:37.298 05:59:45 -- bdev/bdev_raid.sh@287 -- # killprocess 48331 00:07:37.298 05:59:45 -- common/autotest_common.sh@926 -- # '[' -z 48331 ']' 00:07:37.298 05:59:45 -- common/autotest_common.sh@930 -- # kill -0 48331 00:07:37.298 05:59:45 -- common/autotest_common.sh@931 -- # uname 00:07:37.298 05:59:45 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:37.298 05:59:45 -- common/autotest_common.sh@934 -- # ps -c -o command 48331 00:07:37.298 05:59:45 -- common/autotest_common.sh@934 -- # tail -1 00:07:37.298 05:59:45 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:37.298 05:59:45 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:37.298 killing process with pid 48331 00:07:37.298 05:59:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48331' 00:07:37.298 05:59:45 -- common/autotest_common.sh@945 -- # kill 48331 00:07:37.299 [2024-05-13 05:59:45.410552] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.299 [2024-05-13 05:59:45.410607] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:37.299 05:59:45 -- common/autotest_common.sh@950 -- # wait 48331 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:37.559 00:07:37.559 real 0m6.951s 00:07:37.559 user 0m11.849s 00:07:37.559 sys 0m1.247s 00:07:37.559 05:59:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.559 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:37.559 ************************************ 00:07:37.559 END TEST raid_state_function_test_sb 00:07:37.559 ************************************ 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:37.559 05:59:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:37.559 05:59:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.559 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:37.559 ************************************ 00:07:37.559 START TEST raid_superblock_test 00:07:37.559 ************************************ 00:07:37.559 05:59:45 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 
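For orientation, the raid_superblock_test body that the trace below steps through reduces to a short RPC sequence against the bdev_svc app. This is a minimal sketch assuming the app is already listening on /var/tmp/spdk-raid.sock; the bdev names, sizes, and UUIDs mirror the trace, while the RPC shorthand variable is introduced here for readability:

  RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Two 32 MiB malloc disks with 512-byte blocks back the test (65536 blocks each).
  $RPC bdev_malloc_create 32 512 -b malloc1
  $RPC bdev_malloc_create 32 512 -b malloc2
  # Passthru bdevs with fixed UUIDs sit between the malloc disks and the raid.
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # Assemble a concat raid with a 64 KiB strip; -s writes the on-disk superblock.
  $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s

The -s flag is what distinguishes this test from the non-superblock variants: the base bdevs carry on-disk raid metadata, which is also why the later bdev_raid_create attempt directly over malloc1/malloc2 fails with JSON-RPC error -17, "File exists".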
00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@357 -- # raid_pid=48530 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:37.559 05:59:45 -- bdev/bdev_raid.sh@358 -- # waitforlisten 48530 /var/tmp/spdk-raid.sock 00:07:37.559 05:59:45 -- common/autotest_common.sh@819 -- # '[' -z 48530 ']' 00:07:37.559 05:59:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:37.559 05:59:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:37.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:37.559 05:59:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:37.559 05:59:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:37.559 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:37.559 [2024-05-13 05:59:45.695528] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:37.559 [2024-05-13 05:59:45.695890] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:38.128 EAL: TSC is not safe to use in SMP mode 00:07:38.128 EAL: TSC is not invariant 00:07:38.128 [2024-05-13 05:59:46.128479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.128 [2024-05-13 05:59:46.242383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.128 [2024-05-13 05:59:46.242765] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.128 [2024-05-13 05:59:46.242774] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.387 05:59:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:38.387 05:59:46 -- common/autotest_common.sh@852 -- # return 0 00:07:38.387 05:59:46 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:07:38.387 05:59:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:38.387 05:59:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:07:38.387 05:59:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:07:38.387 05:59:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:38.387 05:59:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:38.387 05:59:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:38.387 05:59:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:38.387 05:59:46 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:38.647 malloc1 00:07:38.647 05:59:46 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:38.907 [2024-05-13 05:59:46.956295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:38.907 [2024-05-13 05:59:46.956363] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.907 [2024-05-13 05:59:46.956931] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c69b780 00:07:38.907 [2024-05-13 05:59:46.956957] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.907 [2024-05-13 05:59:46.957987] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.907 [2024-05-13 05:59:46.958015] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:38.907 pt1 00:07:38.907 05:59:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:38.907 05:59:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:38.907 05:59:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:07:38.907 05:59:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:07:38.907 05:59:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:38.907 05:59:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:38.907 05:59:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:38.907 05:59:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:38.907 05:59:46 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:38.907 malloc2 00:07:38.907 05:59:47 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:39.166 [2024-05-13 05:59:47.312273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:39.166 [2024-05-13 05:59:47.312316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.166 [2024-05-13 05:59:47.312345] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c69bc80 00:07:39.166 [2024-05-13 05:59:47.312352] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.166 [2024-05-13 05:59:47.312751] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.166 [2024-05-13 05:59:47.312775] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:39.166 pt2 00:07:39.166 05:59:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:39.166 05:59:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:39.166 05:59:47 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:07:39.426 [2024-05-13 05:59:47.496298] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:39.426 [2024-05-13 05:59:47.496969] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:39.426 [2024-05-13 05:59:47.497039] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c69bf00 00:07:39.426 [2024-05-13 05:59:47.497045] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.426 [2024-05-13 05:59:47.497078] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c6fee20 00:07:39.426 [2024-05-13 05:59:47.497155] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c69bf00 00:07:39.426 [2024-05-13 05:59:47.497161] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x82c69bf00 00:07:39.426 [2024-05-13 05:59:47.497180] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:39.426 "name": "raid_bdev1", 00:07:39.426 "uuid": "00e3c5dc-10ee-11ef-ba60-3508ead7bdda", 00:07:39.426 "strip_size_kb": 64, 00:07:39.426 "state": "online", 00:07:39.426 "raid_level": "concat", 00:07:39.426 "superblock": true, 00:07:39.426 "num_base_bdevs": 2, 00:07:39.426 "num_base_bdevs_discovered": 2, 00:07:39.426 "num_base_bdevs_operational": 2, 00:07:39.426 "base_bdevs_list": [ 00:07:39.426 { 00:07:39.426 "name": "pt1", 00:07:39.426 "uuid": "09055e66-fb2f-be53-af14-31764e34c5ae", 00:07:39.426 "is_configured": true, 00:07:39.426 "data_offset": 2048, 00:07:39.426 "data_size": 63488 00:07:39.426 }, 00:07:39.426 { 00:07:39.426 "name": "pt2", 00:07:39.426 "uuid": "14dfc3bf-1b5b-2c5b-8e4e-95bf23aaa1be", 00:07:39.426 "is_configured": true, 00:07:39.426 "data_offset": 2048, 00:07:39.426 "data_size": 63488 00:07:39.426 } 00:07:39.426 ] 00:07:39.426 }' 00:07:39.426 05:59:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:39.426 05:59:47 -- common/autotest_common.sh@10 -- # set +x 00:07:39.994 05:59:47 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:39.994 05:59:47 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:07:39.994 [2024-05-13 05:59:48.176346] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.994 05:59:48 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=00e3c5dc-10ee-11ef-ba60-3508ead7bdda 00:07:39.994 05:59:48 -- bdev/bdev_raid.sh@380 -- # '[' -z 00e3c5dc-10ee-11ef-ba60-3508ead7bdda ']' 00:07:39.994 05:59:48 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:40.254 [2024-05-13 05:59:48.360271] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:40.254 [2024-05-13 05:59:48.360283] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:40.254 [2024-05-13 05:59:48.360300] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:40.254 [2024-05-13 05:59:48.360312] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:40.254 [2024-05-13 05:59:48.360316] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x82c69bf00 name raid_bdev1, state offline 00:07:40.254 05:59:48 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:40.254 05:59:48 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:07:40.513 05:59:48 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:07:40.513 05:59:48 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:07:40.513 05:59:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:40.513 05:59:48 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:40.513 05:59:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:40.513 05:59:48 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:40.773 05:59:48 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:40.773 05:59:48 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:41.031 05:59:49 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:07:41.031 05:59:49 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:41.031 05:59:49 -- common/autotest_common.sh@640 -- # local es=0 00:07:41.031 05:59:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:41.031 05:59:49 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:41.031 05:59:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:41.031 05:59:49 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:41.031 05:59:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:41.031 05:59:49 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:41.031 05:59:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:41.031 05:59:49 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:41.031 05:59:49 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:41.032 05:59:49 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:41.032 [2024-05-13 05:59:49.320300] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:41.032 [2024-05-13 05:59:49.321018] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:41.032 [2024-05-13 05:59:49.321042] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:07:41.032 [2024-05-13 05:59:49.321082] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:07:41.032 [2024-05-13 05:59:49.321090] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.032 [2024-05-13 05:59:49.321093] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c69bc80 name raid_bdev1, state 
configuring 00:07:41.290 request: 00:07:41.290 { 00:07:41.290 "name": "raid_bdev1", 00:07:41.290 "raid_level": "concat", 00:07:41.290 "base_bdevs": [ 00:07:41.290 "malloc1", 00:07:41.290 "malloc2" 00:07:41.290 ], 00:07:41.290 "superblock": false, 00:07:41.290 "strip_size_kb": 64, 00:07:41.290 "method": "bdev_raid_create", 00:07:41.290 "req_id": 1 00:07:41.290 } 00:07:41.290 Got JSON-RPC error response 00:07:41.290 response: 00:07:41.290 { 00:07:41.290 "code": -17, 00:07:41.290 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:41.290 } 00:07:41.290 05:59:49 -- common/autotest_common.sh@643 -- # es=1 00:07:41.290 05:59:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:41.290 05:59:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:41.290 05:59:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:41.290 05:59:49 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:41.290 05:59:49 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:07:41.290 05:59:49 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:07:41.290 05:59:49 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:07:41.290 05:59:49 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:41.549 [2024-05-13 05:59:49.708304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:41.549 [2024-05-13 05:59:49.708354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.549 [2024-05-13 05:59:49.708388] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c69b780 00:07:41.549 [2024-05-13 05:59:49.708393] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.549 [2024-05-13 05:59:49.709170] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.549 [2024-05-13 05:59:49.709193] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:41.549 [2024-05-13 05:59:49.709213] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:07:41.549 [2024-05-13 05:59:49.709222] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:41.549 pt1 00:07:41.549 05:59:49 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:41.549 05:59:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:41.549 05:59:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:41.549 05:59:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:41.549 05:59:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:41.549 05:59:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:41.549 05:59:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:41.549 05:59:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:41.549 05:59:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:41.549 05:59:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:41.549 05:59:49 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:41.549 05:59:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.809 05:59:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:41.809 "name": "raid_bdev1", 
00:07:41.809 "uuid": "00e3c5dc-10ee-11ef-ba60-3508ead7bdda", 00:07:41.809 "strip_size_kb": 64, 00:07:41.809 "state": "configuring", 00:07:41.809 "raid_level": "concat", 00:07:41.809 "superblock": true, 00:07:41.809 "num_base_bdevs": 2, 00:07:41.809 "num_base_bdevs_discovered": 1, 00:07:41.809 "num_base_bdevs_operational": 2, 00:07:41.809 "base_bdevs_list": [ 00:07:41.809 { 00:07:41.809 "name": "pt1", 00:07:41.809 "uuid": "09055e66-fb2f-be53-af14-31764e34c5ae", 00:07:41.809 "is_configured": true, 00:07:41.809 "data_offset": 2048, 00:07:41.809 "data_size": 63488 00:07:41.809 }, 00:07:41.809 { 00:07:41.809 "name": null, 00:07:41.809 "uuid": "14dfc3bf-1b5b-2c5b-8e4e-95bf23aaa1be", 00:07:41.809 "is_configured": false, 00:07:41.809 "data_offset": 2048, 00:07:41.809 "data_size": 63488 00:07:41.809 } 00:07:41.809 ] 00:07:41.809 }' 00:07:41.809 05:59:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:41.809 05:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:42.069 05:59:50 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:07:42.069 05:59:50 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:07:42.069 05:59:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:42.069 05:59:50 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:42.329 [2024-05-13 05:59:50.368339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:42.329 [2024-05-13 05:59:50.368399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.329 [2024-05-13 05:59:50.368433] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c69bf00 00:07:42.329 [2024-05-13 05:59:50.368440] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.329 [2024-05-13 05:59:50.368565] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.329 [2024-05-13 05:59:50.368572] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:42.329 [2024-05-13 05:59:50.368594] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:07:42.329 [2024-05-13 05:59:50.368601] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:42.329 [2024-05-13 05:59:50.368625] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c69c180 00:07:42.329 [2024-05-13 05:59:50.368628] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.329 [2024-05-13 05:59:50.368643] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c6fee20 00:07:42.329 [2024-05-13 05:59:50.368688] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c69c180 00:07:42.329 [2024-05-13 05:59:50.368691] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c69c180 00:07:42.329 [2024-05-13 05:59:50.368718] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.329 pt2 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:42.329 05:59:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:42.329 "name": "raid_bdev1", 00:07:42.329 "uuid": "00e3c5dc-10ee-11ef-ba60-3508ead7bdda", 00:07:42.329 "strip_size_kb": 64, 00:07:42.329 "state": "online", 00:07:42.329 "raid_level": "concat", 00:07:42.329 "superblock": true, 00:07:42.329 "num_base_bdevs": 2, 00:07:42.329 "num_base_bdevs_discovered": 2, 00:07:42.329 "num_base_bdevs_operational": 2, 00:07:42.330 "base_bdevs_list": [ 00:07:42.330 { 00:07:42.330 "name": "pt1", 00:07:42.330 "uuid": "09055e66-fb2f-be53-af14-31764e34c5ae", 00:07:42.330 "is_configured": true, 00:07:42.330 "data_offset": 2048, 00:07:42.330 "data_size": 63488 00:07:42.330 }, 00:07:42.330 { 00:07:42.330 "name": "pt2", 00:07:42.330 "uuid": "14dfc3bf-1b5b-2c5b-8e4e-95bf23aaa1be", 00:07:42.330 "is_configured": true, 00:07:42.330 "data_offset": 2048, 00:07:42.330 "data_size": 63488 00:07:42.330 } 00:07:42.330 ] 00:07:42.330 }' 00:07:42.330 05:59:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:42.330 05:59:50 -- common/autotest_common.sh@10 -- # set +x 00:07:42.590 05:59:50 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:42.590 05:59:50 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:07:42.849 [2024-05-13 05:59:51.020347] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.849 05:59:51 -- bdev/bdev_raid.sh@430 -- # '[' 00e3c5dc-10ee-11ef-ba60-3508ead7bdda '!=' 00e3c5dc-10ee-11ef-ba60-3508ead7bdda ']' 00:07:42.849 05:59:51 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:07:42.849 05:59:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:42.849 05:59:51 -- bdev/bdev_raid.sh@197 -- # return 1 00:07:42.849 05:59:51 -- bdev/bdev_raid.sh@511 -- # killprocess 48530 00:07:42.849 05:59:51 -- common/autotest_common.sh@926 -- # '[' -z 48530 ']' 00:07:42.849 05:59:51 -- common/autotest_common.sh@930 -- # kill -0 48530 00:07:42.849 05:59:51 -- common/autotest_common.sh@931 -- # uname 00:07:42.849 05:59:51 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:42.849 05:59:51 -- common/autotest_common.sh@934 -- # ps -c -o command 48530 00:07:42.849 05:59:51 -- common/autotest_common.sh@934 -- # tail -1 00:07:42.849 05:59:51 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:42.849 05:59:51 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:42.849 killing process with pid 48530 00:07:42.849 05:59:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48530' 00:07:42.849 05:59:51 -- common/autotest_common.sh@945 -- # kill 48530 00:07:42.849 [2024-05-13 05:59:51.053729] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.849 
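The killprocess teardown that just ran (here for pid 48530, earlier for pid 48331) can be reconstructed from the trace; the following is a sketch, not the exact helper, showing the FreeBSD ps -c branch this run takes, with the sudo special-case reduced to a comment:

  killprocess() {                                # usage: killprocess <pid>
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                 # bail out if the pid is already gone
      if [ "$(uname)" = "Linux" ]; then
          process_name=$(ps -o comm= -p "$pid")
      else
          process_name=$(ps -c -o command "$pid" | tail -1)   # FreeBSD branch taken in this run
      fi
      # (a sudo-wrapped process gets special handling in the real helper)
      echo "killing process with pid $pid"
      kill "$pid"                                # the caller then does: wait "$pid"
  }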
[2024-05-13 05:59:51.053758] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.849 [2024-05-13 05:59:51.053771] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.849 [2024-05-13 05:59:51.053775] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c69c180 name raid_bdev1, state offline 00:07:42.849 05:59:51 -- common/autotest_common.sh@950 -- # wait 48530 00:07:42.849 [2024-05-13 05:59:51.071980] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@513 -- # return 0 00:07:43.136 00:07:43.136 real 0m5.609s 00:07:43.136 user 0m9.273s 00:07:43.136 sys 0m1.189s 00:07:43.136 05:59:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.136 05:59:51 -- common/autotest_common.sh@10 -- # set +x 00:07:43.136 ************************************ 00:07:43.136 END TEST raid_superblock_test 00:07:43.136 ************************************ 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:43.136 05:59:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:43.136 05:59:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.136 05:59:51 -- common/autotest_common.sh@10 -- # set +x 00:07:43.136 ************************************ 00:07:43.136 START TEST raid_state_function_test 00:07:43.136 ************************************ 00:07:43.136 05:59:51 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@226 -- # raid_pid=48675 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48675' 00:07:43.136 Process raid pid: 48675 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@225 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:43.136 05:59:51 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48675 /var/tmp/spdk-raid.sock 00:07:43.136 05:59:51 -- common/autotest_common.sh@819 -- # '[' -z 48675 ']' 00:07:43.136 05:59:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:43.136 05:59:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:43.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:43.136 05:59:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:43.136 05:59:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:43.136 05:59:51 -- common/autotest_common.sh@10 -- # set +x 00:07:43.136 [2024-05-13 05:59:51.357909] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:43.136 [2024-05-13 05:59:51.358275] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:44.094 EAL: TSC is not safe to use in SMP mode 00:07:44.094 EAL: TSC is not invariant 00:07:44.094 [2024-05-13 05:59:52.079307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.094 [2024-05-13 05:59:52.181014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.094 [2024-05-13 05:59:52.181414] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.094 [2024-05-13 05:59:52.181423] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.094 05:59:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:44.094 05:59:52 -- common/autotest_common.sh@852 -- # return 0 00:07:44.094 05:59:52 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:44.354 [2024-05-13 05:59:52.430875] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.354 [2024-05-13 05:59:52.430929] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.354 [2024-05-13 05:59:52.430933] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.354 [2024-05-13 05:59:52.430939] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@127 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:44.354 "name": "Existed_Raid", 00:07:44.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.354 "strip_size_kb": 0, 00:07:44.354 "state": "configuring", 00:07:44.354 "raid_level": "raid1", 00:07:44.354 "superblock": false, 00:07:44.354 "num_base_bdevs": 2, 00:07:44.354 "num_base_bdevs_discovered": 0, 00:07:44.354 "num_base_bdevs_operational": 2, 00:07:44.354 "base_bdevs_list": [ 00:07:44.354 { 00:07:44.354 "name": "BaseBdev1", 00:07:44.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.354 "is_configured": false, 00:07:44.354 "data_offset": 0, 00:07:44.354 "data_size": 0 00:07:44.354 }, 00:07:44.354 { 00:07:44.354 "name": "BaseBdev2", 00:07:44.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.354 "is_configured": false, 00:07:44.354 "data_offset": 0, 00:07:44.354 "data_size": 0 00:07:44.354 } 00:07:44.354 ] 00:07:44.354 }' 00:07:44.354 05:59:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:44.354 05:59:52 -- common/autotest_common.sh@10 -- # set +x 00:07:44.922 05:59:52 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:44.922 [2024-05-13 05:59:53.078870] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.922 [2024-05-13 05:59:53.078901] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a335500 name Existed_Raid, state configuring 00:07:44.922 05:59:53 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:45.182 [2024-05-13 05:59:53.258881] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.182 [2024-05-13 05:59:53.258933] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.182 [2024-05-13 05:59:53.258936] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.182 [2024-05-13 05:59:53.258942] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.182 05:59:53 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:45.182 [2024-05-13 05:59:53.444015] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.182 BaseBdev1 00:07:45.182 05:59:53 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:45.182 05:59:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:45.182 05:59:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:45.182 05:59:53 -- common/autotest_common.sh@889 -- # local i 00:07:45.182 05:59:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:45.182 05:59:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:45.182 05:59:53 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:45.442 05:59:53 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:45.701 [ 00:07:45.701 { 00:07:45.701 "name": "BaseBdev1", 00:07:45.701 "aliases": [ 00:07:45.701 "046f2802-10ee-11ef-ba60-3508ead7bdda" 00:07:45.701 ], 00:07:45.701 "product_name": 
"Malloc disk", 00:07:45.701 "block_size": 512, 00:07:45.701 "num_blocks": 65536, 00:07:45.701 "uuid": "046f2802-10ee-11ef-ba60-3508ead7bdda", 00:07:45.701 "assigned_rate_limits": { 00:07:45.701 "rw_ios_per_sec": 0, 00:07:45.701 "rw_mbytes_per_sec": 0, 00:07:45.701 "r_mbytes_per_sec": 0, 00:07:45.701 "w_mbytes_per_sec": 0 00:07:45.701 }, 00:07:45.701 "claimed": true, 00:07:45.701 "claim_type": "exclusive_write", 00:07:45.701 "zoned": false, 00:07:45.701 "supported_io_types": { 00:07:45.701 "read": true, 00:07:45.701 "write": true, 00:07:45.701 "unmap": true, 00:07:45.701 "write_zeroes": true, 00:07:45.701 "flush": true, 00:07:45.701 "reset": true, 00:07:45.701 "compare": false, 00:07:45.701 "compare_and_write": false, 00:07:45.701 "abort": true, 00:07:45.701 "nvme_admin": false, 00:07:45.701 "nvme_io": false 00:07:45.701 }, 00:07:45.701 "memory_domains": [ 00:07:45.701 { 00:07:45.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.701 "dma_device_type": 2 00:07:45.701 } 00:07:45.701 ], 00:07:45.701 "driver_specific": {} 00:07:45.701 } 00:07:45.701 ] 00:07:45.701 05:59:53 -- common/autotest_common.sh@895 -- # return 0 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:45.701 "name": "Existed_Raid", 00:07:45.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.701 "strip_size_kb": 0, 00:07:45.701 "state": "configuring", 00:07:45.701 "raid_level": "raid1", 00:07:45.701 "superblock": false, 00:07:45.701 "num_base_bdevs": 2, 00:07:45.701 "num_base_bdevs_discovered": 1, 00:07:45.701 "num_base_bdevs_operational": 2, 00:07:45.701 "base_bdevs_list": [ 00:07:45.701 { 00:07:45.701 "name": "BaseBdev1", 00:07:45.701 "uuid": "046f2802-10ee-11ef-ba60-3508ead7bdda", 00:07:45.701 "is_configured": true, 00:07:45.701 "data_offset": 0, 00:07:45.701 "data_size": 65536 00:07:45.701 }, 00:07:45.701 { 00:07:45.701 "name": "BaseBdev2", 00:07:45.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.701 "is_configured": false, 00:07:45.701 "data_offset": 0, 00:07:45.701 "data_size": 0 00:07:45.701 } 00:07:45.701 ] 00:07:45.701 }' 00:07:45.701 05:59:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:45.701 05:59:53 -- common/autotest_common.sh@10 -- # set +x 00:07:45.960 05:59:54 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:46.220 [2024-05-13 05:59:54.414882] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:46.220 [2024-05-13 05:59:54.414916] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a335500 name Existed_Raid, state configuring 00:07:46.220 05:59:54 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:07:46.220 05:59:54 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:46.479 [2024-05-13 05:59:54.594885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.479 [2024-05-13 05:59:54.595808] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.479 [2024-05-13 05:59:54.595850] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.479 05:59:54 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:46.738 05:59:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:46.738 "name": "Existed_Raid", 00:07:46.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.738 "strip_size_kb": 0, 00:07:46.738 "state": "configuring", 00:07:46.738 "raid_level": "raid1", 00:07:46.738 "superblock": false, 00:07:46.738 "num_base_bdevs": 2, 00:07:46.738 "num_base_bdevs_discovered": 1, 00:07:46.738 "num_base_bdevs_operational": 2, 00:07:46.738 "base_bdevs_list": [ 00:07:46.738 { 00:07:46.738 "name": "BaseBdev1", 00:07:46.738 "uuid": "046f2802-10ee-11ef-ba60-3508ead7bdda", 00:07:46.738 "is_configured": true, 00:07:46.738 "data_offset": 0, 00:07:46.738 "data_size": 65536 00:07:46.738 }, 00:07:46.738 { 00:07:46.738 "name": "BaseBdev2", 00:07:46.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.738 "is_configured": false, 00:07:46.738 "data_offset": 0, 00:07:46.738 "data_size": 0 00:07:46.738 } 00:07:46.738 ] 00:07:46.738 }' 00:07:46.738 05:59:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:46.738 05:59:54 -- common/autotest_common.sh@10 -- # set +x 00:07:46.998 05:59:55 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:46.998 [2024-05-13 05:59:55.231038] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.998 [2024-05-13 05:59:55.231068] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82a335a00 00:07:46.998 [2024-05-13 05:59:55.231071] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:46.998 [2024-05-13 05:59:55.231088] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82a398ec0 00:07:46.998 [2024-05-13 05:59:55.231207] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82a335a00 00:07:46.998 [2024-05-13 05:59:55.231210] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82a335a00 00:07:46.998 [2024-05-13 05:59:55.231238] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.998 BaseBdev2 00:07:46.998 05:59:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:46.998 05:59:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:46.998 05:59:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:46.998 05:59:55 -- common/autotest_common.sh@889 -- # local i 00:07:46.998 05:59:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:46.998 05:59:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:46.998 05:59:55 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:47.259 05:59:55 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.519 [ 00:07:47.519 { 00:07:47.519 "name": "BaseBdev2", 00:07:47.519 "aliases": [ 00:07:47.519 "057ffaf8-10ee-11ef-ba60-3508ead7bdda" 00:07:47.519 ], 00:07:47.519 "product_name": "Malloc disk", 00:07:47.519 "block_size": 512, 00:07:47.519 "num_blocks": 65536, 00:07:47.519 "uuid": "057ffaf8-10ee-11ef-ba60-3508ead7bdda", 00:07:47.519 "assigned_rate_limits": { 00:07:47.519 "rw_ios_per_sec": 0, 00:07:47.519 "rw_mbytes_per_sec": 0, 00:07:47.519 "r_mbytes_per_sec": 0, 00:07:47.519 "w_mbytes_per_sec": 0 00:07:47.519 }, 00:07:47.519 "claimed": true, 00:07:47.519 "claim_type": "exclusive_write", 00:07:47.519 "zoned": false, 00:07:47.519 "supported_io_types": { 00:07:47.519 "read": true, 00:07:47.519 "write": true, 00:07:47.519 "unmap": true, 00:07:47.519 "write_zeroes": true, 00:07:47.519 "flush": true, 00:07:47.519 "reset": true, 00:07:47.519 "compare": false, 00:07:47.519 "compare_and_write": false, 00:07:47.519 "abort": true, 00:07:47.519 "nvme_admin": false, 00:07:47.519 "nvme_io": false 00:07:47.519 }, 00:07:47.519 "memory_domains": [ 00:07:47.519 { 00:07:47.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.519 "dma_device_type": 2 00:07:47.519 } 00:07:47.519 ], 00:07:47.519 "driver_specific": {} 00:07:47.519 } 00:07:47.519 ] 00:07:47.519 05:59:55 -- common/autotest_common.sh@895 -- # return 0 00:07:47.519 05:59:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:47.519 05:59:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:47.519 05:59:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:47.519 05:59:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:47.519 05:59:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:47.519 05:59:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:47.519 05:59:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:47.519 05:59:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:47.519 05:59:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:47.519 05:59:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:47.519 05:59:55 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:47.519 05:59:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:47.519 05:59:55 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:47.519 05:59:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.779 05:59:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:47.779 "name": "Existed_Raid", 00:07:47.779 "uuid": "058001e6-10ee-11ef-ba60-3508ead7bdda", 00:07:47.779 "strip_size_kb": 0, 00:07:47.779 "state": "online", 00:07:47.779 "raid_level": "raid1", 00:07:47.779 "superblock": false, 00:07:47.779 "num_base_bdevs": 2, 00:07:47.779 "num_base_bdevs_discovered": 2, 00:07:47.779 "num_base_bdevs_operational": 2, 00:07:47.779 "base_bdevs_list": [ 00:07:47.779 { 00:07:47.779 "name": "BaseBdev1", 00:07:47.779 "uuid": "046f2802-10ee-11ef-ba60-3508ead7bdda", 00:07:47.779 "is_configured": true, 00:07:47.779 "data_offset": 0, 00:07:47.779 "data_size": 65536 00:07:47.779 }, 00:07:47.779 { 00:07:47.779 "name": "BaseBdev2", 00:07:47.779 "uuid": "057ffaf8-10ee-11ef-ba60-3508ead7bdda", 00:07:47.779 "is_configured": true, 00:07:47.779 "data_offset": 0, 00:07:47.779 "data_size": 65536 00:07:47.779 } 00:07:47.779 ] 00:07:47.779 }' 00:07:47.779 05:59:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:47.779 05:59:55 -- common/autotest_common.sh@10 -- # set +x 00:07:48.039 05:59:56 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:48.039 [2024-05-13 05:59:56.322912] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@196 -- # return 0 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.298 05:59:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:48.298 "name": "Existed_Raid", 00:07:48.298 "uuid": "058001e6-10ee-11ef-ba60-3508ead7bdda", 00:07:48.298 "strip_size_kb": 0, 00:07:48.298 "state": "online", 00:07:48.298 "raid_level": "raid1", 00:07:48.299 "superblock": false, 00:07:48.299 "num_base_bdevs": 2, 00:07:48.299 "num_base_bdevs_discovered": 1, 00:07:48.299 "num_base_bdevs_operational": 1, 00:07:48.299 
"base_bdevs_list": [ 00:07:48.299 { 00:07:48.299 "name": null, 00:07:48.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.299 "is_configured": false, 00:07:48.299 "data_offset": 0, 00:07:48.299 "data_size": 65536 00:07:48.299 }, 00:07:48.299 { 00:07:48.299 "name": "BaseBdev2", 00:07:48.299 "uuid": "057ffaf8-10ee-11ef-ba60-3508ead7bdda", 00:07:48.299 "is_configured": true, 00:07:48.299 "data_offset": 0, 00:07:48.299 "data_size": 65536 00:07:48.299 } 00:07:48.299 ] 00:07:48.299 }' 00:07:48.299 05:59:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:48.299 05:59:56 -- common/autotest_common.sh@10 -- # set +x 00:07:48.558 05:59:56 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:48.558 05:59:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:48.558 05:59:56 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.558 05:59:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:48.818 05:59:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:48.818 05:59:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.818 05:59:56 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:49.078 [2024-05-13 05:59:57.120065] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:49.078 [2024-05-13 05:59:57.120090] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.078 [2024-05-13 05:59:57.120102] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.078 [2024-05-13 05:59:57.129305] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.078 [2024-05-13 05:59:57.129313] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82a335a00 name Existed_Raid, state offline 00:07:49.078 05:59:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:49.078 05:59:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:49.078 05:59:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:49.078 05:59:57 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:49.078 05:59:57 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:49.078 05:59:57 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:49.078 05:59:57 -- bdev/bdev_raid.sh@287 -- # killprocess 48675 00:07:49.078 05:59:57 -- common/autotest_common.sh@926 -- # '[' -z 48675 ']' 00:07:49.078 05:59:57 -- common/autotest_common.sh@930 -- # kill -0 48675 00:07:49.078 05:59:57 -- common/autotest_common.sh@931 -- # uname 00:07:49.078 05:59:57 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:49.078 05:59:57 -- common/autotest_common.sh@934 -- # ps -c -o command 48675 00:07:49.078 05:59:57 -- common/autotest_common.sh@934 -- # tail -1 00:07:49.078 05:59:57 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:49.078 05:59:57 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:49.078 killing process with pid 48675 00:07:49.078 05:59:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48675' 00:07:49.078 05:59:57 -- common/autotest_common.sh@945 -- # kill 48675 00:07:49.078 [2024-05-13 05:59:57.339200] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.078 [2024-05-13 05:59:57.339250] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.078 05:59:57 -- common/autotest_common.sh@950 -- # wait 48675 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:49.338 00:07:49.338 real 0m6.223s 00:07:49.338 user 0m10.116s 00:07:49.338 sys 0m1.560s 00:07:49.338 05:59:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.338 05:59:57 -- common/autotest_common.sh@10 -- # set +x 00:07:49.338 ************************************ 00:07:49.338 END TEST raid_state_function_test 00:07:49.338 ************************************ 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:49.338 05:59:57 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:49.338 05:59:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.338 05:59:57 -- common/autotest_common.sh@10 -- # set +x 00:07:49.338 ************************************ 00:07:49.338 START TEST raid_state_function_test_sb 00:07:49.338 ************************************ 00:07:49.338 05:59:57 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:07:49.338 05:59:57 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:07:49.598 05:59:57 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:07:49.598 05:59:57 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:07:49.598 05:59:57 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:07:49.598 05:59:57 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:07:49.598 05:59:57 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:07:49.598 05:59:57 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:07:49.598 05:59:57 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:07:49.598 05:59:57 -- bdev/bdev_raid.sh@226 -- # raid_pid=48871 00:07:49.598 Process raid pid: 48871 00:07:49.598 05:59:57 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48871' 00:07:49.598 05:59:57 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:49.598 05:59:57 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48871 /var/tmp/spdk-raid.sock 00:07:49.598 05:59:57 -- common/autotest_common.sh@819 -- # '[' -z 48871 ']' 00:07:49.598 05:59:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:49.598 05:59:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:49.598 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk-raid.sock... 00:07:49.598 05:59:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:49.598 05:59:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:49.598 05:59:57 -- common/autotest_common.sh@10 -- # set +x 00:07:49.598 [2024-05-13 05:59:57.642492] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:49.598 [2024-05-13 05:59:57.642735] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:49.858 EAL: TSC is not safe to use in SMP mode 00:07:49.858 EAL: TSC is not invariant 00:07:49.858 [2024-05-13 05:59:58.070285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.117 [2024-05-13 05:59:58.164241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.117 [2024-05-13 05:59:58.164672] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.118 [2024-05-13 05:59:58.164686] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.377 05:59:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:50.377 05:59:58 -- common/autotest_common.sh@852 -- # return 0 00:07:50.377 05:59:58 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:50.637 [2024-05-13 05:59:58.695710] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.637 [2024-05-13 05:59:58.695759] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.637 [2024-05-13 05:59:58.695764] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.637 [2024-05-13 05:59:58.695770] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.637 05:59:58 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:50.637 05:59:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:50.637 05:59:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:50.637 05:59:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:50.637 05:59:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:50.637 05:59:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:50.637 05:59:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:50.637 05:59:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:50.637 05:59:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:50.637 05:59:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:50.637 05:59:58 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:50.637 05:59:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.637 05:59:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:50.637 "name": "Existed_Raid", 00:07:50.637 "uuid": "0790aafa-10ee-11ef-ba60-3508ead7bdda", 00:07:50.637 "strip_size_kb": 0, 00:07:50.637 "state": "configuring", 00:07:50.637 "raid_level": "raid1", 00:07:50.637 "superblock": true, 00:07:50.638 "num_base_bdevs": 2, 00:07:50.638 "num_base_bdevs_discovered": 0, 00:07:50.638 "num_base_bdevs_operational": 2, 00:07:50.638 "base_bdevs_list": [ 00:07:50.638 { 
00:07:50.638 "name": "BaseBdev1", 00:07:50.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.638 "is_configured": false, 00:07:50.638 "data_offset": 0, 00:07:50.638 "data_size": 0 00:07:50.638 }, 00:07:50.638 { 00:07:50.638 "name": "BaseBdev2", 00:07:50.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.638 "is_configured": false, 00:07:50.638 "data_offset": 0, 00:07:50.638 "data_size": 0 00:07:50.638 } 00:07:50.638 ] 00:07:50.638 }' 00:07:50.638 05:59:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:50.638 05:59:58 -- common/autotest_common.sh@10 -- # set +x 00:07:50.897 05:59:59 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:51.158 [2024-05-13 05:59:59.307685] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.158 [2024-05-13 05:59:59.307702] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b848500 name Existed_Raid, state configuring 00:07:51.158 05:59:59 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:51.418 [2024-05-13 05:59:59.495693] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.418 [2024-05-13 05:59:59.495726] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.418 [2024-05-13 05:59:59.495729] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.418 [2024-05-13 05:59:59.495735] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.418 05:59:59 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:51.418 [2024-05-13 05:59:59.676456] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.418 BaseBdev1 00:07:51.418 05:59:59 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:07:51.418 05:59:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:51.418 05:59:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:51.418 05:59:59 -- common/autotest_common.sh@889 -- # local i 00:07:51.418 05:59:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:51.418 05:59:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:51.418 05:59:59 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:51.678 05:59:59 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:51.937 [ 00:07:51.937 { 00:07:51.937 "name": "BaseBdev1", 00:07:51.937 "aliases": [ 00:07:51.937 "082633ed-10ee-11ef-ba60-3508ead7bdda" 00:07:51.938 ], 00:07:51.938 "product_name": "Malloc disk", 00:07:51.938 "block_size": 512, 00:07:51.938 "num_blocks": 65536, 00:07:51.938 "uuid": "082633ed-10ee-11ef-ba60-3508ead7bdda", 00:07:51.938 "assigned_rate_limits": { 00:07:51.938 "rw_ios_per_sec": 0, 00:07:51.938 "rw_mbytes_per_sec": 0, 00:07:51.938 "r_mbytes_per_sec": 0, 00:07:51.938 "w_mbytes_per_sec": 0 00:07:51.938 }, 00:07:51.938 "claimed": true, 00:07:51.938 "claim_type": "exclusive_write", 00:07:51.938 "zoned": false, 00:07:51.938 "supported_io_types": { 00:07:51.938 "read": true, 00:07:51.938 
"write": true, 00:07:51.938 "unmap": true, 00:07:51.938 "write_zeroes": true, 00:07:51.938 "flush": true, 00:07:51.938 "reset": true, 00:07:51.938 "compare": false, 00:07:51.938 "compare_and_write": false, 00:07:51.938 "abort": true, 00:07:51.938 "nvme_admin": false, 00:07:51.938 "nvme_io": false 00:07:51.938 }, 00:07:51.938 "memory_domains": [ 00:07:51.938 { 00:07:51.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.938 "dma_device_type": 2 00:07:51.938 } 00:07:51.938 ], 00:07:51.938 "driver_specific": {} 00:07:51.938 } 00:07:51.938 ] 00:07:51.938 06:00:00 -- common/autotest_common.sh@895 -- # return 0 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:51.938 "name": "Existed_Raid", 00:07:51.938 "uuid": "080abc76-10ee-11ef-ba60-3508ead7bdda", 00:07:51.938 "strip_size_kb": 0, 00:07:51.938 "state": "configuring", 00:07:51.938 "raid_level": "raid1", 00:07:51.938 "superblock": true, 00:07:51.938 "num_base_bdevs": 2, 00:07:51.938 "num_base_bdevs_discovered": 1, 00:07:51.938 "num_base_bdevs_operational": 2, 00:07:51.938 "base_bdevs_list": [ 00:07:51.938 { 00:07:51.938 "name": "BaseBdev1", 00:07:51.938 "uuid": "082633ed-10ee-11ef-ba60-3508ead7bdda", 00:07:51.938 "is_configured": true, 00:07:51.938 "data_offset": 2048, 00:07:51.938 "data_size": 63488 00:07:51.938 }, 00:07:51.938 { 00:07:51.938 "name": "BaseBdev2", 00:07:51.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.938 "is_configured": false, 00:07:51.938 "data_offset": 0, 00:07:51.938 "data_size": 0 00:07:51.938 } 00:07:51.938 ] 00:07:51.938 }' 00:07:51.938 06:00:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:51.938 06:00:00 -- common/autotest_common.sh@10 -- # set +x 00:07:52.199 06:00:00 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:52.460 [2024-05-13 06:00:00.619756] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.460 [2024-05-13 06:00:00.619789] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b848500 name Existed_Raid, state configuring 00:07:52.460 06:00:00 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:07:52.460 06:00:00 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:52.728 06:00:00 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 
32 512 -b BaseBdev1 00:07:52.728 BaseBdev1 00:07:52.728 06:00:00 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:07:52.728 06:00:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:07:52.728 06:00:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:52.728 06:00:00 -- common/autotest_common.sh@889 -- # local i 00:07:52.728 06:00:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:52.728 06:00:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:52.728 06:00:00 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:52.996 06:00:01 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:53.256 [ 00:07:53.256 { 00:07:53.256 "name": "BaseBdev1", 00:07:53.256 "aliases": [ 00:07:53.256 "08ec3635-10ee-11ef-ba60-3508ead7bdda" 00:07:53.256 ], 00:07:53.256 "product_name": "Malloc disk", 00:07:53.256 "block_size": 512, 00:07:53.256 "num_blocks": 65536, 00:07:53.256 "uuid": "08ec3635-10ee-11ef-ba60-3508ead7bdda", 00:07:53.256 "assigned_rate_limits": { 00:07:53.256 "rw_ios_per_sec": 0, 00:07:53.256 "rw_mbytes_per_sec": 0, 00:07:53.256 "r_mbytes_per_sec": 0, 00:07:53.256 "w_mbytes_per_sec": 0 00:07:53.256 }, 00:07:53.256 "claimed": false, 00:07:53.256 "zoned": false, 00:07:53.256 "supported_io_types": { 00:07:53.256 "read": true, 00:07:53.256 "write": true, 00:07:53.256 "unmap": true, 00:07:53.256 "write_zeroes": true, 00:07:53.256 "flush": true, 00:07:53.256 "reset": true, 00:07:53.256 "compare": false, 00:07:53.256 "compare_and_write": false, 00:07:53.256 "abort": true, 00:07:53.256 "nvme_admin": false, 00:07:53.256 "nvme_io": false 00:07:53.256 }, 00:07:53.256 "memory_domains": [ 00:07:53.256 { 00:07:53.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.256 "dma_device_type": 2 00:07:53.256 } 00:07:53.256 ], 00:07:53.256 "driver_specific": {} 00:07:53.256 } 00:07:53.256 ] 00:07:53.256 06:00:01 -- common/autotest_common.sh@895 -- # return 0 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:53.256 [2024-05-13 06:00:01.501619] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:53.256 [2024-05-13 06:00:01.502300] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.256 [2024-05-13 06:00:01.502339] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.256 06:00:01 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:53.516 06:00:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:53.516 "name": "Existed_Raid", 00:07:53.516 "uuid": "093cd0cd-10ee-11ef-ba60-3508ead7bdda", 00:07:53.516 "strip_size_kb": 0, 00:07:53.516 "state": "configuring", 00:07:53.516 "raid_level": "raid1", 00:07:53.516 "superblock": true, 00:07:53.516 "num_base_bdevs": 2, 00:07:53.516 "num_base_bdevs_discovered": 1, 00:07:53.516 "num_base_bdevs_operational": 2, 00:07:53.516 "base_bdevs_list": [ 00:07:53.516 { 00:07:53.516 "name": "BaseBdev1", 00:07:53.516 "uuid": "08ec3635-10ee-11ef-ba60-3508ead7bdda", 00:07:53.516 "is_configured": true, 00:07:53.516 "data_offset": 2048, 00:07:53.516 "data_size": 63488 00:07:53.516 }, 00:07:53.516 { 00:07:53.516 "name": "BaseBdev2", 00:07:53.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.516 "is_configured": false, 00:07:53.516 "data_offset": 0, 00:07:53.516 "data_size": 0 00:07:53.516 } 00:07:53.516 ] 00:07:53.516 }' 00:07:53.516 06:00:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:53.516 06:00:01 -- common/autotest_common.sh@10 -- # set +x 00:07:53.775 06:00:01 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:54.033 [2024-05-13 06:00:02.118104] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.033 [2024-05-13 06:00:02.118161] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b848a00 00:07:54.033 [2024-05-13 06:00:02.118165] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:54.033 [2024-05-13 06:00:02.118181] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b8abec0 00:07:54.033 [2024-05-13 06:00:02.118212] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b848a00 00:07:54.033 [2024-05-13 06:00:02.118215] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b848a00 00:07:54.033 [2024-05-13 06:00:02.118228] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.033 BaseBdev2 00:07:54.033 06:00:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:07:54.033 06:00:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:07:54.033 06:00:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:07:54.033 06:00:02 -- common/autotest_common.sh@889 -- # local i 00:07:54.033 06:00:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:07:54.033 06:00:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:07:54.033 06:00:02 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:54.033 06:00:02 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:54.290 [ 00:07:54.290 { 00:07:54.290 "name": "BaseBdev2", 00:07:54.290 "aliases": [ 00:07:54.290 "099addc0-10ee-11ef-ba60-3508ead7bdda" 00:07:54.290 ], 00:07:54.290 "product_name": "Malloc disk", 00:07:54.290 "block_size": 512, 00:07:54.290 "num_blocks": 65536, 
00:07:54.290 "uuid": "099addc0-10ee-11ef-ba60-3508ead7bdda", 00:07:54.290 "assigned_rate_limits": { 00:07:54.290 "rw_ios_per_sec": 0, 00:07:54.290 "rw_mbytes_per_sec": 0, 00:07:54.290 "r_mbytes_per_sec": 0, 00:07:54.290 "w_mbytes_per_sec": 0 00:07:54.290 }, 00:07:54.290 "claimed": true, 00:07:54.290 "claim_type": "exclusive_write", 00:07:54.290 "zoned": false, 00:07:54.290 "supported_io_types": { 00:07:54.290 "read": true, 00:07:54.290 "write": true, 00:07:54.290 "unmap": true, 00:07:54.290 "write_zeroes": true, 00:07:54.290 "flush": true, 00:07:54.290 "reset": true, 00:07:54.290 "compare": false, 00:07:54.290 "compare_and_write": false, 00:07:54.290 "abort": true, 00:07:54.290 "nvme_admin": false, 00:07:54.290 "nvme_io": false 00:07:54.290 }, 00:07:54.290 "memory_domains": [ 00:07:54.290 { 00:07:54.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.290 "dma_device_type": 2 00:07:54.290 } 00:07:54.290 ], 00:07:54.290 "driver_specific": {} 00:07:54.290 } 00:07:54.290 ] 00:07:54.290 06:00:02 -- common/autotest_common.sh@895 -- # return 0 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.290 06:00:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.549 06:00:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:54.549 "name": "Existed_Raid", 00:07:54.549 "uuid": "093cd0cd-10ee-11ef-ba60-3508ead7bdda", 00:07:54.549 "strip_size_kb": 0, 00:07:54.549 "state": "online", 00:07:54.549 "raid_level": "raid1", 00:07:54.549 "superblock": true, 00:07:54.549 "num_base_bdevs": 2, 00:07:54.549 "num_base_bdevs_discovered": 2, 00:07:54.549 "num_base_bdevs_operational": 2, 00:07:54.549 "base_bdevs_list": [ 00:07:54.549 { 00:07:54.549 "name": "BaseBdev1", 00:07:54.549 "uuid": "08ec3635-10ee-11ef-ba60-3508ead7bdda", 00:07:54.549 "is_configured": true, 00:07:54.549 "data_offset": 2048, 00:07:54.549 "data_size": 63488 00:07:54.549 }, 00:07:54.549 { 00:07:54.549 "name": "BaseBdev2", 00:07:54.549 "uuid": "099addc0-10ee-11ef-ba60-3508ead7bdda", 00:07:54.549 "is_configured": true, 00:07:54.549 "data_offset": 2048, 00:07:54.549 "data_size": 63488 00:07:54.549 } 00:07:54.549 ] 00:07:54.549 }' 00:07:54.549 06:00:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:54.549 06:00:02 -- common/autotest_common.sh@10 -- # set +x 00:07:54.808 06:00:02 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:54.808 [2024-05-13 06:00:03.082567] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@196 -- # return 0 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:54.808 06:00:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:55.067 06:00:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:55.067 06:00:03 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:55.067 06:00:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.067 06:00:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:55.067 "name": "Existed_Raid", 00:07:55.067 "uuid": "093cd0cd-10ee-11ef-ba60-3508ead7bdda", 00:07:55.067 "strip_size_kb": 0, 00:07:55.067 "state": "online", 00:07:55.067 "raid_level": "raid1", 00:07:55.067 "superblock": true, 00:07:55.067 "num_base_bdevs": 2, 00:07:55.067 "num_base_bdevs_discovered": 1, 00:07:55.067 "num_base_bdevs_operational": 1, 00:07:55.067 "base_bdevs_list": [ 00:07:55.067 { 00:07:55.067 "name": null, 00:07:55.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.067 "is_configured": false, 00:07:55.067 "data_offset": 2048, 00:07:55.067 "data_size": 63488 00:07:55.067 }, 00:07:55.067 { 00:07:55.067 "name": "BaseBdev2", 00:07:55.067 "uuid": "099addc0-10ee-11ef-ba60-3508ead7bdda", 00:07:55.067 "is_configured": true, 00:07:55.067 "data_offset": 2048, 00:07:55.067 "data_size": 63488 00:07:55.067 } 00:07:55.067 ] 00:07:55.067 }' 00:07:55.067 06:00:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:55.067 06:00:03 -- common/autotest_common.sh@10 -- # set +x 00:07:55.331 06:00:03 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:07:55.331 06:00:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:55.331 06:00:03 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:55.331 06:00:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:07:55.591 06:00:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:07:55.591 06:00:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:55.591 06:00:03 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:55.849 [2024-05-13 06:00:03.903721] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:55.849 [2024-05-13 06:00:03.903748] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.849 [2024-05-13 06:00:03.903758] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:07:55.849 [2024-05-13 06:00:03.908469] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:55.849 [2024-05-13 06:00:03.908485] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b848a00 name Existed_Raid, state offline 00:07:55.849 06:00:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:07:55.849 06:00:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:07:55.850 06:00:03 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:55.850 06:00:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:07:55.850 06:00:04 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:07:55.850 06:00:04 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:07:55.850 06:00:04 -- bdev/bdev_raid.sh@287 -- # killprocess 48871 00:07:55.850 06:00:04 -- common/autotest_common.sh@926 -- # '[' -z 48871 ']' 00:07:55.850 06:00:04 -- common/autotest_common.sh@930 -- # kill -0 48871 00:07:55.850 06:00:04 -- common/autotest_common.sh@931 -- # uname 00:07:55.850 06:00:04 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:07:55.850 06:00:04 -- common/autotest_common.sh@934 -- # ps -c -o command 48871 00:07:55.850 06:00:04 -- common/autotest_common.sh@934 -- # tail -1 00:07:55.850 06:00:04 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:07:55.850 06:00:04 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:07:55.850 killing process with pid 48871 00:07:55.850 06:00:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48871' 00:07:55.850 06:00:04 -- common/autotest_common.sh@945 -- # kill 48871 00:07:55.850 [2024-05-13 06:00:04.103247] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:55.850 [2024-05-13 06:00:04.103283] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.850 06:00:04 -- common/autotest_common.sh@950 -- # wait 48871 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:07:56.110 00:07:56.110 real 0m6.618s 00:07:56.110 user 0m11.179s 00:07:56.110 sys 0m1.341s 00:07:56.110 06:00:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.110 06:00:04 -- common/autotest_common.sh@10 -- # set +x 00:07:56.110 ************************************ 00:07:56.110 END TEST raid_state_function_test_sb 00:07:56.110 ************************************ 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:56.110 06:00:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:56.110 06:00:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.110 06:00:04 -- common/autotest_common.sh@10 -- # set +x 00:07:56.110 ************************************ 00:07:56.110 START TEST raid_superblock_test 00:07:56.110 ************************************ 00:07:56.110 06:00:04 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:07:56.110 
06:00:04 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@357 -- # raid_pid=49070 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@358 -- # waitforlisten 49070 /var/tmp/spdk-raid.sock 00:07:56.110 06:00:04 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:56.110 06:00:04 -- common/autotest_common.sh@819 -- # '[' -z 49070 ']' 00:07:56.110 06:00:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:56.110 06:00:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:56.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:56.110 06:00:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:56.110 06:00:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:56.110 06:00:04 -- common/autotest_common.sh@10 -- # set +x 00:07:56.110 [2024-05-13 06:00:04.310706] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:56.110 [2024-05-13 06:00:04.310994] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:07:56.681 EAL: TSC is not safe to use in SMP mode 00:07:56.681 EAL: TSC is not invariant 00:07:56.681 [2024-05-13 06:00:04.735645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.681 [2024-05-13 06:00:04.825174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.681 [2024-05-13 06:00:04.825612] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.681 [2024-05-13 06:00:04.825621] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.941 06:00:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:56.941 06:00:05 -- common/autotest_common.sh@852 -- # return 0 00:07:56.941 06:00:05 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:07:56.941 06:00:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:56.941 06:00:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:07:56.941 06:00:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:07:56.941 06:00:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:56.941 06:00:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:56.941 06:00:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:56.941 06:00:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:56.941 06:00:05 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:57.202 malloc1 00:07:57.202 06:00:05 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:57.468 [2024-05-13 06:00:05.537133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:57.468 [2024-05-13 06:00:05.537197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.468 [2024-05-13 06:00:05.537706] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829a7e780 00:07:57.468 [2024-05-13 06:00:05.537727] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.468 [2024-05-13 06:00:05.538404] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.468 [2024-05-13 06:00:05.538434] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:57.468 pt1 00:07:57.468 06:00:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:57.468 06:00:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:57.468 06:00:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:07:57.468 06:00:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:07:57.468 06:00:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:57.468 06:00:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:57.468 06:00:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:07:57.468 06:00:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:57.468 06:00:05 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:57.468 malloc2 00:07:57.468 06:00:05 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:57.728 [2024-05-13 06:00:05.897344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:57.728 [2024-05-13 06:00:05.897394] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:57.728 [2024-05-13 06:00:05.897418] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829a7ec80 00:07:57.728 [2024-05-13 06:00:05.897440] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.728 [2024-05-13 06:00:05.897929] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.728 [2024-05-13 06:00:05.897955] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:57.728 pt2 00:07:57.728 06:00:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:07:57.728 06:00:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:07:57.728 06:00:05 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:07:57.987 [2024-05-13 06:00:06.081454] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:57.987 [2024-05-13 06:00:06.081879] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:57.987 [2024-05-13 06:00:06.081932] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829a7ef00 00:07:57.987 [2024-05-13 06:00:06.081937] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:57.987 [2024-05-13 06:00:06.081965] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829ae1e20 00:07:57.987 [2024-05-13 06:00:06.082015] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829a7ef00 00:07:57.987 [2024-05-13 06:00:06.082018] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x829a7ef00 00:07:57.987 [2024-05-13 06:00:06.082036] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.987 06:00:06 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:57.987 06:00:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:07:57.987 06:00:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:07:57.987 06:00:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:07:57.987 06:00:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:07:57.987 06:00:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:07:57.987 06:00:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:07:57.987 06:00:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:07:57.987 06:00:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:07:57.987 06:00:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:07:57.987 06:00:06 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:57.987 06:00:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.245 06:00:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:07:58.245 "name": "raid_bdev1", 00:07:58.245 "uuid": "0bf7a490-10ee-11ef-ba60-3508ead7bdda", 00:07:58.245 "strip_size_kb": 0, 00:07:58.245 "state": "online", 00:07:58.245 "raid_level": "raid1", 00:07:58.245 "superblock": true, 00:07:58.245 "num_base_bdevs": 2, 00:07:58.245 "num_base_bdevs_discovered": 2, 00:07:58.245 "num_base_bdevs_operational": 2, 00:07:58.245 "base_bdevs_list": [ 00:07:58.245 { 00:07:58.245 "name": "pt1", 00:07:58.245 "uuid": "c6fccc21-00c5-1f57-af71-0814afe6fd9b", 00:07:58.245 "is_configured": true, 00:07:58.245 "data_offset": 2048, 00:07:58.245 "data_size": 63488 00:07:58.245 }, 00:07:58.245 { 00:07:58.245 "name": "pt2", 00:07:58.245 "uuid": "72977da8-fc0e-d354-8e5c-5e856d3603f5", 00:07:58.245 "is_configured": true, 00:07:58.245 "data_offset": 2048, 00:07:58.245 "data_size": 63488 00:07:58.245 } 00:07:58.245 ] 00:07:58.245 }' 00:07:58.245 06:00:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:07:58.245 06:00:06 -- common/autotest_common.sh@10 -- # set +x 00:07:58.504 06:00:06 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:58.504 06:00:06 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:07:58.504 [2024-05-13 06:00:06.717842] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.504 06:00:06 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0bf7a490-10ee-11ef-ba60-3508ead7bdda 00:07:58.504 06:00:06 -- bdev/bdev_raid.sh@380 -- # '[' -z 0bf7a490-10ee-11ef-ba60-3508ead7bdda ']' 00:07:58.504 06:00:06 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:58.763 [2024-05-13 06:00:06.897900] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:58.763 [2024-05-13 06:00:06.897917] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.763 [2024-05-13 06:00:06.897930] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.763 [2024-05-13 
06:00:06.897958] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.763 [2024-05-13 06:00:06.897961] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829a7ef00 name raid_bdev1, state offline 00:07:58.763 06:00:06 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:58.763 06:00:06 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:07:59.021 06:00:07 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:07:59.021 06:00:07 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:07:59.021 06:00:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:59.021 06:00:07 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:59.021 06:00:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:07:59.021 06:00:07 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:59.279 06:00:07 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:59.280 06:00:07 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:59.539 06:00:07 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:07:59.539 06:00:07 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:07:59.539 06:00:07 -- common/autotest_common.sh@640 -- # local es=0 00:07:59.539 06:00:07 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:07:59.539 06:00:07 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.539 06:00:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.539 06:00:07 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.539 06:00:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.539 06:00:07 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.539 06:00:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.539 06:00:07 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.539 06:00:07 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:59.539 06:00:07 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:07:59.539 [2024-05-13 06:00:07.790425] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:59.539 [2024-05-13 06:00:07.790886] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:59.539 [2024-05-13 06:00:07.790910] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:07:59.539 [2024-05-13 06:00:07.790952] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:07:59.539 [2024-05-13 06:00:07.790960] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.539 [2024-05-13 06:00:07.790964] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829a7ec80 name raid_bdev1, state configuring 00:07:59.539 request: 00:07:59.539 { 00:07:59.539 "name": "raid_bdev1", 00:07:59.539 "raid_level": "raid1", 00:07:59.539 "base_bdevs": [ 00:07:59.539 "malloc1", 00:07:59.539 "malloc2" 00:07:59.539 ], 00:07:59.539 "superblock": false, 00:07:59.539 "method": "bdev_raid_create", 00:07:59.539 "req_id": 1 00:07:59.539 } 00:07:59.539 Got JSON-RPC error response 00:07:59.539 response: 00:07:59.539 { 00:07:59.539 "code": -17, 00:07:59.539 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:59.539 } 00:07:59.539 06:00:07 -- common/autotest_common.sh@643 -- # es=1 00:07:59.539 06:00:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:59.539 06:00:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:59.539 06:00:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:59.539 06:00:07 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:59.539 06:00:07 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:07:59.797 06:00:07 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:07:59.797 06:00:07 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:07:59.797 06:00:07 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:00.057 [2024-05-13 06:00:08.154630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:00.057 [2024-05-13 06:00:08.154686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.057 [2024-05-13 06:00:08.154708] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829a7e780 00:08:00.057 [2024-05-13 06:00:08.154714] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.057 [2024-05-13 06:00:08.155200] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.057 [2024-05-13 06:00:08.155227] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:00.057 [2024-05-13 06:00:08.155245] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:08:00.057 [2024-05-13 06:00:08.155253] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:00.057 pt1 00:08:00.057 06:00:08 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:00.057 06:00:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:00.057 06:00:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:00.057 06:00:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:00.057 06:00:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:00.057 06:00:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:00.057 06:00:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:00.057 06:00:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:00.057 06:00:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:00.057 06:00:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:00.057 06:00:08 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:00.057 06:00:08 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.057 06:00:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:00.057 "name": "raid_bdev1", 00:08:00.057 "uuid": "0bf7a490-10ee-11ef-ba60-3508ead7bdda", 00:08:00.057 "strip_size_kb": 0, 00:08:00.057 "state": "configuring", 00:08:00.057 "raid_level": "raid1", 00:08:00.057 "superblock": true, 00:08:00.057 "num_base_bdevs": 2, 00:08:00.057 "num_base_bdevs_discovered": 1, 00:08:00.057 "num_base_bdevs_operational": 2, 00:08:00.057 "base_bdevs_list": [ 00:08:00.057 { 00:08:00.057 "name": "pt1", 00:08:00.057 "uuid": "c6fccc21-00c5-1f57-af71-0814afe6fd9b", 00:08:00.057 "is_configured": true, 00:08:00.057 "data_offset": 2048, 00:08:00.057 "data_size": 63488 00:08:00.057 }, 00:08:00.057 { 00:08:00.057 "name": null, 00:08:00.057 "uuid": "72977da8-fc0e-d354-8e5c-5e856d3603f5", 00:08:00.057 "is_configured": false, 00:08:00.057 "data_offset": 2048, 00:08:00.057 "data_size": 63488 00:08:00.057 } 00:08:00.057 ] 00:08:00.057 }' 00:08:00.057 06:00:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:00.057 06:00:08 -- common/autotest_common.sh@10 -- # set +x 00:08:00.316 06:00:08 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:08:00.316 06:00:08 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:08:00.316 06:00:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:00.316 06:00:08 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:00.575 [2024-05-13 06:00:08.771000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:00.575 [2024-05-13 06:00:08.771044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:00.575 [2024-05-13 06:00:08.771068] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829a7ef00 00:08:00.575 [2024-05-13 06:00:08.771074] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:00.575 [2024-05-13 06:00:08.771159] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:00.575 [2024-05-13 06:00:08.771166] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:00.575 [2024-05-13 06:00:08.771182] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:00.575 [2024-05-13 06:00:08.771188] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:00.575 [2024-05-13 06:00:08.771208] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829a7f180 00:08:00.575 [2024-05-13 06:00:08.771211] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:00.575 [2024-05-13 06:00:08.771225] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829ae1e20 00:08:00.575 [2024-05-13 06:00:08.771260] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829a7f180 00:08:00.575 [2024-05-13 06:00:08.771263] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x829a7f180 00:08:00.575 [2024-05-13 06:00:08.771279] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.575 pt2 00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:00.575 06:00:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:00.833 06:00:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:00.833 "name": "raid_bdev1", 00:08:00.833 "uuid": "0bf7a490-10ee-11ef-ba60-3508ead7bdda", 00:08:00.833 "strip_size_kb": 0, 00:08:00.833 "state": "online", 00:08:00.833 "raid_level": "raid1", 00:08:00.833 "superblock": true, 00:08:00.833 "num_base_bdevs": 2, 00:08:00.833 "num_base_bdevs_discovered": 2, 00:08:00.833 "num_base_bdevs_operational": 2, 00:08:00.833 "base_bdevs_list": [ 00:08:00.833 { 00:08:00.833 "name": "pt1", 00:08:00.833 "uuid": "c6fccc21-00c5-1f57-af71-0814afe6fd9b", 00:08:00.833 "is_configured": true, 00:08:00.833 "data_offset": 2048, 00:08:00.833 "data_size": 63488 00:08:00.833 }, 00:08:00.833 { 00:08:00.833 "name": "pt2", 00:08:00.833 "uuid": "72977da8-fc0e-d354-8e5c-5e856d3603f5", 00:08:00.833 "is_configured": true, 00:08:00.833 "data_offset": 2048, 00:08:00.833 "data_size": 63488 00:08:00.833 } 00:08:00.833 ] 00:08:00.833 }' 00:08:00.833 06:00:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:00.833 06:00:08 -- common/autotest_common.sh@10 -- # set +x 00:08:01.092 06:00:09 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:01.092 06:00:09 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:08:01.351 [2024-05-13 06:00:09.395383] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@430 -- # '[' 0bf7a490-10ee-11ef-ba60-3508ead7bdda '!=' 0bf7a490-10ee-11ef-ba60-3508ead7bdda ']' 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@196 -- # return 0 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@436 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:01.351 [2024-05-13 06:00:09.575463] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:01.351 06:00:09 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:01.351 06:00:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.610 06:00:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:01.610 "name": "raid_bdev1", 00:08:01.610 "uuid": "0bf7a490-10ee-11ef-ba60-3508ead7bdda", 00:08:01.610 "strip_size_kb": 0, 00:08:01.610 "state": "online", 00:08:01.610 "raid_level": "raid1", 00:08:01.610 "superblock": true, 00:08:01.610 "num_base_bdevs": 2, 00:08:01.610 "num_base_bdevs_discovered": 1, 00:08:01.610 "num_base_bdevs_operational": 1, 00:08:01.610 "base_bdevs_list": [ 00:08:01.610 { 00:08:01.610 "name": null, 00:08:01.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.610 "is_configured": false, 00:08:01.610 "data_offset": 2048, 00:08:01.610 "data_size": 63488 00:08:01.610 }, 00:08:01.610 { 00:08:01.610 "name": "pt2", 00:08:01.610 "uuid": "72977da8-fc0e-d354-8e5c-5e856d3603f5", 00:08:01.610 "is_configured": true, 00:08:01.610 "data_offset": 2048, 00:08:01.610 "data_size": 63488 00:08:01.610 } 00:08:01.610 ] 00:08:01.610 }' 00:08:01.610 06:00:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:01.610 06:00:09 -- common/autotest_common.sh@10 -- # set +x 00:08:01.868 06:00:10 -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:02.130 [2024-05-13 06:00:10.179805] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.130 [2024-05-13 06:00:10.179826] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.130 [2024-05-13 06:00:10.179840] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.130 [2024-05-13 06:00:10.179848] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.130 [2024-05-13 06:00:10.179851] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829a7f180 name raid_bdev1, state offline 00:08:02.130 06:00:10 -- bdev/bdev_raid.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:02.130 06:00:10 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:08:02.130 06:00:10 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:08:02.130 06:00:10 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:08:02.130 06:00:10 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:08:02.130 06:00:10 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:08:02.130 06:00:10 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:02.391 06:00:10 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:08:02.391 06:00:10 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:08:02.391 06:00:10 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:08:02.391 06:00:10 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:08:02.391 06:00:10 -- bdev/bdev_raid.sh@462 -- # i=1 00:08:02.391 06:00:10 -- bdev/bdev_raid.sh@463 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:02.650 [2024-05-13 
06:00:10.704114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:02.650 [2024-05-13 06:00:10.704185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.650 [2024-05-13 06:00:10.704208] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829a7ef00 00:08:02.650 [2024-05-13 06:00:10.704213] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.650 [2024-05-13 06:00:10.704715] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.650 [2024-05-13 06:00:10.704740] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:02.650 [2024-05-13 06:00:10.704758] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:02.650 [2024-05-13 06:00:10.704767] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:02.650 [2024-05-13 06:00:10.704784] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829a7f180 00:08:02.650 [2024-05-13 06:00:10.704787] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:02.650 [2024-05-13 06:00:10.704803] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829ae1e20 00:08:02.650 [2024-05-13 06:00:10.704834] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829a7f180 00:08:02.650 [2024-05-13 06:00:10.704837] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x829a7f180 00:08:02.650 [2024-05-13 06:00:10.704853] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.650 pt2 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:02.650 "name": "raid_bdev1", 00:08:02.650 "uuid": "0bf7a490-10ee-11ef-ba60-3508ead7bdda", 00:08:02.650 "strip_size_kb": 0, 00:08:02.650 "state": "online", 00:08:02.650 "raid_level": "raid1", 00:08:02.650 "superblock": true, 00:08:02.650 "num_base_bdevs": 2, 00:08:02.650 "num_base_bdevs_discovered": 1, 00:08:02.650 "num_base_bdevs_operational": 1, 00:08:02.650 "base_bdevs_list": [ 00:08:02.650 { 00:08:02.650 "name": null, 00:08:02.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.650 "is_configured": false, 00:08:02.650 "data_offset": 2048, 00:08:02.650 "data_size": 63488 00:08:02.650 }, 00:08:02.650 { 00:08:02.650 "name": "pt2", 00:08:02.650 "uuid": "72977da8-fc0e-d354-8e5c-5e856d3603f5", 
00:08:02.650 "is_configured": true, 00:08:02.650 "data_offset": 2048, 00:08:02.650 "data_size": 63488 00:08:02.650 } 00:08:02.650 ] 00:08:02.650 }' 00:08:02.650 06:00:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:02.650 06:00:10 -- common/autotest_common.sh@10 -- # set +x 00:08:02.909 06:00:11 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:08:02.909 06:00:11 -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:02.909 06:00:11 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:08:03.169 [2024-05-13 06:00:11.328495] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.169 06:00:11 -- bdev/bdev_raid.sh@506 -- # '[' 0bf7a490-10ee-11ef-ba60-3508ead7bdda '!=' 0bf7a490-10ee-11ef-ba60-3508ead7bdda ']' 00:08:03.169 06:00:11 -- bdev/bdev_raid.sh@511 -- # killprocess 49070 00:08:03.169 06:00:11 -- common/autotest_common.sh@926 -- # '[' -z 49070 ']' 00:08:03.169 06:00:11 -- common/autotest_common.sh@930 -- # kill -0 49070 00:08:03.169 06:00:11 -- common/autotest_common.sh@931 -- # uname 00:08:03.169 06:00:11 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:03.169 06:00:11 -- common/autotest_common.sh@934 -- # ps -c -o command 49070 00:08:03.169 06:00:11 -- common/autotest_common.sh@934 -- # tail -1 00:08:03.169 06:00:11 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:03.169 06:00:11 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:03.169 killing process with pid 49070 00:08:03.169 06:00:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49070' 00:08:03.169 06:00:11 -- common/autotest_common.sh@945 -- # kill 49070 00:08:03.169 [2024-05-13 06:00:11.358210] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.169 [2024-05-13 06:00:11.358226] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.169 [2024-05-13 06:00:11.358246] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.169 [2024-05-13 06:00:11.358249] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829a7f180 name raid_bdev1, state offline 00:08:03.169 06:00:11 -- common/autotest_common.sh@950 -- # wait 49070 00:08:03.169 [2024-05-13 06:00:11.367717] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@513 -- # return 0 00:08:03.428 00:08:03.428 real 0m7.209s 00:08:03.428 user 0m12.302s 00:08:03.428 sys 0m1.462s 00:08:03.428 06:00:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.428 06:00:11 -- common/autotest_common.sh@10 -- # set +x 00:08:03.428 ************************************ 00:08:03.428 END TEST raid_superblock_test 00:08:03.428 ************************************ 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:03.428 06:00:11 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:03.428 06:00:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.428 06:00:11 -- common/autotest_common.sh@10 -- # set +x 00:08:03.428 ************************************ 00:08:03.428 START TEST raid_state_function_test 00:08:03.428 ************************************ 00:08:03.428 06:00:11 -- 
common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@226 -- # raid_pid=49285 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49285' 00:08:03.428 Process raid pid: 49285 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:03.428 06:00:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 49285 /var/tmp/spdk-raid.sock 00:08:03.428 06:00:11 -- common/autotest_common.sh@819 -- # '[' -z 49285 ']' 00:08:03.428 06:00:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:03.428 06:00:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:03.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:03.428 06:00:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:03.428 06:00:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:03.428 06:00:11 -- common/autotest_common.sh@10 -- # set +x 00:08:03.429 [2024-05-13 06:00:11.578955] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:03.429 [2024-05-13 06:00:11.579293] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:03.998 EAL: TSC is not safe to use in SMP mode 00:08:03.998 EAL: TSC is not invariant 00:08:03.998 [2024-05-13 06:00:12.000633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.998 [2024-05-13 06:00:12.087409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.998 [2024-05-13 06:00:12.087834] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.998 [2024-05-13 06:00:12.087843] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.259 06:00:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:04.259 06:00:12 -- common/autotest_common.sh@852 -- # return 0 00:08:04.259 06:00:12 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:04.518 [2024-05-13 06:00:12.631163] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.518 [2024-05-13 06:00:12.631230] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.518 [2024-05-13 06:00:12.631234] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.518 [2024-05-13 06:00:12.631240] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.518 [2024-05-13 06:00:12.631242] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:04.518 [2024-05-13 06:00:12.631248] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:04.518 06:00:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.518 06:00:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:04.518 06:00:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:04.518 06:00:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:04.518 06:00:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:04.518 06:00:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:04.518 06:00:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:04.518 06:00:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:04.518 06:00:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:04.518 06:00:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:04.518 06:00:12 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:04.518 06:00:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.787 06:00:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:04.787 "name": "Existed_Raid", 00:08:04.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.787 "strip_size_kb": 64, 00:08:04.787 "state": "configuring", 00:08:04.787 "raid_level": "raid0", 00:08:04.787 "superblock": false, 00:08:04.787 "num_base_bdevs": 3, 00:08:04.787 "num_base_bdevs_discovered": 0, 00:08:04.787 "num_base_bdevs_operational": 3, 00:08:04.787 "base_bdevs_list": [ 00:08:04.787 { 00:08:04.787 "name": "BaseBdev1", 00:08:04.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.787 "is_configured": false, 00:08:04.787 "data_offset": 0, 00:08:04.787 
"data_size": 0 00:08:04.787 }, 00:08:04.787 { 00:08:04.787 "name": "BaseBdev2", 00:08:04.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.787 "is_configured": false, 00:08:04.787 "data_offset": 0, 00:08:04.787 "data_size": 0 00:08:04.787 }, 00:08:04.787 { 00:08:04.787 "name": "BaseBdev3", 00:08:04.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.787 "is_configured": false, 00:08:04.787 "data_offset": 0, 00:08:04.787 "data_size": 0 00:08:04.787 } 00:08:04.787 ] 00:08:04.787 }' 00:08:04.787 06:00:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:04.787 06:00:12 -- common/autotest_common.sh@10 -- # set +x 00:08:05.046 06:00:13 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:05.047 [2024-05-13 06:00:13.271511] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.047 [2024-05-13 06:00:13.271533] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bbef500 name Existed_Raid, state configuring 00:08:05.047 06:00:13 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:05.305 [2024-05-13 06:00:13.451618] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.306 [2024-05-13 06:00:13.451659] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.306 [2024-05-13 06:00:13.451663] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.306 [2024-05-13 06:00:13.451669] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.306 [2024-05-13 06:00:13.451671] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:05.306 [2024-05-13 06:00:13.451677] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.306 06:00:13 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.564 [2024-05-13 06:00:13.632478] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.564 BaseBdev1 00:08:05.564 06:00:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:05.564 06:00:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:05.564 06:00:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:05.564 06:00:13 -- common/autotest_common.sh@889 -- # local i 00:08:05.564 06:00:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:05.564 06:00:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:05.564 06:00:13 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:05.564 06:00:13 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:05.824 [ 00:08:05.824 { 00:08:05.824 "name": "BaseBdev1", 00:08:05.824 "aliases": [ 00:08:05.824 "1077b96f-10ee-11ef-ba60-3508ead7bdda" 00:08:05.824 ], 00:08:05.824 "product_name": "Malloc disk", 00:08:05.824 "block_size": 512, 00:08:05.824 "num_blocks": 65536, 00:08:05.824 "uuid": "1077b96f-10ee-11ef-ba60-3508ead7bdda", 00:08:05.824 "assigned_rate_limits": { 00:08:05.824 
"rw_ios_per_sec": 0, 00:08:05.824 "rw_mbytes_per_sec": 0, 00:08:05.824 "r_mbytes_per_sec": 0, 00:08:05.824 "w_mbytes_per_sec": 0 00:08:05.824 }, 00:08:05.824 "claimed": true, 00:08:05.824 "claim_type": "exclusive_write", 00:08:05.824 "zoned": false, 00:08:05.824 "supported_io_types": { 00:08:05.824 "read": true, 00:08:05.824 "write": true, 00:08:05.824 "unmap": true, 00:08:05.824 "write_zeroes": true, 00:08:05.824 "flush": true, 00:08:05.824 "reset": true, 00:08:05.824 "compare": false, 00:08:05.824 "compare_and_write": false, 00:08:05.824 "abort": true, 00:08:05.824 "nvme_admin": false, 00:08:05.824 "nvme_io": false 00:08:05.824 }, 00:08:05.824 "memory_domains": [ 00:08:05.824 { 00:08:05.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.824 "dma_device_type": 2 00:08:05.824 } 00:08:05.824 ], 00:08:05.824 "driver_specific": {} 00:08:05.824 } 00:08:05.824 ] 00:08:05.824 06:00:14 -- common/autotest_common.sh@895 -- # return 0 00:08:05.824 06:00:14 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.824 06:00:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:05.824 06:00:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:05.824 06:00:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:05.824 06:00:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:05.824 06:00:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:05.824 06:00:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:05.824 06:00:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:05.824 06:00:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:05.824 06:00:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:05.824 06:00:14 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:05.824 06:00:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.083 06:00:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:06.083 "name": "Existed_Raid", 00:08:06.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.083 "strip_size_kb": 64, 00:08:06.083 "state": "configuring", 00:08:06.083 "raid_level": "raid0", 00:08:06.083 "superblock": false, 00:08:06.083 "num_base_bdevs": 3, 00:08:06.083 "num_base_bdevs_discovered": 1, 00:08:06.083 "num_base_bdevs_operational": 3, 00:08:06.083 "base_bdevs_list": [ 00:08:06.083 { 00:08:06.083 "name": "BaseBdev1", 00:08:06.083 "uuid": "1077b96f-10ee-11ef-ba60-3508ead7bdda", 00:08:06.083 "is_configured": true, 00:08:06.083 "data_offset": 0, 00:08:06.083 "data_size": 65536 00:08:06.083 }, 00:08:06.083 { 00:08:06.083 "name": "BaseBdev2", 00:08:06.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.083 "is_configured": false, 00:08:06.083 "data_offset": 0, 00:08:06.083 "data_size": 0 00:08:06.083 }, 00:08:06.083 { 00:08:06.083 "name": "BaseBdev3", 00:08:06.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.083 "is_configured": false, 00:08:06.083 "data_offset": 0, 00:08:06.083 "data_size": 0 00:08:06.083 } 00:08:06.083 ] 00:08:06.083 }' 00:08:06.084 06:00:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:06.084 06:00:14 -- common/autotest_common.sh@10 -- # set +x 00:08:06.343 06:00:14 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:06.343 [2024-05-13 06:00:14.632275] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:06.343 [2024-05-13 06:00:14.632300] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bbef500 name Existed_Raid, state configuring 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:06.602 [2024-05-13 06:00:14.812388] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.602 [2024-05-13 06:00:14.813029] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.602 [2024-05-13 06:00:14.813067] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.602 [2024-05-13 06:00:14.813071] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:06.602 [2024-05-13 06:00:14.813077] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:06.602 06:00:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.860 06:00:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:06.860 "name": "Existed_Raid", 00:08:06.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.860 "strip_size_kb": 64, 00:08:06.860 "state": "configuring", 00:08:06.860 "raid_level": "raid0", 00:08:06.860 "superblock": false, 00:08:06.860 "num_base_bdevs": 3, 00:08:06.860 "num_base_bdevs_discovered": 1, 00:08:06.860 "num_base_bdevs_operational": 3, 00:08:06.860 "base_bdevs_list": [ 00:08:06.860 { 00:08:06.860 "name": "BaseBdev1", 00:08:06.860 "uuid": "1077b96f-10ee-11ef-ba60-3508ead7bdda", 00:08:06.860 "is_configured": true, 00:08:06.860 "data_offset": 0, 00:08:06.860 "data_size": 65536 00:08:06.860 }, 00:08:06.860 { 00:08:06.860 "name": "BaseBdev2", 00:08:06.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.860 "is_configured": false, 00:08:06.860 "data_offset": 0, 00:08:06.860 "data_size": 0 00:08:06.860 }, 00:08:06.860 { 00:08:06.860 "name": "BaseBdev3", 00:08:06.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.860 "is_configured": false, 00:08:06.860 "data_offset": 0, 00:08:06.860 "data_size": 0 00:08:06.860 } 00:08:06.860 ] 00:08:06.860 }' 00:08:06.860 06:00:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:06.860 
06:00:15 -- common/autotest_common.sh@10 -- # set +x 00:08:07.118 06:00:15 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.377 [2024-05-13 06:00:15.444824] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.377 BaseBdev2 00:08:07.377 06:00:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:07.377 06:00:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:08:07.377 06:00:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:07.377 06:00:15 -- common/autotest_common.sh@889 -- # local i 00:08:07.377 06:00:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:07.377 06:00:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:07.377 06:00:15 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:07.377 06:00:15 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.635 [ 00:08:07.635 { 00:08:07.635 "name": "BaseBdev2", 00:08:07.635 "aliases": [ 00:08:07.635 "118c5d9e-10ee-11ef-ba60-3508ead7bdda" 00:08:07.635 ], 00:08:07.635 "product_name": "Malloc disk", 00:08:07.635 "block_size": 512, 00:08:07.635 "num_blocks": 65536, 00:08:07.635 "uuid": "118c5d9e-10ee-11ef-ba60-3508ead7bdda", 00:08:07.635 "assigned_rate_limits": { 00:08:07.635 "rw_ios_per_sec": 0, 00:08:07.635 "rw_mbytes_per_sec": 0, 00:08:07.635 "r_mbytes_per_sec": 0, 00:08:07.635 "w_mbytes_per_sec": 0 00:08:07.635 }, 00:08:07.635 "claimed": true, 00:08:07.635 "claim_type": "exclusive_write", 00:08:07.635 "zoned": false, 00:08:07.635 "supported_io_types": { 00:08:07.635 "read": true, 00:08:07.635 "write": true, 00:08:07.635 "unmap": true, 00:08:07.635 "write_zeroes": true, 00:08:07.635 "flush": true, 00:08:07.635 "reset": true, 00:08:07.635 "compare": false, 00:08:07.635 "compare_and_write": false, 00:08:07.635 "abort": true, 00:08:07.635 "nvme_admin": false, 00:08:07.635 "nvme_io": false 00:08:07.635 }, 00:08:07.635 "memory_domains": [ 00:08:07.635 { 00:08:07.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.635 "dma_device_type": 2 00:08:07.635 } 00:08:07.635 ], 00:08:07.635 "driver_specific": {} 00:08:07.635 } 00:08:07.635 ] 00:08:07.635 06:00:15 -- common/autotest_common.sh@895 -- # return 0 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:08:07.635 06:00:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.894 06:00:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:07.894 "name": "Existed_Raid", 00:08:07.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.894 "strip_size_kb": 64, 00:08:07.894 "state": "configuring", 00:08:07.894 "raid_level": "raid0", 00:08:07.894 "superblock": false, 00:08:07.894 "num_base_bdevs": 3, 00:08:07.894 "num_base_bdevs_discovered": 2, 00:08:07.894 "num_base_bdevs_operational": 3, 00:08:07.894 "base_bdevs_list": [ 00:08:07.894 { 00:08:07.894 "name": "BaseBdev1", 00:08:07.894 "uuid": "1077b96f-10ee-11ef-ba60-3508ead7bdda", 00:08:07.894 "is_configured": true, 00:08:07.894 "data_offset": 0, 00:08:07.894 "data_size": 65536 00:08:07.894 }, 00:08:07.894 { 00:08:07.895 "name": "BaseBdev2", 00:08:07.895 "uuid": "118c5d9e-10ee-11ef-ba60-3508ead7bdda", 00:08:07.895 "is_configured": true, 00:08:07.895 "data_offset": 0, 00:08:07.895 "data_size": 65536 00:08:07.895 }, 00:08:07.895 { 00:08:07.895 "name": "BaseBdev3", 00:08:07.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.895 "is_configured": false, 00:08:07.895 "data_offset": 0, 00:08:07.895 "data_size": 0 00:08:07.895 } 00:08:07.895 ] 00:08:07.895 }' 00:08:07.895 06:00:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:07.895 06:00:15 -- common/autotest_common.sh@10 -- # set +x 00:08:08.154 06:00:16 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:08.154 [2024-05-13 06:00:16.409335] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:08.154 [2024-05-13 06:00:16.409358] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bbefa00 00:08:08.154 [2024-05-13 06:00:16.409361] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:08.154 [2024-05-13 06:00:16.409393] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bc52ec0 00:08:08.154 [2024-05-13 06:00:16.409464] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bbefa00 00:08:08.154 [2024-05-13 06:00:16.409467] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82bbefa00 00:08:08.154 [2024-05-13 06:00:16.409491] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.154 BaseBdev3 00:08:08.154 06:00:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:08.154 06:00:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:08.154 06:00:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:08.154 06:00:16 -- common/autotest_common.sh@889 -- # local i 00:08:08.154 06:00:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:08.154 06:00:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:08.154 06:00:16 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:08.414 06:00:16 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:08.673 [ 00:08:08.673 { 00:08:08.673 "name": "BaseBdev3", 00:08:08.673 "aliases": [ 00:08:08.673 "121f8a1c-10ee-11ef-ba60-3508ead7bdda" 00:08:08.673 ], 00:08:08.673 "product_name": "Malloc disk", 00:08:08.673 "block_size": 512, 00:08:08.673 "num_blocks": 
65536, 00:08:08.673 "uuid": "121f8a1c-10ee-11ef-ba60-3508ead7bdda", 00:08:08.673 "assigned_rate_limits": { 00:08:08.673 "rw_ios_per_sec": 0, 00:08:08.673 "rw_mbytes_per_sec": 0, 00:08:08.673 "r_mbytes_per_sec": 0, 00:08:08.673 "w_mbytes_per_sec": 0 00:08:08.673 }, 00:08:08.673 "claimed": true, 00:08:08.673 "claim_type": "exclusive_write", 00:08:08.673 "zoned": false, 00:08:08.673 "supported_io_types": { 00:08:08.673 "read": true, 00:08:08.673 "write": true, 00:08:08.673 "unmap": true, 00:08:08.673 "write_zeroes": true, 00:08:08.673 "flush": true, 00:08:08.673 "reset": true, 00:08:08.673 "compare": false, 00:08:08.673 "compare_and_write": false, 00:08:08.673 "abort": true, 00:08:08.673 "nvme_admin": false, 00:08:08.674 "nvme_io": false 00:08:08.674 }, 00:08:08.674 "memory_domains": [ 00:08:08.674 { 00:08:08.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.674 "dma_device_type": 2 00:08:08.674 } 00:08:08.674 ], 00:08:08.674 "driver_specific": {} 00:08:08.674 } 00:08:08.674 ] 00:08:08.674 06:00:16 -- common/autotest_common.sh@895 -- # return 0 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:08.674 "name": "Existed_Raid", 00:08:08.674 "uuid": "121f8ed6-10ee-11ef-ba60-3508ead7bdda", 00:08:08.674 "strip_size_kb": 64, 00:08:08.674 "state": "online", 00:08:08.674 "raid_level": "raid0", 00:08:08.674 "superblock": false, 00:08:08.674 "num_base_bdevs": 3, 00:08:08.674 "num_base_bdevs_discovered": 3, 00:08:08.674 "num_base_bdevs_operational": 3, 00:08:08.674 "base_bdevs_list": [ 00:08:08.674 { 00:08:08.674 "name": "BaseBdev1", 00:08:08.674 "uuid": "1077b96f-10ee-11ef-ba60-3508ead7bdda", 00:08:08.674 "is_configured": true, 00:08:08.674 "data_offset": 0, 00:08:08.674 "data_size": 65536 00:08:08.674 }, 00:08:08.674 { 00:08:08.674 "name": "BaseBdev2", 00:08:08.674 "uuid": "118c5d9e-10ee-11ef-ba60-3508ead7bdda", 00:08:08.674 "is_configured": true, 00:08:08.674 "data_offset": 0, 00:08:08.674 "data_size": 65536 00:08:08.674 }, 00:08:08.674 { 00:08:08.674 "name": "BaseBdev3", 00:08:08.674 "uuid": "121f8a1c-10ee-11ef-ba60-3508ead7bdda", 00:08:08.674 "is_configured": true, 00:08:08.674 "data_offset": 0, 00:08:08.674 "data_size": 65536 00:08:08.674 } 00:08:08.674 ] 00:08:08.674 }' 00:08:08.674 06:00:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:08.674 06:00:16 -- common/autotest_common.sh@10 -- # set +x 
00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:09.241 [2024-05-13 06:00:17.405780] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:09.241 [2024-05-13 06:00:17.405801] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.241 [2024-05-13 06:00:17.405813] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:09.241 06:00:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.508 06:00:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:09.508 "name": "Existed_Raid", 00:08:09.508 "uuid": "121f8ed6-10ee-11ef-ba60-3508ead7bdda", 00:08:09.508 "strip_size_kb": 64, 00:08:09.508 "state": "offline", 00:08:09.508 "raid_level": "raid0", 00:08:09.508 "superblock": false, 00:08:09.508 "num_base_bdevs": 3, 00:08:09.508 "num_base_bdevs_discovered": 2, 00:08:09.508 "num_base_bdevs_operational": 2, 00:08:09.508 "base_bdevs_list": [ 00:08:09.508 { 00:08:09.508 "name": null, 00:08:09.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.508 "is_configured": false, 00:08:09.508 "data_offset": 0, 00:08:09.508 "data_size": 65536 00:08:09.508 }, 00:08:09.508 { 00:08:09.508 "name": "BaseBdev2", 00:08:09.508 "uuid": "118c5d9e-10ee-11ef-ba60-3508ead7bdda", 00:08:09.508 "is_configured": true, 00:08:09.508 "data_offset": 0, 00:08:09.508 "data_size": 65536 00:08:09.508 }, 00:08:09.508 { 00:08:09.508 "name": "BaseBdev3", 00:08:09.508 "uuid": "121f8a1c-10ee-11ef-ba60-3508ead7bdda", 00:08:09.508 "is_configured": true, 00:08:09.508 "data_offset": 0, 00:08:09.508 "data_size": 65536 00:08:09.508 } 00:08:09.508 ] 00:08:09.508 }' 00:08:09.508 06:00:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:09.508 06:00:17 -- common/autotest_common.sh@10 -- # set +x 00:08:09.775 06:00:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:09.775 06:00:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:09.775 06:00:17 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:09.775 06:00:17 -- bdev/bdev_raid.sh@274 -- # jq -r 
'.[0]["name"]' 00:08:09.775 06:00:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:09.775 06:00:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:09.775 06:00:18 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:10.037 [2024-05-13 06:00:18.207050] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:10.037 06:00:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:10.037 06:00:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:10.037 06:00:18 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:10.037 06:00:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:10.299 06:00:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:10.299 06:00:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:10.299 06:00:18 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:10.299 [2024-05-13 06:00:18.571904] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:10.299 [2024-05-13 06:00:18.571943] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bbefa00 name Existed_Raid, state offline 00:08:10.299 06:00:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:10.299 06:00:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:10.299 06:00:18 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:10.299 06:00:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:10.559 06:00:18 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:10.559 06:00:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:10.559 06:00:18 -- bdev/bdev_raid.sh@287 -- # killprocess 49285 00:08:10.560 06:00:18 -- common/autotest_common.sh@926 -- # '[' -z 49285 ']' 00:08:10.560 06:00:18 -- common/autotest_common.sh@930 -- # kill -0 49285 00:08:10.560 06:00:18 -- common/autotest_common.sh@931 -- # uname 00:08:10.560 06:00:18 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:10.560 06:00:18 -- common/autotest_common.sh@934 -- # ps -c -o command 49285 00:08:10.560 06:00:18 -- common/autotest_common.sh@934 -- # tail -1 00:08:10.560 06:00:18 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:10.560 06:00:18 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:10.560 killing process with pid 49285 00:08:10.560 06:00:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49285' 00:08:10.560 06:00:18 -- common/autotest_common.sh@945 -- # kill 49285 00:08:10.560 [2024-05-13 06:00:18.782860] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:10.560 [2024-05-13 06:00:18.782900] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.560 06:00:18 -- common/autotest_common.sh@950 -- # wait 49285 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:10.819 00:08:10.819 real 0m7.361s 00:08:10.819 user 0m12.506s 00:08:10.819 sys 0m1.548s 00:08:10.819 06:00:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.819 06:00:18 -- common/autotest_common.sh@10 -- # set +x 00:08:10.819 ************************************ 00:08:10.819 END TEST raid_state_function_test 00:08:10.819 ************************************ 00:08:10.819 06:00:18 -- 
bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:10.819 06:00:18 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:10.819 06:00:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.819 06:00:18 -- common/autotest_common.sh@10 -- # set +x 00:08:10.819 ************************************ 00:08:10.819 START TEST raid_state_function_test_sb 00:08:10.819 ************************************ 00:08:10.819 06:00:18 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@226 -- # raid_pid=49518 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49518' 00:08:10.819 Process raid pid: 49518 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:10.819 06:00:18 -- bdev/bdev_raid.sh@228 -- # waitforlisten 49518 /var/tmp/spdk-raid.sock 00:08:10.819 06:00:18 -- common/autotest_common.sh@819 -- # '[' -z 49518 ']' 00:08:10.819 06:00:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:10.819 06:00:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:10.819 06:00:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:10.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:08:10.819 06:00:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:10.819 06:00:18 -- common/autotest_common.sh@10 -- # set +x 00:08:10.819 [2024-05-13 06:00:18.991564] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:10.819 [2024-05-13 06:00:18.991913] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:11.388 EAL: TSC is not safe to use in SMP mode 00:08:11.388 EAL: TSC is not invariant 00:08:11.388 [2024-05-13 06:00:19.431816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.388 [2024-05-13 06:00:19.522843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.388 [2024-05-13 06:00:19.523277] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.388 [2024-05-13 06:00:19.523286] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.647 06:00:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:11.647 06:00:19 -- common/autotest_common.sh@852 -- # return 0 00:08:11.647 06:00:19 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:11.906 [2024-05-13 06:00:20.034511] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.906 [2024-05-13 06:00:20.034568] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.906 [2024-05-13 06:00:20.034572] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.906 [2024-05-13 06:00:20.034578] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.906 [2024-05-13 06:00:20.034580] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.906 [2024-05-13 06:00:20.034586] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.906 06:00:20 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.906 06:00:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:11.906 06:00:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:11.906 06:00:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:11.906 06:00:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:11.906 06:00:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:11.906 06:00:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:11.906 06:00:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:11.906 06:00:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:11.906 06:00:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:11.906 06:00:20 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:11.906 06:00:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.165 06:00:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:12.165 "name": "Existed_Raid", 00:08:12.165 "uuid": "1448b5f0-10ee-11ef-ba60-3508ead7bdda", 00:08:12.165 "strip_size_kb": 64, 00:08:12.165 "state": "configuring", 00:08:12.165 "raid_level": "raid0", 00:08:12.165 "superblock": true, 00:08:12.165 "num_base_bdevs": 3, 00:08:12.165 "num_base_bdevs_discovered": 0, 
00:08:12.165 "num_base_bdevs_operational": 3, 00:08:12.165 "base_bdevs_list": [ 00:08:12.165 { 00:08:12.165 "name": "BaseBdev1", 00:08:12.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.165 "is_configured": false, 00:08:12.165 "data_offset": 0, 00:08:12.165 "data_size": 0 00:08:12.165 }, 00:08:12.165 { 00:08:12.165 "name": "BaseBdev2", 00:08:12.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.165 "is_configured": false, 00:08:12.165 "data_offset": 0, 00:08:12.165 "data_size": 0 00:08:12.165 }, 00:08:12.165 { 00:08:12.165 "name": "BaseBdev3", 00:08:12.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.165 "is_configured": false, 00:08:12.165 "data_offset": 0, 00:08:12.165 "data_size": 0 00:08:12.165 } 00:08:12.165 ] 00:08:12.165 }' 00:08:12.165 06:00:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:12.165 06:00:20 -- common/autotest_common.sh@10 -- # set +x 00:08:12.424 06:00:20 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:12.424 [2024-05-13 06:00:20.662776] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:12.424 [2024-05-13 06:00:20.662797] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b03a500 name Existed_Raid, state configuring 00:08:12.424 06:00:20 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:12.683 [2024-05-13 06:00:20.854871] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:12.683 [2024-05-13 06:00:20.854904] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:12.683 [2024-05-13 06:00:20.854923] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:12.684 [2024-05-13 06:00:20.854929] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:12.684 [2024-05-13 06:00:20.854932] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:12.684 [2024-05-13 06:00:20.854937] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:12.684 06:00:20 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:12.943 [2024-05-13 06:00:21.039735] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.943 BaseBdev1 00:08:12.943 06:00:21 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:12.943 06:00:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:12.943 06:00:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:12.943 06:00:21 -- common/autotest_common.sh@889 -- # local i 00:08:12.943 06:00:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:12.943 06:00:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:12.943 06:00:21 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:12.943 06:00:21 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:13.202 [ 00:08:13.202 { 00:08:13.202 "name": "BaseBdev1", 00:08:13.202 "aliases": [ 00:08:13.202 
"14e1fa5a-10ee-11ef-ba60-3508ead7bdda" 00:08:13.202 ], 00:08:13.202 "product_name": "Malloc disk", 00:08:13.202 "block_size": 512, 00:08:13.202 "num_blocks": 65536, 00:08:13.202 "uuid": "14e1fa5a-10ee-11ef-ba60-3508ead7bdda", 00:08:13.202 "assigned_rate_limits": { 00:08:13.202 "rw_ios_per_sec": 0, 00:08:13.202 "rw_mbytes_per_sec": 0, 00:08:13.202 "r_mbytes_per_sec": 0, 00:08:13.202 "w_mbytes_per_sec": 0 00:08:13.202 }, 00:08:13.202 "claimed": true, 00:08:13.202 "claim_type": "exclusive_write", 00:08:13.202 "zoned": false, 00:08:13.202 "supported_io_types": { 00:08:13.202 "read": true, 00:08:13.202 "write": true, 00:08:13.202 "unmap": true, 00:08:13.203 "write_zeroes": true, 00:08:13.203 "flush": true, 00:08:13.203 "reset": true, 00:08:13.203 "compare": false, 00:08:13.203 "compare_and_write": false, 00:08:13.203 "abort": true, 00:08:13.203 "nvme_admin": false, 00:08:13.203 "nvme_io": false 00:08:13.203 }, 00:08:13.203 "memory_domains": [ 00:08:13.203 { 00:08:13.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.203 "dma_device_type": 2 00:08:13.203 } 00:08:13.203 ], 00:08:13.203 "driver_specific": {} 00:08:13.203 } 00:08:13.203 ] 00:08:13.203 06:00:21 -- common/autotest_common.sh@895 -- # return 0 00:08:13.203 06:00:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.203 06:00:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:13.203 06:00:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:13.203 06:00:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:13.203 06:00:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:13.203 06:00:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:13.203 06:00:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:13.203 06:00:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:13.203 06:00:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:13.203 06:00:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:13.203 06:00:21 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:13.203 06:00:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.468 06:00:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:13.468 "name": "Existed_Raid", 00:08:13.468 "uuid": "14c5e361-10ee-11ef-ba60-3508ead7bdda", 00:08:13.468 "strip_size_kb": 64, 00:08:13.468 "state": "configuring", 00:08:13.468 "raid_level": "raid0", 00:08:13.468 "superblock": true, 00:08:13.468 "num_base_bdevs": 3, 00:08:13.468 "num_base_bdevs_discovered": 1, 00:08:13.468 "num_base_bdevs_operational": 3, 00:08:13.468 "base_bdevs_list": [ 00:08:13.468 { 00:08:13.468 "name": "BaseBdev1", 00:08:13.468 "uuid": "14e1fa5a-10ee-11ef-ba60-3508ead7bdda", 00:08:13.468 "is_configured": true, 00:08:13.468 "data_offset": 2048, 00:08:13.468 "data_size": 63488 00:08:13.468 }, 00:08:13.468 { 00:08:13.468 "name": "BaseBdev2", 00:08:13.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.468 "is_configured": false, 00:08:13.468 "data_offset": 0, 00:08:13.468 "data_size": 0 00:08:13.468 }, 00:08:13.468 { 00:08:13.468 "name": "BaseBdev3", 00:08:13.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.468 "is_configured": false, 00:08:13.468 "data_offset": 0, 00:08:13.468 "data_size": 0 00:08:13.468 } 00:08:13.468 ] 00:08:13.468 }' 00:08:13.468 06:00:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:13.468 06:00:21 -- 
common/autotest_common.sh@10 -- # set +x 00:08:13.730 06:00:21 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:13.730 [2024-05-13 06:00:22.003422] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:13.730 [2024-05-13 06:00:22.003457] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b03a500 name Existed_Raid, state configuring 00:08:13.730 06:00:22 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:08:13.730 06:00:22 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:13.988 06:00:22 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:14.246 BaseBdev1 00:08:14.246 06:00:22 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:08:14.246 06:00:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:14.246 06:00:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:14.246 06:00:22 -- common/autotest_common.sh@889 -- # local i 00:08:14.246 06:00:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:14.246 06:00:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:14.246 06:00:22 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:14.505 06:00:22 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:14.505 [ 00:08:14.505 { 00:08:14.505 "name": "BaseBdev1", 00:08:14.505 "aliases": [ 00:08:14.505 "15ac44ea-10ee-11ef-ba60-3508ead7bdda" 00:08:14.505 ], 00:08:14.505 "product_name": "Malloc disk", 00:08:14.505 "block_size": 512, 00:08:14.505 "num_blocks": 65536, 00:08:14.505 "uuid": "15ac44ea-10ee-11ef-ba60-3508ead7bdda", 00:08:14.505 "assigned_rate_limits": { 00:08:14.505 "rw_ios_per_sec": 0, 00:08:14.505 "rw_mbytes_per_sec": 0, 00:08:14.505 "r_mbytes_per_sec": 0, 00:08:14.505 "w_mbytes_per_sec": 0 00:08:14.505 }, 00:08:14.505 "claimed": false, 00:08:14.505 "zoned": false, 00:08:14.505 "supported_io_types": { 00:08:14.505 "read": true, 00:08:14.505 "write": true, 00:08:14.505 "unmap": true, 00:08:14.505 "write_zeroes": true, 00:08:14.505 "flush": true, 00:08:14.505 "reset": true, 00:08:14.505 "compare": false, 00:08:14.505 "compare_and_write": false, 00:08:14.505 "abort": true, 00:08:14.505 "nvme_admin": false, 00:08:14.505 "nvme_io": false 00:08:14.505 }, 00:08:14.505 "memory_domains": [ 00:08:14.505 { 00:08:14.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.505 "dma_device_type": 2 00:08:14.505 } 00:08:14.505 ], 00:08:14.505 "driver_specific": {} 00:08:14.505 } 00:08:14.505 ] 00:08:14.505 06:00:22 -- common/autotest_common.sh@895 -- # return 0 00:08:14.505 06:00:22 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:14.764 [2024-05-13 06:00:22.916949] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.764 [2024-05-13 06:00:22.917445] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.764 [2024-05-13 06:00:22.917485] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
00:08:14.764 [2024-05-13 06:00:22.917489] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:14.764 [2024-05-13 06:00:22.917496] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:14.764 06:00:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:14.764 06:00:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:14.764 06:00:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.764 06:00:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:14.764 06:00:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:14.764 06:00:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:14.764 06:00:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:14.765 06:00:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:14.765 06:00:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:14.765 06:00:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:14.765 06:00:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:14.765 06:00:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:14.765 06:00:22 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:14.765 06:00:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.024 06:00:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:15.024 "name": "Existed_Raid", 00:08:15.024 "uuid": "1600890e-10ee-11ef-ba60-3508ead7bdda", 00:08:15.024 "strip_size_kb": 64, 00:08:15.024 "state": "configuring", 00:08:15.024 "raid_level": "raid0", 00:08:15.024 "superblock": true, 00:08:15.024 "num_base_bdevs": 3, 00:08:15.024 "num_base_bdevs_discovered": 1, 00:08:15.024 "num_base_bdevs_operational": 3, 00:08:15.024 "base_bdevs_list": [ 00:08:15.024 { 00:08:15.024 "name": "BaseBdev1", 00:08:15.024 "uuid": "15ac44ea-10ee-11ef-ba60-3508ead7bdda", 00:08:15.024 "is_configured": true, 00:08:15.024 "data_offset": 2048, 00:08:15.024 "data_size": 63488 00:08:15.024 }, 00:08:15.024 { 00:08:15.024 "name": "BaseBdev2", 00:08:15.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.024 "is_configured": false, 00:08:15.024 "data_offset": 0, 00:08:15.024 "data_size": 0 00:08:15.024 }, 00:08:15.024 { 00:08:15.024 "name": "BaseBdev3", 00:08:15.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.024 "is_configured": false, 00:08:15.024 "data_offset": 0, 00:08:15.024 "data_size": 0 00:08:15.024 } 00:08:15.024 ] 00:08:15.024 }' 00:08:15.024 06:00:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:15.024 06:00:23 -- common/autotest_common.sh@10 -- # set +x 00:08:15.283 06:00:23 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:15.283 [2024-05-13 06:00:23.513375] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.283 BaseBdev2 00:08:15.283 06:00:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:15.283 06:00:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:08:15.283 06:00:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:15.283 06:00:23 -- common/autotest_common.sh@889 -- # local i 00:08:15.283 06:00:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:15.283 06:00:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:15.283 
06:00:23 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:15.544 06:00:23 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:15.804 [ 00:08:15.804 { 00:08:15.804 "name": "BaseBdev2", 00:08:15.804 "aliases": [ 00:08:15.804 "165b85b2-10ee-11ef-ba60-3508ead7bdda" 00:08:15.804 ], 00:08:15.804 "product_name": "Malloc disk", 00:08:15.804 "block_size": 512, 00:08:15.804 "num_blocks": 65536, 00:08:15.804 "uuid": "165b85b2-10ee-11ef-ba60-3508ead7bdda", 00:08:15.804 "assigned_rate_limits": { 00:08:15.804 "rw_ios_per_sec": 0, 00:08:15.804 "rw_mbytes_per_sec": 0, 00:08:15.804 "r_mbytes_per_sec": 0, 00:08:15.804 "w_mbytes_per_sec": 0 00:08:15.804 }, 00:08:15.804 "claimed": true, 00:08:15.804 "claim_type": "exclusive_write", 00:08:15.804 "zoned": false, 00:08:15.804 "supported_io_types": { 00:08:15.804 "read": true, 00:08:15.804 "write": true, 00:08:15.804 "unmap": true, 00:08:15.804 "write_zeroes": true, 00:08:15.804 "flush": true, 00:08:15.804 "reset": true, 00:08:15.804 "compare": false, 00:08:15.804 "compare_and_write": false, 00:08:15.804 "abort": true, 00:08:15.804 "nvme_admin": false, 00:08:15.804 "nvme_io": false 00:08:15.804 }, 00:08:15.804 "memory_domains": [ 00:08:15.804 { 00:08:15.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.804 "dma_device_type": 2 00:08:15.804 } 00:08:15.804 ], 00:08:15.804 "driver_specific": {} 00:08:15.804 } 00:08:15.804 ] 00:08:15.804 06:00:23 -- common/autotest_common.sh@895 -- # return 0 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:15.804 06:00:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.804 06:00:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:15.804 "name": "Existed_Raid", 00:08:15.804 "uuid": "1600890e-10ee-11ef-ba60-3508ead7bdda", 00:08:15.804 "strip_size_kb": 64, 00:08:15.804 "state": "configuring", 00:08:15.804 "raid_level": "raid0", 00:08:15.804 "superblock": true, 00:08:15.804 "num_base_bdevs": 3, 00:08:15.804 "num_base_bdevs_discovered": 2, 00:08:15.804 "num_base_bdevs_operational": 3, 00:08:15.804 "base_bdevs_list": [ 00:08:15.804 { 00:08:15.804 "name": "BaseBdev1", 00:08:15.804 "uuid": "15ac44ea-10ee-11ef-ba60-3508ead7bdda", 00:08:15.804 "is_configured": true, 00:08:15.804 "data_offset": 2048, 00:08:15.804 "data_size": 63488 00:08:15.804 }, 00:08:15.804 { 
00:08:15.804 "name": "BaseBdev2", 00:08:15.804 "uuid": "165b85b2-10ee-11ef-ba60-3508ead7bdda", 00:08:15.804 "is_configured": true, 00:08:15.804 "data_offset": 2048, 00:08:15.804 "data_size": 63488 00:08:15.804 }, 00:08:15.804 { 00:08:15.804 "name": "BaseBdev3", 00:08:15.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.804 "is_configured": false, 00:08:15.804 "data_offset": 0, 00:08:15.804 "data_size": 0 00:08:15.804 } 00:08:15.804 ] 00:08:15.804 }' 00:08:15.804 06:00:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:15.804 06:00:24 -- common/autotest_common.sh@10 -- # set +x 00:08:16.066 06:00:24 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:16.329 [2024-05-13 06:00:24.489796] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:16.329 [2024-05-13 06:00:24.489868] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b03aa00 00:08:16.329 [2024-05-13 06:00:24.489872] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:16.329 [2024-05-13 06:00:24.489889] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b09dec0 00:08:16.329 [2024-05-13 06:00:24.489928] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b03aa00 00:08:16.329 [2024-05-13 06:00:24.489931] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b03aa00 00:08:16.329 [2024-05-13 06:00:24.489945] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.329 BaseBdev3 00:08:16.329 06:00:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:16.329 06:00:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:16.329 06:00:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:16.329 06:00:24 -- common/autotest_common.sh@889 -- # local i 00:08:16.329 06:00:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:16.329 06:00:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:16.329 06:00:24 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:16.590 06:00:24 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:16.590 [ 00:08:16.590 { 00:08:16.590 "name": "BaseBdev3", 00:08:16.590 "aliases": [ 00:08:16.590 "16f08365-10ee-11ef-ba60-3508ead7bdda" 00:08:16.590 ], 00:08:16.590 "product_name": "Malloc disk", 00:08:16.590 "block_size": 512, 00:08:16.590 "num_blocks": 65536, 00:08:16.590 "uuid": "16f08365-10ee-11ef-ba60-3508ead7bdda", 00:08:16.590 "assigned_rate_limits": { 00:08:16.590 "rw_ios_per_sec": 0, 00:08:16.590 "rw_mbytes_per_sec": 0, 00:08:16.590 "r_mbytes_per_sec": 0, 00:08:16.590 "w_mbytes_per_sec": 0 00:08:16.590 }, 00:08:16.590 "claimed": true, 00:08:16.590 "claim_type": "exclusive_write", 00:08:16.590 "zoned": false, 00:08:16.590 "supported_io_types": { 00:08:16.590 "read": true, 00:08:16.590 "write": true, 00:08:16.590 "unmap": true, 00:08:16.590 "write_zeroes": true, 00:08:16.590 "flush": true, 00:08:16.590 "reset": true, 00:08:16.590 "compare": false, 00:08:16.590 "compare_and_write": false, 00:08:16.590 "abort": true, 00:08:16.590 "nvme_admin": false, 00:08:16.590 "nvme_io": false 00:08:16.590 }, 00:08:16.590 "memory_domains": [ 00:08:16.590 { 00:08:16.590 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.590 "dma_device_type": 2 00:08:16.590 } 00:08:16.590 ], 00:08:16.590 "driver_specific": {} 00:08:16.590 } 00:08:16.590 ] 00:08:16.590 06:00:24 -- common/autotest_common.sh@895 -- # return 0 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:16.590 06:00:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.849 06:00:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:16.849 "name": "Existed_Raid", 00:08:16.849 "uuid": "1600890e-10ee-11ef-ba60-3508ead7bdda", 00:08:16.849 "strip_size_kb": 64, 00:08:16.849 "state": "online", 00:08:16.849 "raid_level": "raid0", 00:08:16.849 "superblock": true, 00:08:16.849 "num_base_bdevs": 3, 00:08:16.849 "num_base_bdevs_discovered": 3, 00:08:16.849 "num_base_bdevs_operational": 3, 00:08:16.849 "base_bdevs_list": [ 00:08:16.849 { 00:08:16.849 "name": "BaseBdev1", 00:08:16.849 "uuid": "15ac44ea-10ee-11ef-ba60-3508ead7bdda", 00:08:16.849 "is_configured": true, 00:08:16.849 "data_offset": 2048, 00:08:16.849 "data_size": 63488 00:08:16.849 }, 00:08:16.849 { 00:08:16.849 "name": "BaseBdev2", 00:08:16.849 "uuid": "165b85b2-10ee-11ef-ba60-3508ead7bdda", 00:08:16.849 "is_configured": true, 00:08:16.849 "data_offset": 2048, 00:08:16.849 "data_size": 63488 00:08:16.849 }, 00:08:16.849 { 00:08:16.849 "name": "BaseBdev3", 00:08:16.849 "uuid": "16f08365-10ee-11ef-ba60-3508ead7bdda", 00:08:16.849 "is_configured": true, 00:08:16.849 "data_offset": 2048, 00:08:16.849 "data_size": 63488 00:08:16.849 } 00:08:16.849 ] 00:08:16.849 }' 00:08:16.849 06:00:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:16.849 06:00:25 -- common/autotest_common.sh@10 -- # set +x 00:08:17.107 06:00:25 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:17.365 [2024-05-13 06:00:25.470083] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:17.365 [2024-05-13 06:00:25.470108] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.365 [2024-05-13 06:00:25.470121] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:17.365 06:00:25 -- 
bdev/bdev_raid.sh@265 -- # expected_state=offline 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:17.365 06:00:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.623 06:00:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:17.623 "name": "Existed_Raid", 00:08:17.623 "uuid": "1600890e-10ee-11ef-ba60-3508ead7bdda", 00:08:17.623 "strip_size_kb": 64, 00:08:17.623 "state": "offline", 00:08:17.623 "raid_level": "raid0", 00:08:17.623 "superblock": true, 00:08:17.623 "num_base_bdevs": 3, 00:08:17.623 "num_base_bdevs_discovered": 2, 00:08:17.623 "num_base_bdevs_operational": 2, 00:08:17.623 "base_bdevs_list": [ 00:08:17.623 { 00:08:17.623 "name": null, 00:08:17.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.623 "is_configured": false, 00:08:17.623 "data_offset": 2048, 00:08:17.623 "data_size": 63488 00:08:17.623 }, 00:08:17.623 { 00:08:17.623 "name": "BaseBdev2", 00:08:17.623 "uuid": "165b85b2-10ee-11ef-ba60-3508ead7bdda", 00:08:17.623 "is_configured": true, 00:08:17.623 "data_offset": 2048, 00:08:17.623 "data_size": 63488 00:08:17.623 }, 00:08:17.623 { 00:08:17.623 "name": "BaseBdev3", 00:08:17.623 "uuid": "16f08365-10ee-11ef-ba60-3508ead7bdda", 00:08:17.623 "is_configured": true, 00:08:17.623 "data_offset": 2048, 00:08:17.623 "data_size": 63488 00:08:17.623 } 00:08:17.623 ] 00:08:17.623 }' 00:08:17.623 06:00:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:17.623 06:00:25 -- common/autotest_common.sh@10 -- # set +x 00:08:17.623 06:00:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:17.623 06:00:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:17.623 06:00:25 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:17.623 06:00:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:17.880 06:00:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:17.880 06:00:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:17.880 06:00:26 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:18.137 [2024-05-13 06:00:26.287458] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.137 06:00:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:18.137 06:00:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:18.137 06:00:26 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:18.137 06:00:26 -- bdev/bdev_raid.sh@274 -- # jq -r 
'.[0]["name"]' 00:08:18.397 06:00:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:18.397 06:00:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:18.397 06:00:26 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:18.397 [2024-05-13 06:00:26.652590] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:18.397 [2024-05-13 06:00:26.652605] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b03aa00 name Existed_Raid, state offline 00:08:18.397 06:00:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:18.397 06:00:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:18.397 06:00:26 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:18.397 06:00:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:18.656 06:00:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:18.656 06:00:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:18.656 06:00:26 -- bdev/bdev_raid.sh@287 -- # killprocess 49518 00:08:18.656 06:00:26 -- common/autotest_common.sh@926 -- # '[' -z 49518 ']' 00:08:18.656 06:00:26 -- common/autotest_common.sh@930 -- # kill -0 49518 00:08:18.656 06:00:26 -- common/autotest_common.sh@931 -- # uname 00:08:18.656 06:00:26 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:18.656 06:00:26 -- common/autotest_common.sh@934 -- # tail -1 00:08:18.656 06:00:26 -- common/autotest_common.sh@934 -- # ps -c -o command 49518 00:08:18.657 06:00:26 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:18.657 06:00:26 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:18.657 killing process with pid 49518 00:08:18.657 06:00:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49518' 00:08:18.657 06:00:26 -- common/autotest_common.sh@945 -- # kill 49518 00:08:18.657 [2024-05-13 06:00:26.863936] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.657 06:00:26 -- common/autotest_common.sh@950 -- # wait 49518 00:08:18.657 [2024-05-13 06:00:26.863983] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:18.917 00:08:18.917 real 0m8.109s 00:08:18.917 user 0m13.824s 00:08:18.917 sys 0m1.622s 00:08:18.917 06:00:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.917 06:00:27 -- common/autotest_common.sh@10 -- # set +x 00:08:18.917 ************************************ 00:08:18.917 END TEST raid_state_function_test_sb 00:08:18.917 ************************************ 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:18.917 06:00:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:18.917 06:00:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.917 06:00:27 -- common/autotest_common.sh@10 -- # set +x 00:08:18.917 ************************************ 00:08:18.917 START TEST raid_superblock_test 00:08:18.917 ************************************ 00:08:18.917 06:00:27 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:08:18.917 06:00:27 -- 
bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@357 -- # raid_pid=49754 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:18.917 06:00:27 -- bdev/bdev_raid.sh@358 -- # waitforlisten 49754 /var/tmp/spdk-raid.sock 00:08:18.917 06:00:27 -- common/autotest_common.sh@819 -- # '[' -z 49754 ']' 00:08:18.917 06:00:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:18.917 06:00:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:18.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:18.917 06:00:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:18.917 06:00:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:18.917 06:00:27 -- common/autotest_common.sh@10 -- # set +x 00:08:18.917 [2024-05-13 06:00:27.145373] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:18.917 [2024-05-13 06:00:27.145610] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:19.484 EAL: TSC is not safe to use in SMP mode 00:08:19.484 EAL: TSC is not invariant 00:08:19.484 [2024-05-13 06:00:27.565567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.484 [2024-05-13 06:00:27.677440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.484 [2024-05-13 06:00:27.677872] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.484 [2024-05-13 06:00:27.677883] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.742 06:00:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:19.742 06:00:28 -- common/autotest_common.sh@852 -- # return 0 00:08:19.742 06:00:28 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:08:19.742 06:00:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:19.742 06:00:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:08:19.742 06:00:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:08:19.742 06:00:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:19.742 06:00:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:19.742 06:00:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:19.742 06:00:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:19.742 06:00:28 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:19.999 malloc1 00:08:19.999 06:00:28 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:20.257 [2024-05-13 06:00:28.363598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:20.257 [2024-05-13 06:00:28.363652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.257 [2024-05-13 06:00:28.364181] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc79780 00:08:20.257 [2024-05-13 06:00:28.364203] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.257 [2024-05-13 06:00:28.364915] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.257 [2024-05-13 06:00:28.364944] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:20.257 pt1 00:08:20.257 06:00:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:20.257 06:00:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:20.257 06:00:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:08:20.257 06:00:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:08:20.257 06:00:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:20.257 06:00:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:20.257 06:00:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:20.257 06:00:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:20.257 06:00:28 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:20.258 malloc2 00:08:20.258 06:00:28 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.515 [2024-05-13 06:00:28.703731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.515 [2024-05-13 06:00:28.703769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.515 [2024-05-13 06:00:28.703795] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc79c80 00:08:20.515 [2024-05-13 06:00:28.703801] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.515 [2024-05-13 06:00:28.704207] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.515 [2024-05-13 06:00:28.704232] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.515 pt2 00:08:20.515 06:00:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:20.515 06:00:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:20.515 06:00:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:08:20.515 06:00:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:08:20.515 06:00:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:20.515 06:00:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:20.515 06:00:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:20.515 06:00:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:20.515 06:00:28 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:08:20.772 malloc3 00:08:20.772 06:00:28 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:20.772 [2024-05-13 06:00:29.055870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:20.772 [2024-05-13 06:00:29.055923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.773 [2024-05-13 06:00:29.055951] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc7a180 00:08:20.773 [2024-05-13 06:00:29.055957] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.773 [2024-05-13 06:00:29.056629] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.773 [2024-05-13 06:00:29.056655] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:20.773 pt3 00:08:20.773 06:00:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:20.773 06:00:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:20.773 06:00:29 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:08:21.031 [2024-05-13 06:00:29.231940] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:21.031 [2024-05-13 06:00:29.232368] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:21.031 [2024-05-13 06:00:29.232385] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:21.031 [2024-05-13 06:00:29.232445] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bc7a400 00:08:21.031 [2024-05-13 06:00:29.232451] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:21.031 [2024-05-13 06:00:29.232478] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bcdce20 00:08:21.031 [2024-05-13 06:00:29.232534] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bc7a400 00:08:21.031 [2024-05-13 06:00:29.232537] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bc7a400 00:08:21.031 [2024-05-13 06:00:29.232554] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.031 06:00:29 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:21.031 06:00:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:21.031 06:00:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:21.031 06:00:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:21.031 06:00:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:21.031 06:00:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:21.031 06:00:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:21.031 06:00:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:21.031 06:00:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:21.031 06:00:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:21.031 06:00:29 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:21.031 06:00:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.290 06:00:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:21.290 "name": "raid_bdev1", 00:08:21.290 "uuid": "19c4207c-10ee-11ef-ba60-3508ead7bdda", 00:08:21.290 "strip_size_kb": 64, 00:08:21.290 "state": "online", 00:08:21.290 "raid_level": "raid0", 00:08:21.290 "superblock": true, 00:08:21.290 "num_base_bdevs": 3, 00:08:21.290 "num_base_bdevs_discovered": 3, 00:08:21.290 "num_base_bdevs_operational": 3, 00:08:21.290 "base_bdevs_list": [ 00:08:21.290 { 00:08:21.290 "name": "pt1", 00:08:21.290 "uuid": "86ea1c78-f5d4-0256-bdf0-4822d699b213", 00:08:21.290 "is_configured": true, 00:08:21.290 "data_offset": 2048, 00:08:21.290 "data_size": 63488 00:08:21.290 }, 00:08:21.290 { 00:08:21.290 "name": "pt2", 00:08:21.290 "uuid": "203c3358-54c7-1258-a531-7655c3362e1c", 00:08:21.290 "is_configured": true, 00:08:21.290 "data_offset": 2048, 00:08:21.290 "data_size": 63488 00:08:21.290 }, 00:08:21.290 { 00:08:21.290 "name": "pt3", 00:08:21.290 "uuid": "f1fc5cb6-aaef-e85f-b7ab-76758165fe1a", 00:08:21.290 "is_configured": true, 00:08:21.290 "data_offset": 2048, 00:08:21.290 "data_size": 63488 00:08:21.290 } 00:08:21.290 ] 00:08:21.290 }' 00:08:21.290 06:00:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:21.291 06:00:29 -- common/autotest_common.sh@10 -- # set +x 00:08:21.550 06:00:29 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:21.550 06:00:29 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:08:21.809 [2024-05-13 06:00:29.872255] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.809 06:00:29 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=19c4207c-10ee-11ef-ba60-3508ead7bdda 00:08:21.809 06:00:29 -- bdev/bdev_raid.sh@380 -- # '[' -z 19c4207c-10ee-11ef-ba60-3508ead7bdda ']' 00:08:21.809 06:00:29 -- bdev/bdev_raid.sh@385 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:21.809 [2024-05-13 06:00:30.060282] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.809 [2024-05-13 06:00:30.060294] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.809 [2024-05-13 06:00:30.060332] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.809 [2024-05-13 06:00:30.060344] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.809 [2024-05-13 06:00:30.060347] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bc7a400 name raid_bdev1, state offline 00:08:21.809 06:00:30 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:21.809 06:00:30 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:08:22.068 06:00:30 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:08:22.068 06:00:30 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:08:22.068 06:00:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:22.068 06:00:30 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:22.325 06:00:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:22.325 06:00:30 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:22.325 06:00:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:22.325 06:00:30 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:08:22.581 06:00:30 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:22.581 06:00:30 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:22.839 06:00:30 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:08:22.839 06:00:30 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:22.839 06:00:30 -- common/autotest_common.sh@640 -- # local es=0 00:08:22.839 06:00:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:22.839 06:00:30 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.839 06:00:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:22.839 06:00:30 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.839 06:00:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:22.839 06:00:30 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.839 06:00:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:22.839 06:00:30 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.839 06:00:30 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:22.839 06:00:30 -- common/autotest_common.sh@643 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:22.839 [2024-05-13 06:00:31.140696] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:22.839 [2024-05-13 06:00:31.141435] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:22.839 [2024-05-13 06:00:31.141453] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:22.839 [2024-05-13 06:00:31.141466] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:08:22.839 [2024-05-13 06:00:31.141499] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:08:22.839 [2024-05-13 06:00:31.141508] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:08:22.839 [2024-05-13 06:00:31.141515] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.839 [2024-05-13 06:00:31.141519] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bc7a180 name raid_bdev1, state configuring 00:08:23.118 request: 00:08:23.118 { 00:08:23.118 "name": "raid_bdev1", 00:08:23.118 "raid_level": "raid0", 00:08:23.118 "base_bdevs": [ 00:08:23.118 "malloc1", 00:08:23.118 "malloc2", 00:08:23.118 "malloc3" 00:08:23.118 ], 00:08:23.118 "superblock": false, 00:08:23.118 "strip_size_kb": 64, 00:08:23.118 "method": "bdev_raid_create", 00:08:23.118 "req_id": 1 00:08:23.118 } 00:08:23.118 Got JSON-RPC error response 00:08:23.118 response: 00:08:23.118 { 00:08:23.118 "code": -17, 00:08:23.118 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:23.118 } 00:08:23.118 06:00:31 -- common/autotest_common.sh@643 -- # es=1 00:08:23.118 06:00:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:23.118 06:00:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:23.118 06:00:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:23.118 06:00:31 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:23.118 06:00:31 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:08:23.118 06:00:31 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:08:23.118 06:00:31 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:08:23.118 06:00:31 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:23.382 [2024-05-13 06:00:31.496815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:23.382 [2024-05-13 06:00:31.496854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.382 [2024-05-13 06:00:31.496884] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc79c80 00:08:23.382 [2024-05-13 06:00:31.496890] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.382 [2024-05-13 06:00:31.497560] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.382 [2024-05-13 06:00:31.497595] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:23.382 [2024-05-13 06:00:31.497613] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:08:23.382 [2024-05-13 
06:00:31.497622] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:23.382 pt1 00:08:23.382 06:00:31 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:23.382 06:00:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:23.382 06:00:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:23.382 06:00:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:23.382 06:00:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:23.382 06:00:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:23.382 06:00:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:23.382 06:00:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:23.382 06:00:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:23.382 06:00:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:23.382 06:00:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.382 06:00:31 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:23.639 06:00:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:23.639 "name": "raid_bdev1", 00:08:23.639 "uuid": "19c4207c-10ee-11ef-ba60-3508ead7bdda", 00:08:23.639 "strip_size_kb": 64, 00:08:23.639 "state": "configuring", 00:08:23.639 "raid_level": "raid0", 00:08:23.639 "superblock": true, 00:08:23.639 "num_base_bdevs": 3, 00:08:23.639 "num_base_bdevs_discovered": 1, 00:08:23.639 "num_base_bdevs_operational": 3, 00:08:23.639 "base_bdevs_list": [ 00:08:23.639 { 00:08:23.639 "name": "pt1", 00:08:23.639 "uuid": "86ea1c78-f5d4-0256-bdf0-4822d699b213", 00:08:23.639 "is_configured": true, 00:08:23.639 "data_offset": 2048, 00:08:23.639 "data_size": 63488 00:08:23.639 }, 00:08:23.639 { 00:08:23.639 "name": null, 00:08:23.639 "uuid": "203c3358-54c7-1258-a531-7655c3362e1c", 00:08:23.639 "is_configured": false, 00:08:23.639 "data_offset": 2048, 00:08:23.639 "data_size": 63488 00:08:23.639 }, 00:08:23.639 { 00:08:23.639 "name": null, 00:08:23.640 "uuid": "f1fc5cb6-aaef-e85f-b7ab-76758165fe1a", 00:08:23.640 "is_configured": false, 00:08:23.640 "data_offset": 2048, 00:08:23.640 "data_size": 63488 00:08:23.640 } 00:08:23.640 ] 00:08:23.640 }' 00:08:23.640 06:00:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:23.640 06:00:31 -- common/autotest_common.sh@10 -- # set +x 00:08:23.640 06:00:31 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:08:23.640 06:00:31 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:23.898 [2024-05-13 06:00:32.093034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:23.898 [2024-05-13 06:00:32.093086] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.898 [2024-05-13 06:00:32.093115] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc7a680 00:08:23.898 [2024-05-13 06:00:32.093121] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.898 [2024-05-13 06:00:32.093240] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.898 [2024-05-13 06:00:32.093246] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:23.899 [2024-05-13 06:00:32.093264] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid 
superblock found on bdev pt2 00:08:23.899 [2024-05-13 06:00:32.093270] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:23.899 pt2 00:08:23.899 06:00:32 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:24.157 [2024-05-13 06:00:32.245084] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:24.157 "name": "raid_bdev1", 00:08:24.157 "uuid": "19c4207c-10ee-11ef-ba60-3508ead7bdda", 00:08:24.157 "strip_size_kb": 64, 00:08:24.157 "state": "configuring", 00:08:24.157 "raid_level": "raid0", 00:08:24.157 "superblock": true, 00:08:24.157 "num_base_bdevs": 3, 00:08:24.157 "num_base_bdevs_discovered": 1, 00:08:24.157 "num_base_bdevs_operational": 3, 00:08:24.157 "base_bdevs_list": [ 00:08:24.157 { 00:08:24.157 "name": "pt1", 00:08:24.157 "uuid": "86ea1c78-f5d4-0256-bdf0-4822d699b213", 00:08:24.157 "is_configured": true, 00:08:24.157 "data_offset": 2048, 00:08:24.157 "data_size": 63488 00:08:24.157 }, 00:08:24.157 { 00:08:24.157 "name": null, 00:08:24.157 "uuid": "203c3358-54c7-1258-a531-7655c3362e1c", 00:08:24.157 "is_configured": false, 00:08:24.157 "data_offset": 2048, 00:08:24.157 "data_size": 63488 00:08:24.157 }, 00:08:24.157 { 00:08:24.157 "name": null, 00:08:24.157 "uuid": "f1fc5cb6-aaef-e85f-b7ab-76758165fe1a", 00:08:24.157 "is_configured": false, 00:08:24.157 "data_offset": 2048, 00:08:24.157 "data_size": 63488 00:08:24.157 } 00:08:24.157 ] 00:08:24.157 }' 00:08:24.157 06:00:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:24.157 06:00:32 -- common/autotest_common.sh@10 -- # set +x 00:08:24.417 06:00:32 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:08:24.417 06:00:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:24.417 06:00:32 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:24.675 [2024-05-13 06:00:32.873305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:24.675 [2024-05-13 06:00:32.873354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.675 [2024-05-13 06:00:32.873383] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc7a680 00:08:24.675 [2024-05-13 06:00:32.873389] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.675 [2024-05-13 06:00:32.873506] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.675 [2024-05-13 06:00:32.873513] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:24.675 [2024-05-13 06:00:32.873529] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:24.675 [2024-05-13 06:00:32.873534] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:24.675 pt2 00:08:24.675 06:00:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:24.675 06:00:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:24.675 06:00:32 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:24.933 [2024-05-13 06:00:33.065370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:24.933 [2024-05-13 06:00:33.065399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.933 [2024-05-13 06:00:33.065419] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82bc7a400 00:08:24.933 [2024-05-13 06:00:33.065425] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.933 [2024-05-13 06:00:33.065511] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.933 [2024-05-13 06:00:33.065517] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:24.933 [2024-05-13 06:00:33.065532] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:08:24.933 [2024-05-13 06:00:33.065537] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:24.933 [2024-05-13 06:00:33.065558] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bc79780 00:08:24.933 [2024-05-13 06:00:33.065561] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:24.933 [2024-05-13 06:00:33.065578] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bcdce20 00:08:24.933 [2024-05-13 06:00:33.065621] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bc79780 00:08:24.933 [2024-05-13 06:00:33.065624] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82bc79780 00:08:24.933 [2024-05-13 06:00:33.065639] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.933 pt3 00:08:24.933 06:00:33 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:24.933 06:00:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:24.933 06:00:33 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:24.933 06:00:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:24.933 06:00:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:24.933 06:00:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:08:24.933 06:00:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:24.933 06:00:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:24.933 06:00:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:24.933 06:00:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:24.933 06:00:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:24.933 06:00:33 
-- bdev/bdev_raid.sh@125 -- # local tmp 00:08:24.933 06:00:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.933 06:00:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.191 06:00:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:25.191 "name": "raid_bdev1", 00:08:25.191 "uuid": "19c4207c-10ee-11ef-ba60-3508ead7bdda", 00:08:25.191 "strip_size_kb": 64, 00:08:25.191 "state": "online", 00:08:25.191 "raid_level": "raid0", 00:08:25.191 "superblock": true, 00:08:25.191 "num_base_bdevs": 3, 00:08:25.191 "num_base_bdevs_discovered": 3, 00:08:25.191 "num_base_bdevs_operational": 3, 00:08:25.191 "base_bdevs_list": [ 00:08:25.191 { 00:08:25.191 "name": "pt1", 00:08:25.191 "uuid": "86ea1c78-f5d4-0256-bdf0-4822d699b213", 00:08:25.191 "is_configured": true, 00:08:25.191 "data_offset": 2048, 00:08:25.191 "data_size": 63488 00:08:25.191 }, 00:08:25.191 { 00:08:25.191 "name": "pt2", 00:08:25.191 "uuid": "203c3358-54c7-1258-a531-7655c3362e1c", 00:08:25.191 "is_configured": true, 00:08:25.191 "data_offset": 2048, 00:08:25.191 "data_size": 63488 00:08:25.191 }, 00:08:25.191 { 00:08:25.191 "name": "pt3", 00:08:25.191 "uuid": "f1fc5cb6-aaef-e85f-b7ab-76758165fe1a", 00:08:25.191 "is_configured": true, 00:08:25.191 "data_offset": 2048, 00:08:25.191 "data_size": 63488 00:08:25.191 } 00:08:25.191 ] 00:08:25.191 }' 00:08:25.191 06:00:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:25.191 06:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:25.449 06:00:33 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:25.449 06:00:33 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:08:25.449 [2024-05-13 06:00:33.689601] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.449 06:00:33 -- bdev/bdev_raid.sh@430 -- # '[' 19c4207c-10ee-11ef-ba60-3508ead7bdda '!=' 19c4207c-10ee-11ef-ba60-3508ead7bdda ']' 00:08:25.449 06:00:33 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:08:25.449 06:00:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:25.449 06:00:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:25.449 06:00:33 -- bdev/bdev_raid.sh@511 -- # killprocess 49754 00:08:25.449 06:00:33 -- common/autotest_common.sh@926 -- # '[' -z 49754 ']' 00:08:25.449 06:00:33 -- common/autotest_common.sh@930 -- # kill -0 49754 00:08:25.449 06:00:33 -- common/autotest_common.sh@931 -- # uname 00:08:25.449 06:00:33 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:25.449 06:00:33 -- common/autotest_common.sh@934 -- # ps -c -o command 49754 00:08:25.449 06:00:33 -- common/autotest_common.sh@934 -- # tail -1 00:08:25.450 06:00:33 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:25.450 06:00:33 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:25.450 killing process with pid 49754 00:08:25.450 06:00:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49754' 00:08:25.450 06:00:33 -- common/autotest_common.sh@945 -- # kill 49754 00:08:25.450 [2024-05-13 06:00:33.721823] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.450 [2024-05-13 06:00:33.721852] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.450 [2024-05-13 06:00:33.721866] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.450 
[2024-05-13 06:00:33.721870] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bc79780 name raid_bdev1, state offline 00:08:25.450 06:00:33 -- common/autotest_common.sh@950 -- # wait 49754 00:08:25.450 [2024-05-13 06:00:33.741285] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@513 -- # return 0 00:08:25.707 00:08:25.707 real 0m6.753s 00:08:25.707 user 0m11.446s 00:08:25.707 sys 0m1.371s 00:08:25.707 06:00:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.707 06:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:25.707 ************************************ 00:08:25.707 END TEST raid_superblock_test 00:08:25.707 ************************************ 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:08:25.707 06:00:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:25.707 06:00:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.707 06:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:25.707 ************************************ 00:08:25.707 START TEST raid_state_function_test 00:08:25.707 ************************************ 00:08:25.707 06:00:33 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=49935 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49935' 00:08:25.707 Process raid pid: 49935 00:08:25.707 06:00:33 -- 
bdev/bdev_raid.sh@228 -- # waitforlisten 49935 /var/tmp/spdk-raid.sock 00:08:25.707 06:00:33 -- common/autotest_common.sh@819 -- # '[' -z 49935 ']' 00:08:25.707 06:00:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:25.707 06:00:33 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:25.707 06:00:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:25.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:25.707 06:00:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:25.707 06:00:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:25.707 06:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:25.707 [2024-05-13 06:00:33.951892] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:25.707 [2024-05-13 06:00:33.952138] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:26.273 EAL: TSC is not safe to use in SMP mode 00:08:26.273 EAL: TSC is not invariant 00:08:26.273 [2024-05-13 06:00:34.384036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.273 [2024-05-13 06:00:34.471367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.273 [2024-05-13 06:00:34.471791] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.273 [2024-05-13 06:00:34.471803] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.839 06:00:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:26.839 06:00:34 -- common/autotest_common.sh@852 -- # return 0 00:08:26.839 06:00:34 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:26.839 [2024-05-13 06:00:35.014977] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:26.839 [2024-05-13 06:00:35.015027] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:26.839 [2024-05-13 06:00:35.015032] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:26.839 [2024-05-13 06:00:35.015039] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:26.839 [2024-05-13 06:00:35.015042] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:26.839 [2024-05-13 06:00:35.015048] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:26.839 06:00:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:26.839 06:00:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:26.839 06:00:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:26.839 06:00:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:26.839 06:00:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:26.839 06:00:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:26.839 06:00:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:26.839 06:00:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:26.839 06:00:35 -- 
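Everything in this suite is driven over SPDK's JSON-RPC socket: the test first asks bdev_raid_create for a concat set whose base bdevs do not exist yet, and the verify_raid_bdev_state helper whose locals are being set up here then reads the array back and checks its "state" field. A minimal sketch of that read-back pattern, using only the rpc.py invocation and jq filter visible in this trace:

    # Fetch every raid bdev the service knows about, pick the one under
    # test by name, and extract its state (configuring/online/offline).
    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid") | .state')
    echo "Existed_Raid is: $state"   # expected at this point: configuring

With no base bdevs present the create RPC still succeeds, and the raid sits in "configuring" until all three members arrive, which is exactly what the dump that follows asserts.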
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:26.839 06:00:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:26.839 06:00:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:26.839 06:00:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.103 06:00:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:27.103 "name": "Existed_Raid", 00:08:27.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.103 "strip_size_kb": 64, 00:08:27.103 "state": "configuring", 00:08:27.103 "raid_level": "concat", 00:08:27.103 "superblock": false, 00:08:27.103 "num_base_bdevs": 3, 00:08:27.103 "num_base_bdevs_discovered": 0, 00:08:27.103 "num_base_bdevs_operational": 3, 00:08:27.103 "base_bdevs_list": [ 00:08:27.103 { 00:08:27.103 "name": "BaseBdev1", 00:08:27.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.103 "is_configured": false, 00:08:27.103 "data_offset": 0, 00:08:27.103 "data_size": 0 00:08:27.103 }, 00:08:27.103 { 00:08:27.103 "name": "BaseBdev2", 00:08:27.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.103 "is_configured": false, 00:08:27.103 "data_offset": 0, 00:08:27.103 "data_size": 0 00:08:27.103 }, 00:08:27.103 { 00:08:27.103 "name": "BaseBdev3", 00:08:27.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.103 "is_configured": false, 00:08:27.103 "data_offset": 0, 00:08:27.103 "data_size": 0 00:08:27.103 } 00:08:27.103 ] 00:08:27.103 }' 00:08:27.103 06:00:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:27.103 06:00:35 -- common/autotest_common.sh@10 -- # set +x 00:08:27.365 06:00:35 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:27.365 [2024-05-13 06:00:35.667163] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.365 [2024-05-13 06:00:35.667186] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829682500 name Existed_Raid, state configuring 00:08:27.623 06:00:35 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:27.623 [2024-05-13 06:00:35.827233] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.623 [2024-05-13 06:00:35.827269] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.623 [2024-05-13 06:00:35.827273] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.623 [2024-05-13 06:00:35.827278] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.623 [2024-05-13 06:00:35.827281] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:27.623 [2024-05-13 06:00:35.827286] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:27.623 06:00:35 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:27.881 [2024-05-13 06:00:36.012088] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.881 BaseBdev1 00:08:27.881 06:00:36 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:27.881 06:00:36 -- common/autotest_common.sh@887 -- # local 
bdev_name=BaseBdev1 00:08:27.881 06:00:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:27.881 06:00:36 -- common/autotest_common.sh@889 -- # local i 00:08:27.881 06:00:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:27.881 06:00:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:27.881 06:00:36 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:28.140 06:00:36 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:28.140 [ 00:08:28.140 { 00:08:28.140 "name": "BaseBdev1", 00:08:28.140 "aliases": [ 00:08:28.140 "1dce9396-10ee-11ef-ba60-3508ead7bdda" 00:08:28.140 ], 00:08:28.140 "product_name": "Malloc disk", 00:08:28.140 "block_size": 512, 00:08:28.140 "num_blocks": 65536, 00:08:28.140 "uuid": "1dce9396-10ee-11ef-ba60-3508ead7bdda", 00:08:28.140 "assigned_rate_limits": { 00:08:28.140 "rw_ios_per_sec": 0, 00:08:28.140 "rw_mbytes_per_sec": 0, 00:08:28.140 "r_mbytes_per_sec": 0, 00:08:28.140 "w_mbytes_per_sec": 0 00:08:28.140 }, 00:08:28.140 "claimed": true, 00:08:28.140 "claim_type": "exclusive_write", 00:08:28.140 "zoned": false, 00:08:28.140 "supported_io_types": { 00:08:28.140 "read": true, 00:08:28.140 "write": true, 00:08:28.140 "unmap": true, 00:08:28.140 "write_zeroes": true, 00:08:28.140 "flush": true, 00:08:28.140 "reset": true, 00:08:28.140 "compare": false, 00:08:28.140 "compare_and_write": false, 00:08:28.140 "abort": true, 00:08:28.140 "nvme_admin": false, 00:08:28.140 "nvme_io": false 00:08:28.140 }, 00:08:28.140 "memory_domains": [ 00:08:28.140 { 00:08:28.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.140 "dma_device_type": 2 00:08:28.140 } 00:08:28.140 ], 00:08:28.140 "driver_specific": {} 00:08:28.140 } 00:08:28.140 ] 00:08:28.140 06:00:36 -- common/autotest_common.sh@895 -- # return 0 00:08:28.140 06:00:36 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:28.140 06:00:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:28.140 06:00:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:28.140 06:00:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:28.140 06:00:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:28.140 06:00:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:28.140 06:00:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:28.140 06:00:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:28.140 06:00:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:28.140 06:00:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:28.140 06:00:36 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.140 06:00:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.399 06:00:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:28.399 "name": "Existed_Raid", 00:08:28.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.399 "strip_size_kb": 64, 00:08:28.399 "state": "configuring", 00:08:28.399 "raid_level": "concat", 00:08:28.399 "superblock": false, 00:08:28.399 "num_base_bdevs": 3, 00:08:28.399 "num_base_bdevs_discovered": 1, 00:08:28.399 "num_base_bdevs_operational": 3, 00:08:28.399 "base_bdevs_list": [ 00:08:28.399 { 00:08:28.399 "name": "BaseBdev1", 
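The waitforbdev helper unfolding in this part of the trace is how the test blocks until a freshly created malloc bdev (32 MiB with 512-byte blocks, hence the 65536 num_blocks in this dump) is actually registered. A reduced sketch of the two RPCs it issues, assuming the default 2000 ms timeout seen above; the real helper in autotest_common.sh wraps these with retry bookkeeping:

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Let outstanding examine callbacks finish first...
    $rpc bdev_wait_for_examine
    # ...then look the bdev up by name; -t waits up to 2000 ms for it to
    # appear, and a zero exit status means it is registered.
    $rpc bdev_get_bdevs -b BaseBdev1 -t 2000 > /dev/null && echo "BaseBdev1 ready"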
00:08:28.399 "uuid": "1dce9396-10ee-11ef-ba60-3508ead7bdda", 00:08:28.399 "is_configured": true, 00:08:28.399 "data_offset": 0, 00:08:28.399 "data_size": 65536 00:08:28.399 }, 00:08:28.399 { 00:08:28.399 "name": "BaseBdev2", 00:08:28.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.399 "is_configured": false, 00:08:28.399 "data_offset": 0, 00:08:28.399 "data_size": 0 00:08:28.399 }, 00:08:28.399 { 00:08:28.399 "name": "BaseBdev3", 00:08:28.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.399 "is_configured": false, 00:08:28.399 "data_offset": 0, 00:08:28.399 "data_size": 0 00:08:28.399 } 00:08:28.399 ] 00:08:28.399 }' 00:08:28.400 06:00:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:28.400 06:00:36 -- common/autotest_common.sh@10 -- # set +x 00:08:28.658 06:00:36 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:28.917 [2024-05-13 06:00:36.979605] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.917 [2024-05-13 06:00:36.979628] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829682500 name Existed_Raid, state configuring 00:08:28.917 06:00:36 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:08:28.917 06:00:36 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:28.917 [2024-05-13 06:00:37.179693] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.917 [2024-05-13 06:00:37.180343] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:28.917 [2024-05-13 06:00:37.180391] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.917 [2024-05-13 06:00:37.180395] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:28.917 [2024-05-13 06:00:37.180403] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:28.917 06:00:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.175 06:00:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:29.175 "name": "Existed_Raid", 00:08:29.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.175 "strip_size_kb": 64, 
00:08:29.175 "state": "configuring", 00:08:29.175 "raid_level": "concat", 00:08:29.175 "superblock": false, 00:08:29.175 "num_base_bdevs": 3, 00:08:29.175 "num_base_bdevs_discovered": 1, 00:08:29.175 "num_base_bdevs_operational": 3, 00:08:29.175 "base_bdevs_list": [ 00:08:29.175 { 00:08:29.175 "name": "BaseBdev1", 00:08:29.175 "uuid": "1dce9396-10ee-11ef-ba60-3508ead7bdda", 00:08:29.175 "is_configured": true, 00:08:29.175 "data_offset": 0, 00:08:29.175 "data_size": 65536 00:08:29.175 }, 00:08:29.175 { 00:08:29.175 "name": "BaseBdev2", 00:08:29.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.175 "is_configured": false, 00:08:29.175 "data_offset": 0, 00:08:29.175 "data_size": 0 00:08:29.175 }, 00:08:29.175 { 00:08:29.175 "name": "BaseBdev3", 00:08:29.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.175 "is_configured": false, 00:08:29.175 "data_offset": 0, 00:08:29.175 "data_size": 0 00:08:29.175 } 00:08:29.175 ] 00:08:29.175 }' 00:08:29.175 06:00:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:29.175 06:00:37 -- common/autotest_common.sh@10 -- # set +x 00:08:29.434 06:00:37 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:29.693 [2024-05-13 06:00:37.815976] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.693 BaseBdev2 00:08:29.693 06:00:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:29.693 06:00:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:08:29.693 06:00:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:29.693 06:00:37 -- common/autotest_common.sh@889 -- # local i 00:08:29.693 06:00:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:29.693 06:00:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:29.693 06:00:37 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:29.952 06:00:38 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:29.952 [ 00:08:29.952 { 00:08:29.952 "name": "BaseBdev2", 00:08:29.952 "aliases": [ 00:08:29.952 "1ee1eef2-10ee-11ef-ba60-3508ead7bdda" 00:08:29.952 ], 00:08:29.952 "product_name": "Malloc disk", 00:08:29.952 "block_size": 512, 00:08:29.952 "num_blocks": 65536, 00:08:29.952 "uuid": "1ee1eef2-10ee-11ef-ba60-3508ead7bdda", 00:08:29.952 "assigned_rate_limits": { 00:08:29.952 "rw_ios_per_sec": 0, 00:08:29.952 "rw_mbytes_per_sec": 0, 00:08:29.952 "r_mbytes_per_sec": 0, 00:08:29.952 "w_mbytes_per_sec": 0 00:08:29.952 }, 00:08:29.952 "claimed": true, 00:08:29.952 "claim_type": "exclusive_write", 00:08:29.952 "zoned": false, 00:08:29.952 "supported_io_types": { 00:08:29.952 "read": true, 00:08:29.952 "write": true, 00:08:29.952 "unmap": true, 00:08:29.952 "write_zeroes": true, 00:08:29.952 "flush": true, 00:08:29.952 "reset": true, 00:08:29.952 "compare": false, 00:08:29.952 "compare_and_write": false, 00:08:29.952 "abort": true, 00:08:29.952 "nvme_admin": false, 00:08:29.952 "nvme_io": false 00:08:29.952 }, 00:08:29.952 "memory_domains": [ 00:08:29.952 { 00:08:29.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.952 "dma_device_type": 2 00:08:29.952 } 00:08:29.952 ], 00:08:29.952 "driver_specific": {} 00:08:29.952 } 00:08:29.952 ] 00:08:29.952 06:00:38 -- common/autotest_common.sh@895 -- # return 0 00:08:29.952 06:00:38 -- 
bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:29.952 06:00:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:29.952 06:00:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:29.952 06:00:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:29.952 06:00:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:29.952 06:00:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:29.952 06:00:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:29.952 06:00:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:29.952 06:00:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:29.952 06:00:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:29.952 06:00:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:29.952 06:00:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:29.952 06:00:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.952 06:00:38 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:30.210 06:00:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:30.210 "name": "Existed_Raid", 00:08:30.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.210 "strip_size_kb": 64, 00:08:30.210 "state": "configuring", 00:08:30.211 "raid_level": "concat", 00:08:30.211 "superblock": false, 00:08:30.211 "num_base_bdevs": 3, 00:08:30.211 "num_base_bdevs_discovered": 2, 00:08:30.211 "num_base_bdevs_operational": 3, 00:08:30.211 "base_bdevs_list": [ 00:08:30.211 { 00:08:30.211 "name": "BaseBdev1", 00:08:30.211 "uuid": "1dce9396-10ee-11ef-ba60-3508ead7bdda", 00:08:30.211 "is_configured": true, 00:08:30.211 "data_offset": 0, 00:08:30.211 "data_size": 65536 00:08:30.211 }, 00:08:30.211 { 00:08:30.211 "name": "BaseBdev2", 00:08:30.211 "uuid": "1ee1eef2-10ee-11ef-ba60-3508ead7bdda", 00:08:30.211 "is_configured": true, 00:08:30.211 "data_offset": 0, 00:08:30.211 "data_size": 65536 00:08:30.211 }, 00:08:30.211 { 00:08:30.211 "name": "BaseBdev3", 00:08:30.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:30.211 "is_configured": false, 00:08:30.211 "data_offset": 0, 00:08:30.211 "data_size": 0 00:08:30.211 } 00:08:30.211 ] 00:08:30.211 }' 00:08:30.211 06:00:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:30.211 06:00:38 -- common/autotest_common.sh@10 -- # set +x 00:08:30.469 06:00:38 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:30.728 [2024-05-13 06:00:38.804296] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:30.728 [2024-05-13 06:00:38.804316] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829682a00 00:08:30.728 [2024-05-13 06:00:38.804320] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:30.728 [2024-05-13 06:00:38.804335] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8296e5ec0 00:08:30.728 [2024-05-13 06:00:38.804411] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829682a00 00:08:30.728 [2024-05-13 06:00:38.804414] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x829682a00 00:08:30.728 [2024-05-13 06:00:38.804435] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.728 
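The debug lines just above record the interesting transition: the moment BaseBdev3 is claimed, the raid flips from "configuring" to "online", and configure_cont reports blockcnt 196608, that is, three 65536-block base bdevs concatenated with nothing reserved (no superblock in this run). A sketch of that final step, with the names and sizes taken from the trace:

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Creating the last missing member lets the pending raid assemble.
    $rpc bdev_malloc_create 32 512 -b BaseBdev3
    $rpc bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid") | .state'   # prints: online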
BaseBdev3 00:08:30.728 06:00:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:30.728 06:00:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:30.728 06:00:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:30.728 06:00:38 -- common/autotest_common.sh@889 -- # local i 00:08:30.728 06:00:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:30.728 06:00:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:30.728 06:00:38 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:30.728 06:00:38 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:30.986 [ 00:08:30.986 { 00:08:30.986 "name": "BaseBdev3", 00:08:30.986 "aliases": [ 00:08:30.986 "1f78be10-10ee-11ef-ba60-3508ead7bdda" 00:08:30.986 ], 00:08:30.986 "product_name": "Malloc disk", 00:08:30.986 "block_size": 512, 00:08:30.986 "num_blocks": 65536, 00:08:30.986 "uuid": "1f78be10-10ee-11ef-ba60-3508ead7bdda", 00:08:30.986 "assigned_rate_limits": { 00:08:30.986 "rw_ios_per_sec": 0, 00:08:30.986 "rw_mbytes_per_sec": 0, 00:08:30.986 "r_mbytes_per_sec": 0, 00:08:30.986 "w_mbytes_per_sec": 0 00:08:30.986 }, 00:08:30.986 "claimed": true, 00:08:30.986 "claim_type": "exclusive_write", 00:08:30.986 "zoned": false, 00:08:30.986 "supported_io_types": { 00:08:30.986 "read": true, 00:08:30.986 "write": true, 00:08:30.986 "unmap": true, 00:08:30.986 "write_zeroes": true, 00:08:30.986 "flush": true, 00:08:30.986 "reset": true, 00:08:30.986 "compare": false, 00:08:30.986 "compare_and_write": false, 00:08:30.986 "abort": true, 00:08:30.986 "nvme_admin": false, 00:08:30.986 "nvme_io": false 00:08:30.986 }, 00:08:30.986 "memory_domains": [ 00:08:30.986 { 00:08:30.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.986 "dma_device_type": 2 00:08:30.986 } 00:08:30.986 ], 00:08:30.986 "driver_specific": {} 00:08:30.986 } 00:08:30.986 ] 00:08:30.986 06:00:39 -- common/autotest_common.sh@895 -- # return 0 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:30.986 06:00:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.245 06:00:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:31.245 "name": "Existed_Raid", 00:08:31.245 "uuid": "1f78c1e8-10ee-11ef-ba60-3508ead7bdda", 00:08:31.245 "strip_size_kb": 64, 00:08:31.245 "state": "online", 00:08:31.245 
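What the suite expects after a member is removed hinges on the has_redundancy helper exercised in the lines that follow: concat, like raid0, gets return 1, so losing any base bdev is expected to take the whole array to "offline" rather than to a degraded-but-serving state. A reconstruction of the helper consistent with the @195/@197 lines in this trace; the raid1 arm is an assumption based on the levels this suite loops over:

    has_redundancy() {
        case $1 in
            raid1) return 0 ;;   # assumed: a mirrored level survives a lost member
            *) return 1 ;;       # raid0/concat: no redundancy, expect offline
        esac
    }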
"raid_level": "concat", 00:08:31.245 "superblock": false, 00:08:31.245 "num_base_bdevs": 3, 00:08:31.245 "num_base_bdevs_discovered": 3, 00:08:31.245 "num_base_bdevs_operational": 3, 00:08:31.245 "base_bdevs_list": [ 00:08:31.245 { 00:08:31.245 "name": "BaseBdev1", 00:08:31.245 "uuid": "1dce9396-10ee-11ef-ba60-3508ead7bdda", 00:08:31.245 "is_configured": true, 00:08:31.245 "data_offset": 0, 00:08:31.245 "data_size": 65536 00:08:31.245 }, 00:08:31.245 { 00:08:31.245 "name": "BaseBdev2", 00:08:31.245 "uuid": "1ee1eef2-10ee-11ef-ba60-3508ead7bdda", 00:08:31.245 "is_configured": true, 00:08:31.245 "data_offset": 0, 00:08:31.245 "data_size": 65536 00:08:31.245 }, 00:08:31.245 { 00:08:31.245 "name": "BaseBdev3", 00:08:31.245 "uuid": "1f78be10-10ee-11ef-ba60-3508ead7bdda", 00:08:31.245 "is_configured": true, 00:08:31.245 "data_offset": 0, 00:08:31.245 "data_size": 65536 00:08:31.245 } 00:08:31.245 ] 00:08:31.245 }' 00:08:31.245 06:00:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:31.245 06:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:31.504 [2024-05-13 06:00:39.756500] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:31.504 [2024-05-13 06:00:39.756519] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.504 [2024-05-13 06:00:39.756529] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:31.504 06:00:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.762 06:00:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:31.762 "name": "Existed_Raid", 00:08:31.762 "uuid": "1f78c1e8-10ee-11ef-ba60-3508ead7bdda", 00:08:31.762 "strip_size_kb": 64, 00:08:31.762 "state": "offline", 00:08:31.762 "raid_level": "concat", 00:08:31.762 "superblock": false, 00:08:31.762 "num_base_bdevs": 3, 00:08:31.762 "num_base_bdevs_discovered": 2, 00:08:31.762 "num_base_bdevs_operational": 2, 00:08:31.762 "base_bdevs_list": [ 00:08:31.762 { 00:08:31.762 "name": null, 00:08:31.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.762 
"is_configured": false, 00:08:31.762 "data_offset": 0, 00:08:31.762 "data_size": 65536 00:08:31.762 }, 00:08:31.762 { 00:08:31.762 "name": "BaseBdev2", 00:08:31.762 "uuid": "1ee1eef2-10ee-11ef-ba60-3508ead7bdda", 00:08:31.762 "is_configured": true, 00:08:31.762 "data_offset": 0, 00:08:31.762 "data_size": 65536 00:08:31.762 }, 00:08:31.762 { 00:08:31.762 "name": "BaseBdev3", 00:08:31.762 "uuid": "1f78be10-10ee-11ef-ba60-3508ead7bdda", 00:08:31.762 "is_configured": true, 00:08:31.762 "data_offset": 0, 00:08:31.762 "data_size": 65536 00:08:31.762 } 00:08:31.762 ] 00:08:31.762 }' 00:08:31.762 06:00:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:31.762 06:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:32.020 06:00:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:32.020 06:00:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:32.020 06:00:40 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.020 06:00:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:32.278 06:00:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:32.278 06:00:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:32.278 06:00:40 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:32.278 [2024-05-13 06:00:40.541389] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:32.278 06:00:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:32.278 06:00:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:32.278 06:00:40 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.278 06:00:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:32.537 06:00:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:32.537 06:00:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:32.537 06:00:40 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:32.797 [2024-05-13 06:00:40.846253] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:32.797 [2024-05-13 06:00:40.846280] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829682a00 name Existed_Raid, state offline 00:08:32.797 06:00:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:32.797 06:00:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:32.797 06:00:40 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.797 06:00:40 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:32.797 06:00:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:32.797 06:00:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:32.797 06:00:41 -- bdev/bdev_raid.sh@287 -- # killprocess 49935 00:08:32.797 06:00:41 -- common/autotest_common.sh@926 -- # '[' -z 49935 ']' 00:08:32.797 06:00:41 -- common/autotest_common.sh@930 -- # kill -0 49935 00:08:32.797 06:00:41 -- common/autotest_common.sh@931 -- # uname 00:08:32.797 06:00:41 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:32.797 06:00:41 -- common/autotest_common.sh@934 -- # ps -c -o command 49935 00:08:32.797 06:00:41 -- common/autotest_common.sh@934 -- # tail -1 00:08:32.797 06:00:41 -- common/autotest_common.sh@934 -- # 
process_name=bdev_svc 00:08:32.797 06:00:41 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:32.797 killing process with pid 49935 00:08:32.797 06:00:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49935' 00:08:32.797 06:00:41 -- common/autotest_common.sh@945 -- # kill 49935 00:08:32.797 [2024-05-13 06:00:41.051921] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.797 [2024-05-13 06:00:41.051953] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.797 06:00:41 -- common/autotest_common.sh@950 -- # wait 49935 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:33.056 00:08:33.056 real 0m7.253s 00:08:33.056 user 0m12.494s 00:08:33.056 sys 0m1.331s 00:08:33.056 06:00:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.056 06:00:41 -- common/autotest_common.sh@10 -- # set +x 00:08:33.056 ************************************ 00:08:33.056 END TEST raid_state_function_test 00:08:33.056 ************************************ 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:33.056 06:00:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:33.056 06:00:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.056 06:00:41 -- common/autotest_common.sh@10 -- # set +x 00:08:33.056 ************************************ 00:08:33.056 START TEST raid_state_function_test_sb 00:08:33.056 ************************************ 00:08:33.056 06:00:41 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@220 -- # 
superblock_create_arg=-s 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=50168 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50168' 00:08:33.056 Process raid pid: 50168 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:33.056 06:00:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50168 /var/tmp/spdk-raid.sock 00:08:33.056 06:00:41 -- common/autotest_common.sh@819 -- # '[' -z 50168 ']' 00:08:33.056 06:00:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:33.056 06:00:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:33.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:33.056 06:00:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:33.056 06:00:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:33.056 06:00:41 -- common/autotest_common.sh@10 -- # set +x 00:08:33.056 [2024-05-13 06:00:41.264342] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:33.056 [2024-05-13 06:00:41.264685] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:33.625 EAL: TSC is not safe to use in SMP mode 00:08:33.625 EAL: TSC is not invariant 00:08:33.625 [2024-05-13 06:00:41.684214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.625 [2024-05-13 06:00:41.770509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.625 [2024-05-13 06:00:41.770918] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.625 [2024-05-13 06:00:41.770929] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.884 06:00:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:33.884 06:00:42 -- common/autotest_common.sh@852 -- # return 0 00:08:33.884 06:00:42 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:34.142 [2024-05-13 06:00:42.302078] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.142 [2024-05-13 06:00:42.302126] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.142 [2024-05-13 06:00:42.302130] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.142 [2024-05-13 06:00:42.302136] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.142 [2024-05-13 06:00:42.302139] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.142 [2024-05-13 06:00:42.302144] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.142 06:00:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.142 06:00:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:34.142 06:00:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:34.142 06:00:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:34.142 06:00:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:34.142 06:00:42 -- 
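The _sb variant differs from the previous run in a single flag: superblock_create_arg becomes -s, so bdev_raid_create writes an on-disk superblock into each member. That is why the dumps in this run report data_offset 2048 and data_size 63488 where the non-superblock run showed 0 and 65536, and why the assembled array later registers blockcnt 190464 (3 x 63488). The two create calls, shown side by side for contrast (they would not be run back to back against the same bdevs):

    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Earlier run: members used in full, data_offset 0, data_size 65536.
    $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # This run: -s reserves 2048 of each member's 65536 blocks for the
    # superblock, leaving 63488 data blocks per member.
    $rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid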
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:34.142 06:00:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:34.142 06:00:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:34.142 06:00:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:34.142 06:00:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:34.142 06:00:42 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:34.142 06:00:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.401 06:00:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:34.401 "name": "Existed_Raid", 00:08:34.401 "uuid": "218e78bb-10ee-11ef-ba60-3508ead7bdda", 00:08:34.401 "strip_size_kb": 64, 00:08:34.401 "state": "configuring", 00:08:34.401 "raid_level": "concat", 00:08:34.401 "superblock": true, 00:08:34.401 "num_base_bdevs": 3, 00:08:34.401 "num_base_bdevs_discovered": 0, 00:08:34.401 "num_base_bdevs_operational": 3, 00:08:34.401 "base_bdevs_list": [ 00:08:34.401 { 00:08:34.401 "name": "BaseBdev1", 00:08:34.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.401 "is_configured": false, 00:08:34.401 "data_offset": 0, 00:08:34.401 "data_size": 0 00:08:34.401 }, 00:08:34.401 { 00:08:34.401 "name": "BaseBdev2", 00:08:34.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.401 "is_configured": false, 00:08:34.401 "data_offset": 0, 00:08:34.401 "data_size": 0 00:08:34.401 }, 00:08:34.401 { 00:08:34.401 "name": "BaseBdev3", 00:08:34.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.401 "is_configured": false, 00:08:34.401 "data_offset": 0, 00:08:34.401 "data_size": 0 00:08:34.401 } 00:08:34.401 ] 00:08:34.401 }' 00:08:34.401 06:00:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:34.401 06:00:42 -- common/autotest_common.sh@10 -- # set +x 00:08:34.659 06:00:42 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:34.659 [2024-05-13 06:00:42.898209] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.659 [2024-05-13 06:00:42.898227] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b54d500 name Existed_Raid, state configuring 00:08:34.659 06:00:42 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:34.916 [2024-05-13 06:00:43.046256] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.916 [2024-05-13 06:00:43.046288] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.916 [2024-05-13 06:00:43.046291] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.916 [2024-05-13 06:00:43.046297] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.916 [2024-05-13 06:00:43.046299] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.916 [2024-05-13 06:00:43.046305] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.916 06:00:43 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:34.916 [2024-05-13 06:00:43.219050] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.916 BaseBdev1 00:08:35.175 06:00:43 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:35.175 06:00:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:35.175 06:00:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:35.175 06:00:43 -- common/autotest_common.sh@889 -- # local i 00:08:35.175 06:00:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:35.175 06:00:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:35.175 06:00:43 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:35.175 06:00:43 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:35.432 [ 00:08:35.432 { 00:08:35.432 "name": "BaseBdev1", 00:08:35.432 "aliases": [ 00:08:35.432 "221a46f6-10ee-11ef-ba60-3508ead7bdda" 00:08:35.432 ], 00:08:35.432 "product_name": "Malloc disk", 00:08:35.432 "block_size": 512, 00:08:35.432 "num_blocks": 65536, 00:08:35.432 "uuid": "221a46f6-10ee-11ef-ba60-3508ead7bdda", 00:08:35.432 "assigned_rate_limits": { 00:08:35.432 "rw_ios_per_sec": 0, 00:08:35.432 "rw_mbytes_per_sec": 0, 00:08:35.432 "r_mbytes_per_sec": 0, 00:08:35.432 "w_mbytes_per_sec": 0 00:08:35.432 }, 00:08:35.432 "claimed": true, 00:08:35.432 "claim_type": "exclusive_write", 00:08:35.432 "zoned": false, 00:08:35.432 "supported_io_types": { 00:08:35.432 "read": true, 00:08:35.432 "write": true, 00:08:35.432 "unmap": true, 00:08:35.432 "write_zeroes": true, 00:08:35.433 "flush": true, 00:08:35.433 "reset": true, 00:08:35.433 "compare": false, 00:08:35.433 "compare_and_write": false, 00:08:35.433 "abort": true, 00:08:35.433 "nvme_admin": false, 00:08:35.433 "nvme_io": false 00:08:35.433 }, 00:08:35.433 "memory_domains": [ 00:08:35.433 { 00:08:35.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.433 "dma_device_type": 2 00:08:35.433 } 00:08:35.433 ], 00:08:35.433 "driver_specific": {} 00:08:35.433 } 00:08:35.433 ] 00:08:35.433 06:00:43 -- common/autotest_common.sh@895 -- # return 0 00:08:35.433 06:00:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:35.433 06:00:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:35.433 06:00:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:35.433 06:00:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:35.433 06:00:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:35.433 06:00:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:35.433 06:00:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:35.433 06:00:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:35.433 06:00:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:35.433 06:00:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:35.433 06:00:43 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:35.433 06:00:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.690 06:00:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:35.690 "name": "Existed_Raid", 00:08:35.690 "uuid": "2200065a-10ee-11ef-ba60-3508ead7bdda", 00:08:35.690 "strip_size_kb": 64, 00:08:35.690 "state": "configuring", 00:08:35.690 "raid_level": "concat", 
00:08:35.690 "superblock": true, 00:08:35.690 "num_base_bdevs": 3, 00:08:35.690 "num_base_bdevs_discovered": 1, 00:08:35.690 "num_base_bdevs_operational": 3, 00:08:35.690 "base_bdevs_list": [ 00:08:35.690 { 00:08:35.690 "name": "BaseBdev1", 00:08:35.690 "uuid": "221a46f6-10ee-11ef-ba60-3508ead7bdda", 00:08:35.690 "is_configured": true, 00:08:35.690 "data_offset": 2048, 00:08:35.690 "data_size": 63488 00:08:35.690 }, 00:08:35.690 { 00:08:35.690 "name": "BaseBdev2", 00:08:35.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.690 "is_configured": false, 00:08:35.690 "data_offset": 0, 00:08:35.690 "data_size": 0 00:08:35.690 }, 00:08:35.690 { 00:08:35.690 "name": "BaseBdev3", 00:08:35.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.690 "is_configured": false, 00:08:35.690 "data_offset": 0, 00:08:35.690 "data_size": 0 00:08:35.690 } 00:08:35.690 ] 00:08:35.690 }' 00:08:35.690 06:00:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:35.690 06:00:43 -- common/autotest_common.sh@10 -- # set +x 00:08:35.948 06:00:43 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:35.948 [2024-05-13 06:00:44.150541] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.948 [2024-05-13 06:00:44.150561] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b54d500 name Existed_Raid, state configuring 00:08:35.948 06:00:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:08:35.948 06:00:44 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:36.206 06:00:44 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:36.206 BaseBdev1 00:08:36.206 06:00:44 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:08:36.206 06:00:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:36.206 06:00:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:36.206 06:00:44 -- common/autotest_common.sh@889 -- # local i 00:08:36.206 06:00:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:36.206 06:00:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:36.206 06:00:44 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:36.466 06:00:44 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:36.725 [ 00:08:36.725 { 00:08:36.725 "name": "BaseBdev1", 00:08:36.725 "aliases": [ 00:08:36.725 "22dd1da7-10ee-11ef-ba60-3508ead7bdda" 00:08:36.725 ], 00:08:36.725 "product_name": "Malloc disk", 00:08:36.725 "block_size": 512, 00:08:36.725 "num_blocks": 65536, 00:08:36.725 "uuid": "22dd1da7-10ee-11ef-ba60-3508ead7bdda", 00:08:36.725 "assigned_rate_limits": { 00:08:36.725 "rw_ios_per_sec": 0, 00:08:36.725 "rw_mbytes_per_sec": 0, 00:08:36.725 "r_mbytes_per_sec": 0, 00:08:36.725 "w_mbytes_per_sec": 0 00:08:36.725 }, 00:08:36.725 "claimed": false, 00:08:36.725 "zoned": false, 00:08:36.725 "supported_io_types": { 00:08:36.725 "read": true, 00:08:36.725 "write": true, 00:08:36.725 "unmap": true, 00:08:36.725 "write_zeroes": true, 00:08:36.725 "flush": true, 00:08:36.725 "reset": true, 00:08:36.725 "compare": false, 00:08:36.725 "compare_and_write": false, 00:08:36.725 "abort": 
true, 00:08:36.725 "nvme_admin": false, 00:08:36.725 "nvme_io": false 00:08:36.725 }, 00:08:36.725 "memory_domains": [ 00:08:36.725 { 00:08:36.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.725 "dma_device_type": 2 00:08:36.725 } 00:08:36.725 ], 00:08:36.725 "driver_specific": {} 00:08:36.725 } 00:08:36.725 ] 00:08:36.725 06:00:44 -- common/autotest_common.sh@895 -- # return 0 00:08:36.725 06:00:44 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:36.725 [2024-05-13 06:00:45.003342] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.725 [2024-05-13 06:00:45.003745] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.725 [2024-05-13 06:00:45.003787] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.725 [2024-05-13 06:00:45.003795] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:36.725 [2024-05-13 06:00:45.003801] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:36.725 06:00:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.988 06:00:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:36.988 "name": "Existed_Raid", 00:08:36.988 "uuid": "232aa6e8-10ee-11ef-ba60-3508ead7bdda", 00:08:36.988 "strip_size_kb": 64, 00:08:36.988 "state": "configuring", 00:08:36.988 "raid_level": "concat", 00:08:36.988 "superblock": true, 00:08:36.988 "num_base_bdevs": 3, 00:08:36.988 "num_base_bdevs_discovered": 1, 00:08:36.988 "num_base_bdevs_operational": 3, 00:08:36.988 "base_bdevs_list": [ 00:08:36.988 { 00:08:36.988 "name": "BaseBdev1", 00:08:36.988 "uuid": "22dd1da7-10ee-11ef-ba60-3508ead7bdda", 00:08:36.988 "is_configured": true, 00:08:36.988 "data_offset": 2048, 00:08:36.988 "data_size": 63488 00:08:36.988 }, 00:08:36.988 { 00:08:36.988 "name": "BaseBdev2", 00:08:36.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.988 "is_configured": false, 00:08:36.988 "data_offset": 0, 00:08:36.988 "data_size": 0 00:08:36.988 }, 00:08:36.988 { 00:08:36.988 "name": "BaseBdev3", 00:08:36.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.988 "is_configured": false, 00:08:36.988 "data_offset": 0, 
00:08:36.988 "data_size": 0 00:08:36.988 } 00:08:36.988 ] 00:08:36.988 }' 00:08:36.988 06:00:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:36.988 06:00:45 -- common/autotest_common.sh@10 -- # set +x 00:08:37.246 06:00:45 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:37.504 [2024-05-13 06:00:45.595572] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.504 BaseBdev2 00:08:37.504 06:00:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:37.504 06:00:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:08:37.504 06:00:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:37.504 06:00:45 -- common/autotest_common.sh@889 -- # local i 00:08:37.504 06:00:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:37.504 06:00:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:37.504 06:00:45 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:37.504 06:00:45 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:37.764 [ 00:08:37.764 { 00:08:37.764 "name": "BaseBdev2", 00:08:37.764 "aliases": [ 00:08:37.764 "238501b7-10ee-11ef-ba60-3508ead7bdda" 00:08:37.764 ], 00:08:37.764 "product_name": "Malloc disk", 00:08:37.764 "block_size": 512, 00:08:37.764 "num_blocks": 65536, 00:08:37.764 "uuid": "238501b7-10ee-11ef-ba60-3508ead7bdda", 00:08:37.764 "assigned_rate_limits": { 00:08:37.764 "rw_ios_per_sec": 0, 00:08:37.764 "rw_mbytes_per_sec": 0, 00:08:37.764 "r_mbytes_per_sec": 0, 00:08:37.764 "w_mbytes_per_sec": 0 00:08:37.764 }, 00:08:37.764 "claimed": true, 00:08:37.764 "claim_type": "exclusive_write", 00:08:37.764 "zoned": false, 00:08:37.764 "supported_io_types": { 00:08:37.764 "read": true, 00:08:37.764 "write": true, 00:08:37.764 "unmap": true, 00:08:37.764 "write_zeroes": true, 00:08:37.764 "flush": true, 00:08:37.764 "reset": true, 00:08:37.764 "compare": false, 00:08:37.764 "compare_and_write": false, 00:08:37.764 "abort": true, 00:08:37.764 "nvme_admin": false, 00:08:37.764 "nvme_io": false 00:08:37.764 }, 00:08:37.764 "memory_domains": [ 00:08:37.764 { 00:08:37.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.764 "dma_device_type": 2 00:08:37.764 } 00:08:37.764 ], 00:08:37.764 "driver_specific": {} 00:08:37.764 } 00:08:37.764 ] 00:08:37.764 06:00:45 -- common/autotest_common.sh@895 -- # return 0 00:08:37.764 06:00:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:37.764 06:00:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:37.764 06:00:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:37.764 06:00:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:37.764 06:00:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:37.764 06:00:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:37.764 06:00:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:37.764 06:00:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:37.764 06:00:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:37.764 06:00:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:37.764 06:00:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:37.764 06:00:45 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:08:37.764 06:00:45 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:37.764 06:00:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.024 06:00:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:38.024 "name": "Existed_Raid", 00:08:38.024 "uuid": "232aa6e8-10ee-11ef-ba60-3508ead7bdda", 00:08:38.024 "strip_size_kb": 64, 00:08:38.024 "state": "configuring", 00:08:38.024 "raid_level": "concat", 00:08:38.024 "superblock": true, 00:08:38.024 "num_base_bdevs": 3, 00:08:38.024 "num_base_bdevs_discovered": 2, 00:08:38.024 "num_base_bdevs_operational": 3, 00:08:38.024 "base_bdevs_list": [ 00:08:38.024 { 00:08:38.024 "name": "BaseBdev1", 00:08:38.024 "uuid": "22dd1da7-10ee-11ef-ba60-3508ead7bdda", 00:08:38.024 "is_configured": true, 00:08:38.024 "data_offset": 2048, 00:08:38.024 "data_size": 63488 00:08:38.024 }, 00:08:38.024 { 00:08:38.024 "name": "BaseBdev2", 00:08:38.024 "uuid": "238501b7-10ee-11ef-ba60-3508ead7bdda", 00:08:38.024 "is_configured": true, 00:08:38.024 "data_offset": 2048, 00:08:38.024 "data_size": 63488 00:08:38.024 }, 00:08:38.024 { 00:08:38.024 "name": "BaseBdev3", 00:08:38.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.024 "is_configured": false, 00:08:38.024 "data_offset": 0, 00:08:38.024 "data_size": 0 00:08:38.024 } 00:08:38.024 ] 00:08:38.024 }' 00:08:38.024 06:00:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:38.024 06:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:38.282 06:00:46 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:38.282 [2024-05-13 06:00:46.515785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.282 [2024-05-13 06:00:46.515837] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b54da00 00:08:38.282 [2024-05-13 06:00:46.515841] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:38.282 [2024-05-13 06:00:46.515856] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b5b0ec0 00:08:38.282 [2024-05-13 06:00:46.515889] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b54da00 00:08:38.282 [2024-05-13 06:00:46.515892] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b54da00 00:08:38.282 [2024-05-13 06:00:46.515906] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.282 BaseBdev3 00:08:38.282 06:00:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:38.282 06:00:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:38.282 06:00:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:38.282 06:00:46 -- common/autotest_common.sh@889 -- # local i 00:08:38.282 06:00:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:38.282 06:00:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:38.282 06:00:46 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:38.540 06:00:46 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:38.540 [ 00:08:38.540 { 00:08:38.540 "name": "BaseBdev3", 00:08:38.540 "aliases": [ 
00:08:38.540 "24116c58-10ee-11ef-ba60-3508ead7bdda" 00:08:38.540 ], 00:08:38.540 "product_name": "Malloc disk", 00:08:38.540 "block_size": 512, 00:08:38.540 "num_blocks": 65536, 00:08:38.540 "uuid": "24116c58-10ee-11ef-ba60-3508ead7bdda", 00:08:38.540 "assigned_rate_limits": { 00:08:38.540 "rw_ios_per_sec": 0, 00:08:38.540 "rw_mbytes_per_sec": 0, 00:08:38.540 "r_mbytes_per_sec": 0, 00:08:38.540 "w_mbytes_per_sec": 0 00:08:38.540 }, 00:08:38.540 "claimed": true, 00:08:38.540 "claim_type": "exclusive_write", 00:08:38.540 "zoned": false, 00:08:38.540 "supported_io_types": { 00:08:38.540 "read": true, 00:08:38.540 "write": true, 00:08:38.540 "unmap": true, 00:08:38.540 "write_zeroes": true, 00:08:38.540 "flush": true, 00:08:38.540 "reset": true, 00:08:38.540 "compare": false, 00:08:38.540 "compare_and_write": false, 00:08:38.540 "abort": true, 00:08:38.540 "nvme_admin": false, 00:08:38.540 "nvme_io": false 00:08:38.540 }, 00:08:38.540 "memory_domains": [ 00:08:38.540 { 00:08:38.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.540 "dma_device_type": 2 00:08:38.540 } 00:08:38.540 ], 00:08:38.540 "driver_specific": {} 00:08:38.540 } 00:08:38.540 ] 00:08:38.540 06:00:46 -- common/autotest_common.sh@895 -- # return 0 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.540 06:00:46 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:38.798 06:00:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:38.798 "name": "Existed_Raid", 00:08:38.798 "uuid": "232aa6e8-10ee-11ef-ba60-3508ead7bdda", 00:08:38.798 "strip_size_kb": 64, 00:08:38.798 "state": "online", 00:08:38.798 "raid_level": "concat", 00:08:38.798 "superblock": true, 00:08:38.798 "num_base_bdevs": 3, 00:08:38.798 "num_base_bdevs_discovered": 3, 00:08:38.798 "num_base_bdevs_operational": 3, 00:08:38.798 "base_bdevs_list": [ 00:08:38.798 { 00:08:38.798 "name": "BaseBdev1", 00:08:38.798 "uuid": "22dd1da7-10ee-11ef-ba60-3508ead7bdda", 00:08:38.798 "is_configured": true, 00:08:38.798 "data_offset": 2048, 00:08:38.798 "data_size": 63488 00:08:38.798 }, 00:08:38.798 { 00:08:38.798 "name": "BaseBdev2", 00:08:38.798 "uuid": "238501b7-10ee-11ef-ba60-3508ead7bdda", 00:08:38.798 "is_configured": true, 00:08:38.798 "data_offset": 2048, 00:08:38.798 "data_size": 63488 00:08:38.798 }, 00:08:38.798 { 00:08:38.798 "name": "BaseBdev3", 00:08:38.798 "uuid": "24116c58-10ee-11ef-ba60-3508ead7bdda", 00:08:38.798 "is_configured": true, 00:08:38.798 "data_offset": 2048, 00:08:38.798 "data_size": 63488 
00:08:38.799 } 00:08:38.799 ] 00:08:38.799 }' 00:08:38.799 06:00:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:38.799 06:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:39.056 06:00:47 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:39.314 [2024-05-13 06:00:47.415934] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:39.314 [2024-05-13 06:00:47.415952] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.314 [2024-05-13 06:00:47.415961] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:39.314 "name": "Existed_Raid", 00:08:39.314 "uuid": "232aa6e8-10ee-11ef-ba60-3508ead7bdda", 00:08:39.314 "strip_size_kb": 64, 00:08:39.314 "state": "offline", 00:08:39.314 "raid_level": "concat", 00:08:39.314 "superblock": true, 00:08:39.314 "num_base_bdevs": 3, 00:08:39.314 "num_base_bdevs_discovered": 2, 00:08:39.314 "num_base_bdevs_operational": 2, 00:08:39.314 "base_bdevs_list": [ 00:08:39.314 { 00:08:39.314 "name": null, 00:08:39.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.314 "is_configured": false, 00:08:39.314 "data_offset": 2048, 00:08:39.314 "data_size": 63488 00:08:39.314 }, 00:08:39.314 { 00:08:39.314 "name": "BaseBdev2", 00:08:39.314 "uuid": "238501b7-10ee-11ef-ba60-3508ead7bdda", 00:08:39.314 "is_configured": true, 00:08:39.314 "data_offset": 2048, 00:08:39.314 "data_size": 63488 00:08:39.314 }, 00:08:39.314 { 00:08:39.314 "name": "BaseBdev3", 00:08:39.314 "uuid": "24116c58-10ee-11ef-ba60-3508ead7bdda", 00:08:39.314 "is_configured": true, 00:08:39.314 "data_offset": 2048, 00:08:39.314 "data_size": 63488 00:08:39.314 } 00:08:39.314 ] 00:08:39.314 }' 00:08:39.314 06:00:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:39.314 06:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:39.574 06:00:47 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:39.574 06:00:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:39.574 06:00:47 
-- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:39.574 06:00:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:39.832 06:00:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:39.832 06:00:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:39.832 06:00:48 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:40.090 [2024-05-13 06:00:48.184827] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:40.090 06:00:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:40.090 06:00:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:40.090 06:00:48 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:40.090 06:00:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:40.090 06:00:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:40.090 06:00:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.090 06:00:48 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:40.348 [2024-05-13 06:00:48.501551] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:40.348 [2024-05-13 06:00:48.501572] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b54da00 name Existed_Raid, state offline 00:08:40.348 06:00:48 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:40.348 06:00:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:40.348 06:00:48 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:40.348 06:00:48 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@287 -- # killprocess 50168 00:08:40.607 06:00:48 -- common/autotest_common.sh@926 -- # '[' -z 50168 ']' 00:08:40.607 06:00:48 -- common/autotest_common.sh@930 -- # kill -0 50168 00:08:40.607 06:00:48 -- common/autotest_common.sh@931 -- # uname 00:08:40.607 06:00:48 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:40.607 06:00:48 -- common/autotest_common.sh@934 -- # tail -1 00:08:40.607 06:00:48 -- common/autotest_common.sh@934 -- # ps -c -o command 50168 00:08:40.607 06:00:48 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:40.607 06:00:48 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:40.607 killing process with pid 50168 00:08:40.607 06:00:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50168' 00:08:40.607 06:00:48 -- common/autotest_common.sh@945 -- # kill 50168 00:08:40.607 [2024-05-13 06:00:48.713311] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.607 [2024-05-13 06:00:48.713343] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.607 06:00:48 -- common/autotest_common.sh@950 -- # wait 50168 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:40.607 00:08:40.607 real 0m7.605s 00:08:40.607 user 0m13.223s 00:08:40.607 sys 0m1.270s 00:08:40.607 06:00:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.607 06:00:48 -- common/autotest_common.sh@10 -- # set 
+x 00:08:40.607 ************************************ 00:08:40.607 END TEST raid_state_function_test_sb 00:08:40.607 ************************************ 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:40.607 06:00:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:40.607 06:00:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.607 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:40.607 ************************************ 00:08:40.607 START TEST raid_superblock_test 00:08:40.607 ************************************ 00:08:40.607 06:00:48 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@357 -- # raid_pid=50404 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:40.607 06:00:48 -- bdev/bdev_raid.sh@358 -- # waitforlisten 50404 /var/tmp/spdk-raid.sock 00:08:40.607 06:00:48 -- common/autotest_common.sh@819 -- # '[' -z 50404 ']' 00:08:40.607 06:00:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:40.607 06:00:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:40.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:40.607 06:00:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:40.866 06:00:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:40.866 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:40.866 [2024-05-13 06:00:48.922782] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
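For orientation, the raid_superblock_test harness traced here starts a standalone bdev_svc app on a private RPC socket and drives it entirely through rpc.py. A minimal by-hand sketch assembled from commands that appear in this log (backgrounding with '&' is an assumption; the script itself waits for the socket via waitforlisten):

  /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  # once the UNIX domain socket accepts RPCs, create a 32 MiB, 512-byte-block malloc bdev:
  /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1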
00:08:40.866 [2024-05-13 06:00:48.923012] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:41.126 EAL: TSC is not safe to use in SMP mode 00:08:41.126 EAL: TSC is not invariant 00:08:41.126 [2024-05-13 06:00:49.339039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.126 [2024-05-13 06:00:49.425571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.126 [2024-05-13 06:00:49.425983] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.126 [2024-05-13 06:00:49.425994] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.695 06:00:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:41.695 06:00:49 -- common/autotest_common.sh@852 -- # return 0 00:08:41.695 06:00:49 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:08:41.695 06:00:49 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:41.695 06:00:49 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:08:41.695 06:00:49 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:08:41.695 06:00:49 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:41.695 06:00:49 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.695 06:00:49 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.695 06:00:49 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.695 06:00:49 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:41.695 malloc1 00:08:41.695 06:00:49 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:41.956 [2024-05-13 06:00:50.125144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:41.956 [2024-05-13 06:00:50.125189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.956 [2024-05-13 06:00:50.125703] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e11780 00:08:41.956 [2024-05-13 06:00:50.125728] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.956 [2024-05-13 06:00:50.126376] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.956 [2024-05-13 06:00:50.126411] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:41.956 pt1 00:08:41.956 06:00:50 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:41.956 06:00:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:41.956 06:00:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:08:41.956 06:00:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:08:41.956 06:00:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:41.956 06:00:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:41.956 06:00:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:41.956 06:00:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:41.956 06:00:50 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:42.216 malloc2 00:08:42.216 06:00:50 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:42.216 [2024-05-13 06:00:50.457217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:42.216 [2024-05-13 06:00:50.457275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.216 [2024-05-13 06:00:50.457297] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e11c80 00:08:42.216 [2024-05-13 06:00:50.457304] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.216 [2024-05-13 06:00:50.457706] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.216 [2024-05-13 06:00:50.457734] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:42.216 pt2 00:08:42.216 06:00:50 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:42.216 06:00:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:42.216 06:00:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:08:42.216 06:00:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:08:42.216 06:00:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:42.216 06:00:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.216 06:00:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.216 06:00:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.216 06:00:50 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:08:42.475 malloc3 00:08:42.475 06:00:50 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:42.734 [2024-05-13 06:00:50.801297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:42.734 [2024-05-13 06:00:50.801338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.734 [2024-05-13 06:00:50.801359] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e12180 00:08:42.734 [2024-05-13 06:00:50.801393] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.734 [2024-05-13 06:00:50.801825] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.734 [2024-05-13 06:00:50.801858] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:42.734 pt3 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:08:42.734 [2024-05-13 06:00:50.973341] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:42.734 [2024-05-13 06:00:50.973690] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.734 [2024-05-13 06:00:50.973712] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:42.734 [2024-05-13 06:00:50.973760] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829e12400 00:08:42.734 [2024-05-13 06:00:50.973767] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:42.734 [2024-05-13 06:00:50.973790] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829e74e20 00:08:42.734 [2024-05-13 06:00:50.973838] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829e12400 00:08:42.734 [2024-05-13 06:00:50.973842] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x829e12400 00:08:42.734 [2024-05-13 06:00:50.973876] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.734 06:00:50 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:42.994 06:00:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:42.994 "name": "raid_bdev1", 00:08:42.994 "uuid": "26b999f3-10ee-11ef-ba60-3508ead7bdda", 00:08:42.994 "strip_size_kb": 64, 00:08:42.994 "state": "online", 00:08:42.994 "raid_level": "concat", 00:08:42.994 "superblock": true, 00:08:42.994 "num_base_bdevs": 3, 00:08:42.994 "num_base_bdevs_discovered": 3, 00:08:42.994 "num_base_bdevs_operational": 3, 00:08:42.994 "base_bdevs_list": [ 00:08:42.994 { 00:08:42.994 "name": "pt1", 00:08:42.994 "uuid": "46069e9e-24dd-985a-ac9d-91dcdd463998", 00:08:42.994 "is_configured": true, 00:08:42.994 "data_offset": 2048, 00:08:42.994 "data_size": 63488 00:08:42.994 }, 00:08:42.994 { 00:08:42.994 "name": "pt2", 00:08:42.994 "uuid": "a40e84d3-0e01-6959-bd2c-16272bb75380", 00:08:42.994 "is_configured": true, 00:08:42.994 "data_offset": 2048, 00:08:42.994 "data_size": 63488 00:08:42.994 }, 00:08:42.994 { 00:08:42.994 "name": "pt3", 00:08:42.994 "uuid": "297562f6-8ab9-8b5e-a2d6-7346f8438555", 00:08:42.994 "is_configured": true, 00:08:42.994 "data_offset": 2048, 00:08:42.994 "data_size": 63488 00:08:42.994 } 00:08:42.994 ] 00:08:42.994 }' 00:08:42.994 06:00:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:42.994 06:00:51 -- common/autotest_common.sh@10 -- # set +x 00:08:43.253 06:00:51 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:43.253 06:00:51 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:08:43.512 [2024-05-13 06:00:51.569479] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.512 06:00:51 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=26b999f3-10ee-11ef-ba60-3508ead7bdda 00:08:43.512 06:00:51 -- bdev/bdev_raid.sh@380 -- # '[' -z 26b999f3-10ee-11ef-ba60-3508ead7bdda ']' 00:08:43.512 06:00:51 -- bdev/bdev_raid.sh@385 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:43.512 [2024-05-13 06:00:51.741490] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.512 [2024-05-13 06:00:51.741506] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.512 [2024-05-13 06:00:51.741521] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.512 [2024-05-13 06:00:51.741531] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.512 [2024-05-13 06:00:51.741534] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829e12400 name raid_bdev1, state offline 00:08:43.512 06:00:51 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:08:43.512 06:00:51 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:43.770 06:00:51 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:08:43.770 06:00:51 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:08:43.770 06:00:51 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.770 06:00:51 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:44.029 06:00:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:44.029 06:00:52 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:44.029 06:00:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:08:44.029 06:00:52 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:08:44.288 06:00:52 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:44.288 06:00:52 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:44.288 06:00:52 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:08:44.288 06:00:52 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:44.288 06:00:52 -- common/autotest_common.sh@640 -- # local es=0 00:08:44.288 06:00:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:44.288 06:00:52 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.288 06:00:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:44.288 06:00:52 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.288 06:00:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:44.288 06:00:52 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.288 06:00:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:44.288 06:00:52 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.288 06:00:52 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:44.288 06:00:52 -- common/autotest_common.sh@643 -- 
# /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:08:44.547 [2024-05-13 06:00:52.733722] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:44.547 [2024-05-13 06:00:52.734169] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:44.547 [2024-05-13 06:00:52.734189] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:44.547 [2024-05-13 06:00:52.734201] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:08:44.547 [2024-05-13 06:00:52.734235] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:08:44.547 [2024-05-13 06:00:52.734244] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:08:44.547 [2024-05-13 06:00:52.734267] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.547 [2024-05-13 06:00:52.734271] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829e12180 name raid_bdev1, state configuring 00:08:44.547 request: 00:08:44.547 { 00:08:44.547 "name": "raid_bdev1", 00:08:44.547 "raid_level": "concat", 00:08:44.547 "base_bdevs": [ 00:08:44.547 "malloc1", 00:08:44.547 "malloc2", 00:08:44.547 "malloc3" 00:08:44.547 ], 00:08:44.547 "superblock": false, 00:08:44.547 "strip_size_kb": 64, 00:08:44.547 "method": "bdev_raid_create", 00:08:44.547 "req_id": 1 00:08:44.547 } 00:08:44.547 Got JSON-RPC error response 00:08:44.547 response: 00:08:44.547 { 00:08:44.547 "code": -17, 00:08:44.547 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:44.547 } 00:08:44.547 06:00:52 -- common/autotest_common.sh@643 -- # es=1 00:08:44.547 06:00:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:44.547 06:00:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:44.547 06:00:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:44.547 06:00:52 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:44.547 06:00:52 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:08:44.806 06:00:52 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:08:44.806 06:00:52 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:08:44.806 06:00:52 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:44.806 [2024-05-13 06:00:53.077796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:44.806 [2024-05-13 06:00:53.077832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.806 [2024-05-13 06:00:53.077855] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e11c80 00:08:44.806 [2024-05-13 06:00:53.077862] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.806 [2024-05-13 06:00:53.078327] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.806 [2024-05-13 06:00:53.078360] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:44.806 [2024-05-13 06:00:53.078377] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:08:44.807 [2024-05-13 
06:00:53.078386] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:44.807 pt1 00:08:44.807 06:00:53 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:44.807 06:00:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:44.807 06:00:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:44.807 06:00:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:44.807 06:00:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:44.807 06:00:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:44.807 06:00:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:44.807 06:00:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:44.807 06:00:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:44.807 06:00:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:44.807 06:00:53 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:44.807 06:00:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.066 06:00:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:45.066 "name": "raid_bdev1", 00:08:45.066 "uuid": "26b999f3-10ee-11ef-ba60-3508ead7bdda", 00:08:45.066 "strip_size_kb": 64, 00:08:45.066 "state": "configuring", 00:08:45.066 "raid_level": "concat", 00:08:45.066 "superblock": true, 00:08:45.066 "num_base_bdevs": 3, 00:08:45.066 "num_base_bdevs_discovered": 1, 00:08:45.066 "num_base_bdevs_operational": 3, 00:08:45.066 "base_bdevs_list": [ 00:08:45.066 { 00:08:45.066 "name": "pt1", 00:08:45.066 "uuid": "46069e9e-24dd-985a-ac9d-91dcdd463998", 00:08:45.066 "is_configured": true, 00:08:45.066 "data_offset": 2048, 00:08:45.066 "data_size": 63488 00:08:45.066 }, 00:08:45.066 { 00:08:45.066 "name": null, 00:08:45.066 "uuid": "a40e84d3-0e01-6959-bd2c-16272bb75380", 00:08:45.066 "is_configured": false, 00:08:45.066 "data_offset": 2048, 00:08:45.066 "data_size": 63488 00:08:45.066 }, 00:08:45.066 { 00:08:45.066 "name": null, 00:08:45.066 "uuid": "297562f6-8ab9-8b5e-a2d6-7346f8438555", 00:08:45.066 "is_configured": false, 00:08:45.066 "data_offset": 2048, 00:08:45.066 "data_size": 63488 00:08:45.066 } 00:08:45.066 ] 00:08:45.066 }' 00:08:45.066 06:00:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:45.066 06:00:53 -- common/autotest_common.sh@10 -- # set +x 00:08:45.325 06:00:53 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:08:45.325 06:00:53 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:45.583 [2024-05-13 06:00:53.669920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:45.583 [2024-05-13 06:00:53.669974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.583 [2024-05-13 06:00:53.669996] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e12680 00:08:45.583 [2024-05-13 06:00:53.670002] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.583 [2024-05-13 06:00:53.670073] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.583 [2024-05-13 06:00:53.670081] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:45.583 [2024-05-13 06:00:53.670094] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev pt2 00:08:45.583 [2024-05-13 06:00:53.670100] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:45.583 pt2 00:08:45.583 06:00:53 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:45.583 [2024-05-13 06:00:53.817947] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:45.583 06:00:53 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:45.583 06:00:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:45.583 06:00:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:45.583 06:00:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:45.583 06:00:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:45.583 06:00:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:45.583 06:00:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:45.583 06:00:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:45.583 06:00:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:45.583 06:00:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:45.583 06:00:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.584 06:00:53 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:45.841 06:00:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:45.841 "name": "raid_bdev1", 00:08:45.841 "uuid": "26b999f3-10ee-11ef-ba60-3508ead7bdda", 00:08:45.841 "strip_size_kb": 64, 00:08:45.841 "state": "configuring", 00:08:45.841 "raid_level": "concat", 00:08:45.841 "superblock": true, 00:08:45.841 "num_base_bdevs": 3, 00:08:45.841 "num_base_bdevs_discovered": 1, 00:08:45.841 "num_base_bdevs_operational": 3, 00:08:45.841 "base_bdevs_list": [ 00:08:45.841 { 00:08:45.841 "name": "pt1", 00:08:45.841 "uuid": "46069e9e-24dd-985a-ac9d-91dcdd463998", 00:08:45.841 "is_configured": true, 00:08:45.841 "data_offset": 2048, 00:08:45.841 "data_size": 63488 00:08:45.841 }, 00:08:45.841 { 00:08:45.841 "name": null, 00:08:45.841 "uuid": "a40e84d3-0e01-6959-bd2c-16272bb75380", 00:08:45.841 "is_configured": false, 00:08:45.841 "data_offset": 2048, 00:08:45.841 "data_size": 63488 00:08:45.841 }, 00:08:45.841 { 00:08:45.841 "name": null, 00:08:45.841 "uuid": "297562f6-8ab9-8b5e-a2d6-7346f8438555", 00:08:45.841 "is_configured": false, 00:08:45.841 "data_offset": 2048, 00:08:45.841 "data_size": 63488 00:08:45.841 } 00:08:45.841 ] 00:08:45.841 }' 00:08:45.841 06:00:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:45.841 06:00:54 -- common/autotest_common.sh@10 -- # set +x 00:08:46.100 06:00:54 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:08:46.100 06:00:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:46.100 06:00:54 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:46.359 [2024-05-13 06:00:54.422076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:46.359 [2024-05-13 06:00:54.422114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.359 [2024-05-13 06:00:54.422149] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e12680 00:08:46.359 [2024-05-13 06:00:54.422156] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.359 [2024-05-13 06:00:54.422222] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.359 [2024-05-13 06:00:54.422229] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:46.359 [2024-05-13 06:00:54.422243] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:08:46.359 [2024-05-13 06:00:54.422248] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:46.359 pt2 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:46.359 [2024-05-13 06:00:54.594113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:46.359 [2024-05-13 06:00:54.594149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.359 [2024-05-13 06:00:54.594164] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x829e12400 00:08:46.359 [2024-05-13 06:00:54.594169] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.359 [2024-05-13 06:00:54.594238] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.359 [2024-05-13 06:00:54.594245] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:46.359 [2024-05-13 06:00:54.594257] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:08:46.359 [2024-05-13 06:00:54.594273] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:46.359 [2024-05-13 06:00:54.594292] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x829e11780 00:08:46.359 [2024-05-13 06:00:54.594295] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:46.359 [2024-05-13 06:00:54.594309] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x829e74e20 00:08:46.359 [2024-05-13 06:00:54.594345] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x829e11780 00:08:46.359 [2024-05-13 06:00:54.594348] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x829e11780 00:08:46.359 [2024-05-13 06:00:54.594362] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.359 pt3 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:46.359 
06:00:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:46.359 06:00:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.618 06:00:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:46.618 "name": "raid_bdev1", 00:08:46.618 "uuid": "26b999f3-10ee-11ef-ba60-3508ead7bdda", 00:08:46.618 "strip_size_kb": 64, 00:08:46.618 "state": "online", 00:08:46.618 "raid_level": "concat", 00:08:46.618 "superblock": true, 00:08:46.618 "num_base_bdevs": 3, 00:08:46.618 "num_base_bdevs_discovered": 3, 00:08:46.618 "num_base_bdevs_operational": 3, 00:08:46.618 "base_bdevs_list": [ 00:08:46.618 { 00:08:46.618 "name": "pt1", 00:08:46.618 "uuid": "46069e9e-24dd-985a-ac9d-91dcdd463998", 00:08:46.618 "is_configured": true, 00:08:46.618 "data_offset": 2048, 00:08:46.618 "data_size": 63488 00:08:46.618 }, 00:08:46.618 { 00:08:46.618 "name": "pt2", 00:08:46.618 "uuid": "a40e84d3-0e01-6959-bd2c-16272bb75380", 00:08:46.618 "is_configured": true, 00:08:46.618 "data_offset": 2048, 00:08:46.618 "data_size": 63488 00:08:46.618 }, 00:08:46.618 { 00:08:46.618 "name": "pt3", 00:08:46.618 "uuid": "297562f6-8ab9-8b5e-a2d6-7346f8438555", 00:08:46.618 "is_configured": true, 00:08:46.618 "data_offset": 2048, 00:08:46.619 "data_size": 63488 00:08:46.619 } 00:08:46.619 ] 00:08:46.619 }' 00:08:46.619 06:00:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:46.619 06:00:54 -- common/autotest_common.sh@10 -- # set +x 00:08:46.877 06:00:55 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:46.877 06:00:55 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:08:47.135 [2024-05-13 06:00:55.198259] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.135 06:00:55 -- bdev/bdev_raid.sh@430 -- # '[' 26b999f3-10ee-11ef-ba60-3508ead7bdda '!=' 26b999f3-10ee-11ef-ba60-3508ead7bdda ']' 00:08:47.135 06:00:55 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:08:47.135 06:00:55 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:47.135 06:00:55 -- bdev/bdev_raid.sh@197 -- # return 1 00:08:47.135 06:00:55 -- bdev/bdev_raid.sh@511 -- # killprocess 50404 00:08:47.135 06:00:55 -- common/autotest_common.sh@926 -- # '[' -z 50404 ']' 00:08:47.135 06:00:55 -- common/autotest_common.sh@930 -- # kill -0 50404 00:08:47.135 06:00:55 -- common/autotest_common.sh@931 -- # uname 00:08:47.135 06:00:55 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:47.135 06:00:55 -- common/autotest_common.sh@934 -- # ps -c -o command 50404 00:08:47.135 06:00:55 -- common/autotest_common.sh@934 -- # tail -1 00:08:47.135 06:00:55 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:47.135 06:00:55 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:47.135 killing process with pid 50404 00:08:47.135 06:00:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50404' 00:08:47.135 06:00:55 -- common/autotest_common.sh@945 -- # kill 50404 00:08:47.135 [2024-05-13 06:00:55.228699] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:47.135 [2024-05-13 06:00:55.228713] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.135 [2024-05-13 06:00:55.228733] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
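The positive path of raid_superblock_test condenses to: wrap each malloc bdev in a passthru bdev carrying a fixed UUID, assemble the three passthru bdevs into a concat raid with a 64 KiB strip and superblock, then verify its state via bdev_raid_get_bdevs. A condensed sketch of those RPCs as traced above (rpc.py abbreviates the full /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py path; the pipe into jq mirrors the script's separate invocations):

  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The negative path then re-issues bdev_raid_create against malloc1-malloc3 directly; because those bdevs already carry the raid superblock, the RPC fails with JSON-RPC error -17, "File exists", which is exactly what the NOT wrapper asserts.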
00:08:47.135 [2024-05-13 06:00:55.228737] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x829e11780 name raid_bdev1, state offline 00:08:47.135 06:00:55 -- common/autotest_common.sh@950 -- # wait 50404 00:08:47.135 [2024-05-13 06:00:55.242675] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.135 06:00:55 -- bdev/bdev_raid.sh@513 -- # return 0 00:08:47.135 00:08:47.135 real 0m6.469s 00:08:47.135 user 0m11.030s 00:08:47.135 sys 0m1.218s 00:08:47.135 06:00:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.135 06:00:55 -- common/autotest_common.sh@10 -- # set +x 00:08:47.135 ************************************ 00:08:47.135 END TEST raid_superblock_test 00:08:47.135 ************************************ 00:08:47.135 06:00:55 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:08:47.135 06:00:55 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:08:47.135 06:00:55 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:47.136 06:00:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:47.136 06:00:55 -- common/autotest_common.sh@10 -- # set +x 00:08:47.136 ************************************ 00:08:47.136 START TEST raid_state_function_test 00:08:47.136 ************************************ 00:08:47.136 06:00:55 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:47.136 06:00:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@226 -- # raid_pid=50585 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50585' 00:08:47.394 Process raid pid: 50585 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@225 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:47.394 06:00:55 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50585 /var/tmp/spdk-raid.sock 00:08:47.394 06:00:55 -- common/autotest_common.sh@819 -- # '[' -z 50585 ']' 00:08:47.394 06:00:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:47.394 06:00:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:47.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:47.394 06:00:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:47.394 06:00:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:47.394 06:00:55 -- common/autotest_common.sh@10 -- # set +x 00:08:47.394 [2024-05-13 06:00:55.458187] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:47.394 [2024-05-13 06:00:55.458449] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:47.653 EAL: TSC is not safe to use in SMP mode 00:08:47.653 EAL: TSC is not invariant 00:08:47.653 [2024-05-13 06:00:55.875567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.653 [2024-05-13 06:00:55.960349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.653 [2024-05-13 06:00:55.960758] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.653 [2024-05-13 06:00:55.960769] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.221 06:00:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:48.221 06:00:56 -- common/autotest_common.sh@852 -- # return 0 00:08:48.221 06:00:56 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:48.221 [2024-05-13 06:00:56.491872] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.221 [2024-05-13 06:00:56.491918] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.221 [2024-05-13 06:00:56.491922] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.221 [2024-05-13 06:00:56.491928] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.221 [2024-05-13 06:00:56.491930] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.221 [2024-05-13 06:00:56.491936] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.221 06:00:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:48.221 06:00:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:48.221 06:00:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:48.221 06:00:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:48.221 06:00:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:48.221 06:00:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:48.221 06:00:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:48.221 06:00:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:48.221 06:00:56 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:08:48.221 06:00:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:48.222 06:00:56 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:48.222 06:00:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.480 06:00:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:48.480 "name": "Existed_Raid", 00:08:48.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.480 "strip_size_kb": 0, 00:08:48.480 "state": "configuring", 00:08:48.480 "raid_level": "raid1", 00:08:48.480 "superblock": false, 00:08:48.480 "num_base_bdevs": 3, 00:08:48.480 "num_base_bdevs_discovered": 0, 00:08:48.480 "num_base_bdevs_operational": 3, 00:08:48.480 "base_bdevs_list": [ 00:08:48.480 { 00:08:48.480 "name": "BaseBdev1", 00:08:48.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.480 "is_configured": false, 00:08:48.480 "data_offset": 0, 00:08:48.480 "data_size": 0 00:08:48.480 }, 00:08:48.480 { 00:08:48.480 "name": "BaseBdev2", 00:08:48.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.480 "is_configured": false, 00:08:48.480 "data_offset": 0, 00:08:48.480 "data_size": 0 00:08:48.480 }, 00:08:48.480 { 00:08:48.480 "name": "BaseBdev3", 00:08:48.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.480 "is_configured": false, 00:08:48.480 "data_offset": 0, 00:08:48.480 "data_size": 0 00:08:48.480 } 00:08:48.480 ] 00:08:48.480 }' 00:08:48.480 06:00:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:48.480 06:00:56 -- common/autotest_common.sh@10 -- # set +x 00:08:48.739 06:00:56 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:48.997 [2024-05-13 06:00:57.099974] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.997 [2024-05-13 06:00:57.099992] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b423500 name Existed_Raid, state configuring 00:08:48.997 06:00:57 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:48.997 [2024-05-13 06:00:57.272012] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.997 [2024-05-13 06:00:57.272045] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.997 [2024-05-13 06:00:57.272048] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.997 [2024-05-13 06:00:57.272054] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.997 [2024-05-13 06:00:57.272056] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.997 [2024-05-13 06:00:57.272061] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.997 06:00:57 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.256 [2024-05-13 06:00:57.448798] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.256 BaseBdev1 00:08:49.256 06:00:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:49.256 06:00:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:49.256 06:00:57 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:49.256 06:00:57 -- common/autotest_common.sh@889 -- # local i 00:08:49.256 06:00:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:49.256 06:00:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:49.256 06:00:57 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:49.545 06:00:57 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.546 [ 00:08:49.546 { 00:08:49.546 "name": "BaseBdev1", 00:08:49.546 "aliases": [ 00:08:49.546 "2a9590b0-10ee-11ef-ba60-3508ead7bdda" 00:08:49.546 ], 00:08:49.546 "product_name": "Malloc disk", 00:08:49.546 "block_size": 512, 00:08:49.546 "num_blocks": 65536, 00:08:49.546 "uuid": "2a9590b0-10ee-11ef-ba60-3508ead7bdda", 00:08:49.546 "assigned_rate_limits": { 00:08:49.546 "rw_ios_per_sec": 0, 00:08:49.546 "rw_mbytes_per_sec": 0, 00:08:49.546 "r_mbytes_per_sec": 0, 00:08:49.546 "w_mbytes_per_sec": 0 00:08:49.546 }, 00:08:49.546 "claimed": true, 00:08:49.546 "claim_type": "exclusive_write", 00:08:49.546 "zoned": false, 00:08:49.546 "supported_io_types": { 00:08:49.546 "read": true, 00:08:49.546 "write": true, 00:08:49.546 "unmap": true, 00:08:49.546 "write_zeroes": true, 00:08:49.546 "flush": true, 00:08:49.546 "reset": true, 00:08:49.546 "compare": false, 00:08:49.546 "compare_and_write": false, 00:08:49.546 "abort": true, 00:08:49.546 "nvme_admin": false, 00:08:49.546 "nvme_io": false 00:08:49.546 }, 00:08:49.546 "memory_domains": [ 00:08:49.546 { 00:08:49.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.546 "dma_device_type": 2 00:08:49.546 } 00:08:49.546 ], 00:08:49.546 "driver_specific": {} 00:08:49.546 } 00:08:49.546 ] 00:08:49.546 06:00:57 -- common/autotest_common.sh@895 -- # return 0 00:08:49.546 06:00:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:49.546 06:00:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:49.546 06:00:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:49.546 06:00:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:49.546 06:00:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:49.546 06:00:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:49.546 06:00:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:49.546 06:00:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:49.546 06:00:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:49.546 06:00:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:49.546 06:00:57 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:49.546 06:00:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.815 06:00:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:49.815 "name": "Existed_Raid", 00:08:49.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.815 "strip_size_kb": 0, 00:08:49.815 "state": "configuring", 00:08:49.815 "raid_level": "raid1", 00:08:49.815 "superblock": false, 00:08:49.815 "num_base_bdevs": 3, 00:08:49.815 "num_base_bdevs_discovered": 1, 00:08:49.815 "num_base_bdevs_operational": 3, 00:08:49.815 "base_bdevs_list": [ 00:08:49.815 { 00:08:49.815 "name": "BaseBdev1", 00:08:49.815 "uuid": 
"2a9590b0-10ee-11ef-ba60-3508ead7bdda", 00:08:49.815 "is_configured": true, 00:08:49.815 "data_offset": 0, 00:08:49.815 "data_size": 65536 00:08:49.815 }, 00:08:49.815 { 00:08:49.815 "name": "BaseBdev2", 00:08:49.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.815 "is_configured": false, 00:08:49.815 "data_offset": 0, 00:08:49.815 "data_size": 0 00:08:49.815 }, 00:08:49.815 { 00:08:49.815 "name": "BaseBdev3", 00:08:49.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.815 "is_configured": false, 00:08:49.815 "data_offset": 0, 00:08:49.815 "data_size": 0 00:08:49.815 } 00:08:49.815 ] 00:08:49.815 }' 00:08:49.815 06:00:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:49.815 06:00:57 -- common/autotest_common.sh@10 -- # set +x 00:08:50.075 06:00:58 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:50.075 [2024-05-13 06:00:58.380223] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.075 [2024-05-13 06:00:58.380243] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b423500 name Existed_Raid, state configuring 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:50.335 [2024-05-13 06:00:58.552264] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.335 [2024-05-13 06:00:58.552884] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.335 [2024-05-13 06:00:58.552927] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.335 [2024-05-13 06:00:58.552931] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:50.335 [2024-05-13 06:00:58.552937] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:50.335 06:00:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.595 06:00:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:50.595 "name": "Existed_Raid", 00:08:50.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.595 "strip_size_kb": 0, 00:08:50.595 "state": "configuring", 
00:08:50.595 "raid_level": "raid1", 00:08:50.595 "superblock": false, 00:08:50.595 "num_base_bdevs": 3, 00:08:50.595 "num_base_bdevs_discovered": 1, 00:08:50.595 "num_base_bdevs_operational": 3, 00:08:50.595 "base_bdevs_list": [ 00:08:50.595 { 00:08:50.595 "name": "BaseBdev1", 00:08:50.595 "uuid": "2a9590b0-10ee-11ef-ba60-3508ead7bdda", 00:08:50.595 "is_configured": true, 00:08:50.595 "data_offset": 0, 00:08:50.595 "data_size": 65536 00:08:50.595 }, 00:08:50.595 { 00:08:50.595 "name": "BaseBdev2", 00:08:50.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.595 "is_configured": false, 00:08:50.595 "data_offset": 0, 00:08:50.595 "data_size": 0 00:08:50.595 }, 00:08:50.595 { 00:08:50.595 "name": "BaseBdev3", 00:08:50.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.595 "is_configured": false, 00:08:50.595 "data_offset": 0, 00:08:50.595 "data_size": 0 00:08:50.595 } 00:08:50.595 ] 00:08:50.595 }' 00:08:50.595 06:00:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:50.595 06:00:58 -- common/autotest_common.sh@10 -- # set +x 00:08:50.853 06:00:58 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.853 [2024-05-13 06:00:59.156489] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.853 BaseBdev2 00:08:51.110 06:00:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:51.110 06:00:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:08:51.110 06:00:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:51.110 06:00:59 -- common/autotest_common.sh@889 -- # local i 00:08:51.110 06:00:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:51.110 06:00:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:51.110 06:00:59 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:51.111 06:00:59 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:51.369 [ 00:08:51.369 { 00:08:51.369 "name": "BaseBdev2", 00:08:51.369 "aliases": [ 00:08:51.369 "2b9a3cd5-10ee-11ef-ba60-3508ead7bdda" 00:08:51.369 ], 00:08:51.369 "product_name": "Malloc disk", 00:08:51.369 "block_size": 512, 00:08:51.369 "num_blocks": 65536, 00:08:51.370 "uuid": "2b9a3cd5-10ee-11ef-ba60-3508ead7bdda", 00:08:51.370 "assigned_rate_limits": { 00:08:51.370 "rw_ios_per_sec": 0, 00:08:51.370 "rw_mbytes_per_sec": 0, 00:08:51.370 "r_mbytes_per_sec": 0, 00:08:51.370 "w_mbytes_per_sec": 0 00:08:51.370 }, 00:08:51.370 "claimed": true, 00:08:51.370 "claim_type": "exclusive_write", 00:08:51.370 "zoned": false, 00:08:51.370 "supported_io_types": { 00:08:51.370 "read": true, 00:08:51.370 "write": true, 00:08:51.370 "unmap": true, 00:08:51.370 "write_zeroes": true, 00:08:51.370 "flush": true, 00:08:51.370 "reset": true, 00:08:51.370 "compare": false, 00:08:51.370 "compare_and_write": false, 00:08:51.370 "abort": true, 00:08:51.370 "nvme_admin": false, 00:08:51.370 "nvme_io": false 00:08:51.370 }, 00:08:51.370 "memory_domains": [ 00:08:51.370 { 00:08:51.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.370 "dma_device_type": 2 00:08:51.370 } 00:08:51.370 ], 00:08:51.370 "driver_specific": {} 00:08:51.370 } 00:08:51.370 ] 00:08:51.370 06:00:59 -- common/autotest_common.sh@895 -- # return 0 00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 
00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.370 06:00:59 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:51.629 06:00:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:51.629 "name": "Existed_Raid", 00:08:51.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.629 "strip_size_kb": 0, 00:08:51.629 "state": "configuring", 00:08:51.629 "raid_level": "raid1", 00:08:51.629 "superblock": false, 00:08:51.629 "num_base_bdevs": 3, 00:08:51.629 "num_base_bdevs_discovered": 2, 00:08:51.629 "num_base_bdevs_operational": 3, 00:08:51.629 "base_bdevs_list": [ 00:08:51.629 { 00:08:51.629 "name": "BaseBdev1", 00:08:51.629 "uuid": "2a9590b0-10ee-11ef-ba60-3508ead7bdda", 00:08:51.629 "is_configured": true, 00:08:51.629 "data_offset": 0, 00:08:51.629 "data_size": 65536 00:08:51.629 }, 00:08:51.629 { 00:08:51.629 "name": "BaseBdev2", 00:08:51.629 "uuid": "2b9a3cd5-10ee-11ef-ba60-3508ead7bdda", 00:08:51.629 "is_configured": true, 00:08:51.629 "data_offset": 0, 00:08:51.629 "data_size": 65536 00:08:51.629 }, 00:08:51.629 { 00:08:51.629 "name": "BaseBdev3", 00:08:51.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.629 "is_configured": false, 00:08:51.629 "data_offset": 0, 00:08:51.629 "data_size": 0 00:08:51.629 } 00:08:51.629 ] 00:08:51.629 }' 00:08:51.629 06:00:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:51.629 06:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:51.629 06:00:59 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:51.888 [2024-05-13 06:01:00.072627] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.888 [2024-05-13 06:01:00.072646] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b423a00 00:08:51.888 [2024-05-13 06:01:00.072649] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:51.888 [2024-05-13 06:01:00.072664] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b486ec0 00:08:51.888 [2024-05-13 06:01:00.072735] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b423a00 00:08:51.888 [2024-05-13 06:01:00.072738] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b423a00 00:08:51.888 [2024-05-13 06:01:00.072759] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.888 BaseBdev3 00:08:51.888 06:01:00 -- 
bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:51.888 06:01:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:51.888 06:01:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:51.888 06:01:00 -- common/autotest_common.sh@889 -- # local i 00:08:51.888 06:01:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:51.888 06:01:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:51.888 06:01:00 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:52.145 06:01:00 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:52.145 [ 00:08:52.145 { 00:08:52.145 "name": "BaseBdev3", 00:08:52.145 "aliases": [ 00:08:52.145 "2c2608d2-10ee-11ef-ba60-3508ead7bdda" 00:08:52.145 ], 00:08:52.145 "product_name": "Malloc disk", 00:08:52.145 "block_size": 512, 00:08:52.145 "num_blocks": 65536, 00:08:52.145 "uuid": "2c2608d2-10ee-11ef-ba60-3508ead7bdda", 00:08:52.145 "assigned_rate_limits": { 00:08:52.145 "rw_ios_per_sec": 0, 00:08:52.145 "rw_mbytes_per_sec": 0, 00:08:52.145 "r_mbytes_per_sec": 0, 00:08:52.145 "w_mbytes_per_sec": 0 00:08:52.145 }, 00:08:52.145 "claimed": true, 00:08:52.145 "claim_type": "exclusive_write", 00:08:52.145 "zoned": false, 00:08:52.145 "supported_io_types": { 00:08:52.145 "read": true, 00:08:52.145 "write": true, 00:08:52.145 "unmap": true, 00:08:52.145 "write_zeroes": true, 00:08:52.145 "flush": true, 00:08:52.145 "reset": true, 00:08:52.145 "compare": false, 00:08:52.145 "compare_and_write": false, 00:08:52.145 "abort": true, 00:08:52.145 "nvme_admin": false, 00:08:52.145 "nvme_io": false 00:08:52.145 }, 00:08:52.145 "memory_domains": [ 00:08:52.145 { 00:08:52.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.145 "dma_device_type": 2 00:08:52.145 } 00:08:52.145 ], 00:08:52.145 "driver_specific": {} 00:08:52.145 } 00:08:52.145 ] 00:08:52.145 06:01:00 -- common/autotest_common.sh@895 -- # return 0 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:52.145 06:01:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.404 06:01:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:52.404 "name": "Existed_Raid", 00:08:52.404 "uuid": "2c260c06-10ee-11ef-ba60-3508ead7bdda", 00:08:52.404 "strip_size_kb": 0, 00:08:52.404 "state": "online", 00:08:52.404 "raid_level": "raid1", 00:08:52.404 
"superblock": false, 00:08:52.404 "num_base_bdevs": 3, 00:08:52.404 "num_base_bdevs_discovered": 3, 00:08:52.404 "num_base_bdevs_operational": 3, 00:08:52.404 "base_bdevs_list": [ 00:08:52.404 { 00:08:52.404 "name": "BaseBdev1", 00:08:52.404 "uuid": "2a9590b0-10ee-11ef-ba60-3508ead7bdda", 00:08:52.404 "is_configured": true, 00:08:52.404 "data_offset": 0, 00:08:52.404 "data_size": 65536 00:08:52.404 }, 00:08:52.404 { 00:08:52.404 "name": "BaseBdev2", 00:08:52.404 "uuid": "2b9a3cd5-10ee-11ef-ba60-3508ead7bdda", 00:08:52.404 "is_configured": true, 00:08:52.404 "data_offset": 0, 00:08:52.404 "data_size": 65536 00:08:52.404 }, 00:08:52.404 { 00:08:52.404 "name": "BaseBdev3", 00:08:52.404 "uuid": "2c2608d2-10ee-11ef-ba60-3508ead7bdda", 00:08:52.404 "is_configured": true, 00:08:52.404 "data_offset": 0, 00:08:52.404 "data_size": 65536 00:08:52.404 } 00:08:52.404 ] 00:08:52.404 }' 00:08:52.404 06:01:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:52.404 06:01:00 -- common/autotest_common.sh@10 -- # set +x 00:08:52.663 06:01:00 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:52.922 [2024-05-13 06:01:01.004761] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@196 -- # return 0 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:52.922 "name": "Existed_Raid", 00:08:52.922 "uuid": "2c260c06-10ee-11ef-ba60-3508ead7bdda", 00:08:52.922 "strip_size_kb": 0, 00:08:52.922 "state": "online", 00:08:52.922 "raid_level": "raid1", 00:08:52.922 "superblock": false, 00:08:52.922 "num_base_bdevs": 3, 00:08:52.922 "num_base_bdevs_discovered": 2, 00:08:52.922 "num_base_bdevs_operational": 2, 00:08:52.922 "base_bdevs_list": [ 00:08:52.922 { 00:08:52.922 "name": null, 00:08:52.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.922 "is_configured": false, 00:08:52.922 "data_offset": 0, 00:08:52.922 "data_size": 65536 00:08:52.922 }, 00:08:52.922 { 00:08:52.922 "name": "BaseBdev2", 00:08:52.922 "uuid": "2b9a3cd5-10ee-11ef-ba60-3508ead7bdda", 00:08:52.922 "is_configured": true, 00:08:52.922 "data_offset": 0, 00:08:52.922 "data_size": 
65536 00:08:52.922 }, 00:08:52.922 { 00:08:52.922 "name": "BaseBdev3", 00:08:52.922 "uuid": "2c2608d2-10ee-11ef-ba60-3508ead7bdda", 00:08:52.922 "is_configured": true, 00:08:52.922 "data_offset": 0, 00:08:52.922 "data_size": 65536 00:08:52.922 } 00:08:52.922 ] 00:08:52.922 }' 00:08:52.922 06:01:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:52.922 06:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:53.181 06:01:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:08:53.181 06:01:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:53.181 06:01:01 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:53.181 06:01:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:53.440 06:01:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:53.440 06:01:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:53.440 06:01:01 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:53.699 [2024-05-13 06:01:01.765591] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:53.699 06:01:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:53.699 06:01:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:53.699 06:01:01 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:53.699 06:01:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:08:53.699 06:01:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:08:53.699 06:01:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:53.699 06:01:01 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:08:53.959 [2024-05-13 06:01:02.090303] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:53.959 [2024-05-13 06:01:02.090318] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.959 [2024-05-13 06:01:02.090326] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.959 [2024-05-13 06:01:02.095008] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.959 [2024-05-13 06:01:02.095025] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b423a00 name Existed_Raid, state offline 00:08:53.959 06:01:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:08:53.959 06:01:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:08:53.959 06:01:02 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:53.959 06:01:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@287 -- # killprocess 50585 00:08:54.225 06:01:02 -- common/autotest_common.sh@926 -- # '[' -z 50585 ']' 00:08:54.225 06:01:02 -- common/autotest_common.sh@930 -- # kill -0 50585 00:08:54.225 06:01:02 -- common/autotest_common.sh@931 -- # uname 00:08:54.225 06:01:02 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:08:54.225 06:01:02 -- common/autotest_common.sh@934 -- # ps -c -o command 50585 00:08:54.225 06:01:02 -- 
common/autotest_common.sh@934 -- # tail -1 00:08:54.225 06:01:02 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:08:54.225 06:01:02 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:08:54.225 killing process with pid 50585 00:08:54.225 06:01:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50585' 00:08:54.225 06:01:02 -- common/autotest_common.sh@945 -- # kill 50585 00:08:54.225 [2024-05-13 06:01:02.289624] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.225 [2024-05-13 06:01:02.289658] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.225 06:01:02 -- common/autotest_common.sh@950 -- # wait 50585 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:08:54.225 00:08:54.225 real 0m6.987s 00:08:54.225 user 0m11.947s 00:08:54.225 sys 0m1.348s 00:08:54.225 06:01:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.225 06:01:02 -- common/autotest_common.sh@10 -- # set +x 00:08:54.225 ************************************ 00:08:54.225 END TEST raid_state_function_test 00:08:54.225 ************************************ 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:08:54.225 06:01:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:54.225 06:01:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:54.225 06:01:02 -- common/autotest_common.sh@10 -- # set +x 00:08:54.225 ************************************ 00:08:54.225 START TEST raid_state_function_test_sb 00:08:54.225 ************************************ 00:08:54.225 06:01:02 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@220 -- # 
superblock_create_arg=-s 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=50818 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50818' 00:08:54.225 Process raid pid: 50818 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:54.225 06:01:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50818 /var/tmp/spdk-raid.sock 00:08:54.225 06:01:02 -- common/autotest_common.sh@819 -- # '[' -z 50818 ']' 00:08:54.225 06:01:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:54.225 06:01:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:54.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:54.225 06:01:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:54.225 06:01:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:54.225 06:01:02 -- common/autotest_common.sh@10 -- # set +x 00:08:54.225 [2024-05-13 06:01:02.503413] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:54.225 [2024-05-13 06:01:02.503692] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:08:54.791 EAL: TSC is not safe to use in SMP mode 00:08:54.791 EAL: TSC is not invariant 00:08:54.791 [2024-05-13 06:01:02.920683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.791 [2024-05-13 06:01:03.006658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.791 [2024-05-13 06:01:03.007070] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.791 [2024-05-13 06:01:03.007080] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.358 06:01:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:55.358 06:01:03 -- common/autotest_common.sh@852 -- # return 0 00:08:55.358 06:01:03 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:55.358 [2024-05-13 06:01:03.542190] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:55.358 [2024-05-13 06:01:03.542252] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:55.358 [2024-05-13 06:01:03.542256] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.358 [2024-05-13 06:01:03.542262] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.358 [2024-05-13 06:01:03.542265] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.358 [2024-05-13 06:01:03.542270] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.358 06:01:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:55.358 06:01:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:55.358 06:01:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:55.358 06:01:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:55.358 06:01:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:55.358 06:01:03 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:55.358 06:01:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:55.358 06:01:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:55.358 06:01:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:55.358 06:01:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:55.358 06:01:03 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:55.358 06:01:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.616 06:01:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:55.616 "name": "Existed_Raid", 00:08:55.616 "uuid": "2e377499-10ee-11ef-ba60-3508ead7bdda", 00:08:55.616 "strip_size_kb": 0, 00:08:55.616 "state": "configuring", 00:08:55.616 "raid_level": "raid1", 00:08:55.616 "superblock": true, 00:08:55.616 "num_base_bdevs": 3, 00:08:55.616 "num_base_bdevs_discovered": 0, 00:08:55.616 "num_base_bdevs_operational": 3, 00:08:55.616 "base_bdevs_list": [ 00:08:55.616 { 00:08:55.616 "name": "BaseBdev1", 00:08:55.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.616 "is_configured": false, 00:08:55.616 "data_offset": 0, 00:08:55.616 "data_size": 0 00:08:55.616 }, 00:08:55.616 { 00:08:55.616 "name": "BaseBdev2", 00:08:55.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.616 "is_configured": false, 00:08:55.616 "data_offset": 0, 00:08:55.616 "data_size": 0 00:08:55.616 }, 00:08:55.616 { 00:08:55.616 "name": "BaseBdev3", 00:08:55.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.616 "is_configured": false, 00:08:55.616 "data_offset": 0, 00:08:55.616 "data_size": 0 00:08:55.616 } 00:08:55.616 ] 00:08:55.616 }' 00:08:55.616 06:01:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:55.616 06:01:03 -- common/autotest_common.sh@10 -- # set +x 00:08:55.875 06:01:03 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:55.875 [2024-05-13 06:01:04.126268] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.875 [2024-05-13 06:01:04.126285] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b9ba500 name Existed_Raid, state configuring 00:08:55.875 06:01:04 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:56.133 [2024-05-13 06:01:04.298303] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:56.133 [2024-05-13 06:01:04.298338] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:56.133 [2024-05-13 06:01:04.298341] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:56.133 [2024-05-13 06:01:04.298346] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:56.133 [2024-05-13 06:01:04.298349] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:56.133 [2024-05-13 06:01:04.298354] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:56.133 06:01:04 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:56.424 [2024-05-13 06:01:04.463068] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:56.424 BaseBdev1 00:08:56.424 06:01:04 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:08:56.424 06:01:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:56.424 06:01:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:56.424 06:01:04 -- common/autotest_common.sh@889 -- # local i 00:08:56.424 06:01:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:56.424 06:01:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:56.424 06:01:04 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:56.424 06:01:04 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:56.682 [ 00:08:56.682 { 00:08:56.682 "name": "BaseBdev1", 00:08:56.682 "aliases": [ 00:08:56.682 "2ec3dbe3-10ee-11ef-ba60-3508ead7bdda" 00:08:56.682 ], 00:08:56.682 "product_name": "Malloc disk", 00:08:56.682 "block_size": 512, 00:08:56.682 "num_blocks": 65536, 00:08:56.682 "uuid": "2ec3dbe3-10ee-11ef-ba60-3508ead7bdda", 00:08:56.682 "assigned_rate_limits": { 00:08:56.682 "rw_ios_per_sec": 0, 00:08:56.682 "rw_mbytes_per_sec": 0, 00:08:56.682 "r_mbytes_per_sec": 0, 00:08:56.682 "w_mbytes_per_sec": 0 00:08:56.683 }, 00:08:56.683 "claimed": true, 00:08:56.683 "claim_type": "exclusive_write", 00:08:56.683 "zoned": false, 00:08:56.683 "supported_io_types": { 00:08:56.683 "read": true, 00:08:56.683 "write": true, 00:08:56.683 "unmap": true, 00:08:56.683 "write_zeroes": true, 00:08:56.683 "flush": true, 00:08:56.683 "reset": true, 00:08:56.683 "compare": false, 00:08:56.683 "compare_and_write": false, 00:08:56.683 "abort": true, 00:08:56.683 "nvme_admin": false, 00:08:56.683 "nvme_io": false 00:08:56.683 }, 00:08:56.683 "memory_domains": [ 00:08:56.683 { 00:08:56.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.683 "dma_device_type": 2 00:08:56.683 } 00:08:56.683 ], 00:08:56.683 "driver_specific": {} 00:08:56.683 } 00:08:56.683 ] 00:08:56.683 06:01:04 -- common/autotest_common.sh@895 -- # return 0 00:08:56.683 06:01:04 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:56.683 06:01:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:56.683 06:01:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:56.683 06:01:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:56.683 06:01:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:56.683 06:01:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:56.683 06:01:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:56.683 06:01:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:56.683 06:01:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:56.683 06:01:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:56.683 06:01:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.683 06:01:04 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:56.942 06:01:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:56.942 "name": "Existed_Raid", 00:08:56.942 "uuid": "2eaad46d-10ee-11ef-ba60-3508ead7bdda", 00:08:56.942 "strip_size_kb": 0, 00:08:56.942 "state": "configuring", 00:08:56.942 "raid_level": "raid1", 
00:08:56.942 "superblock": true, 00:08:56.942 "num_base_bdevs": 3, 00:08:56.942 "num_base_bdevs_discovered": 1, 00:08:56.942 "num_base_bdevs_operational": 3, 00:08:56.942 "base_bdevs_list": [ 00:08:56.942 { 00:08:56.942 "name": "BaseBdev1", 00:08:56.942 "uuid": "2ec3dbe3-10ee-11ef-ba60-3508ead7bdda", 00:08:56.942 "is_configured": true, 00:08:56.942 "data_offset": 2048, 00:08:56.942 "data_size": 63488 00:08:56.942 }, 00:08:56.942 { 00:08:56.942 "name": "BaseBdev2", 00:08:56.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.942 "is_configured": false, 00:08:56.942 "data_offset": 0, 00:08:56.942 "data_size": 0 00:08:56.942 }, 00:08:56.942 { 00:08:56.942 "name": "BaseBdev3", 00:08:56.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.942 "is_configured": false, 00:08:56.942 "data_offset": 0, 00:08:56.942 "data_size": 0 00:08:56.942 } 00:08:56.942 ] 00:08:56.942 }' 00:08:56.942 06:01:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:56.942 06:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:56.942 06:01:05 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:57.201 [2024-05-13 06:01:05.406485] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.201 [2024-05-13 06:01:05.406506] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b9ba500 name Existed_Raid, state configuring 00:08:57.201 06:01:05 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:08:57.201 06:01:05 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:57.460 06:01:05 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.460 BaseBdev1 00:08:57.460 06:01:05 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:08:57.460 06:01:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:08:57.460 06:01:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:57.460 06:01:05 -- common/autotest_common.sh@889 -- # local i 00:08:57.460 06:01:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:57.460 06:01:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:57.460 06:01:05 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:57.719 06:01:05 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.978 [ 00:08:57.978 { 00:08:57.978 "name": "BaseBdev1", 00:08:57.978 "aliases": [ 00:08:57.978 "2f88836f-10ee-11ef-ba60-3508ead7bdda" 00:08:57.978 ], 00:08:57.978 "product_name": "Malloc disk", 00:08:57.978 "block_size": 512, 00:08:57.978 "num_blocks": 65536, 00:08:57.978 "uuid": "2f88836f-10ee-11ef-ba60-3508ead7bdda", 00:08:57.978 "assigned_rate_limits": { 00:08:57.978 "rw_ios_per_sec": 0, 00:08:57.978 "rw_mbytes_per_sec": 0, 00:08:57.978 "r_mbytes_per_sec": 0, 00:08:57.978 "w_mbytes_per_sec": 0 00:08:57.978 }, 00:08:57.978 "claimed": false, 00:08:57.978 "zoned": false, 00:08:57.978 "supported_io_types": { 00:08:57.978 "read": true, 00:08:57.978 "write": true, 00:08:57.978 "unmap": true, 00:08:57.978 "write_zeroes": true, 00:08:57.978 "flush": true, 00:08:57.978 "reset": true, 00:08:57.978 "compare": false, 00:08:57.978 "compare_and_write": false, 00:08:57.978 "abort": 
true, 00:08:57.978 "nvme_admin": false, 00:08:57.978 "nvme_io": false 00:08:57.978 }, 00:08:57.978 "memory_domains": [ 00:08:57.978 { 00:08:57.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.978 "dma_device_type": 2 00:08:57.978 } 00:08:57.978 ], 00:08:57.978 "driver_specific": {} 00:08:57.978 } 00:08:57.978 ] 00:08:57.978 06:01:06 -- common/autotest_common.sh@895 -- # return 0 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:57.978 [2024-05-13 06:01:06.219213] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.978 [2024-05-13 06:01:06.219617] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.978 [2024-05-13 06:01:06.219660] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.978 [2024-05-13 06:01:06.219664] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.978 [2024-05-13 06:01:06.219671] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:57.978 06:01:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.238 06:01:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:58.238 "name": "Existed_Raid", 00:08:58.238 "uuid": "2fcfefdd-10ee-11ef-ba60-3508ead7bdda", 00:08:58.238 "strip_size_kb": 0, 00:08:58.238 "state": "configuring", 00:08:58.238 "raid_level": "raid1", 00:08:58.238 "superblock": true, 00:08:58.238 "num_base_bdevs": 3, 00:08:58.238 "num_base_bdevs_discovered": 1, 00:08:58.238 "num_base_bdevs_operational": 3, 00:08:58.238 "base_bdevs_list": [ 00:08:58.238 { 00:08:58.238 "name": "BaseBdev1", 00:08:58.238 "uuid": "2f88836f-10ee-11ef-ba60-3508ead7bdda", 00:08:58.238 "is_configured": true, 00:08:58.238 "data_offset": 2048, 00:08:58.238 "data_size": 63488 00:08:58.238 }, 00:08:58.238 { 00:08:58.238 "name": "BaseBdev2", 00:08:58.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.238 "is_configured": false, 00:08:58.238 "data_offset": 0, 00:08:58.238 "data_size": 0 00:08:58.238 }, 00:08:58.238 { 00:08:58.238 "name": "BaseBdev3", 00:08:58.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.238 "is_configured": false, 00:08:58.238 "data_offset": 0, 00:08:58.238 
"data_size": 0 00:08:58.238 } 00:08:58.238 ] 00:08:58.238 }' 00:08:58.238 06:01:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:58.238 06:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:58.497 06:01:06 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.755 [2024-05-13 06:01:06.819394] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.755 BaseBdev2 00:08:58.755 06:01:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:08:58.755 06:01:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:08:58.755 06:01:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:58.755 06:01:06 -- common/autotest_common.sh@889 -- # local i 00:08:58.755 06:01:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:58.755 06:01:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:58.755 06:01:06 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:58.755 06:01:07 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:59.014 [ 00:08:59.014 { 00:08:59.014 "name": "BaseBdev2", 00:08:59.014 "aliases": [ 00:08:59.014 "302b8136-10ee-11ef-ba60-3508ead7bdda" 00:08:59.014 ], 00:08:59.014 "product_name": "Malloc disk", 00:08:59.014 "block_size": 512, 00:08:59.014 "num_blocks": 65536, 00:08:59.014 "uuid": "302b8136-10ee-11ef-ba60-3508ead7bdda", 00:08:59.014 "assigned_rate_limits": { 00:08:59.014 "rw_ios_per_sec": 0, 00:08:59.014 "rw_mbytes_per_sec": 0, 00:08:59.014 "r_mbytes_per_sec": 0, 00:08:59.014 "w_mbytes_per_sec": 0 00:08:59.014 }, 00:08:59.014 "claimed": true, 00:08:59.014 "claim_type": "exclusive_write", 00:08:59.014 "zoned": false, 00:08:59.014 "supported_io_types": { 00:08:59.014 "read": true, 00:08:59.014 "write": true, 00:08:59.014 "unmap": true, 00:08:59.014 "write_zeroes": true, 00:08:59.014 "flush": true, 00:08:59.014 "reset": true, 00:08:59.014 "compare": false, 00:08:59.014 "compare_and_write": false, 00:08:59.014 "abort": true, 00:08:59.014 "nvme_admin": false, 00:08:59.014 "nvme_io": false 00:08:59.014 }, 00:08:59.014 "memory_domains": [ 00:08:59.014 { 00:08:59.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.014 "dma_device_type": 2 00:08:59.014 } 00:08:59.014 ], 00:08:59.014 "driver_specific": {} 00:08:59.014 } 00:08:59.014 ] 00:08:59.014 06:01:07 -- common/autotest_common.sh@895 -- # return 0 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@125 -- # 
local tmp 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:59.014 06:01:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.273 06:01:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:08:59.273 "name": "Existed_Raid", 00:08:59.273 "uuid": "2fcfefdd-10ee-11ef-ba60-3508ead7bdda", 00:08:59.273 "strip_size_kb": 0, 00:08:59.273 "state": "configuring", 00:08:59.273 "raid_level": "raid1", 00:08:59.273 "superblock": true, 00:08:59.273 "num_base_bdevs": 3, 00:08:59.273 "num_base_bdevs_discovered": 2, 00:08:59.273 "num_base_bdevs_operational": 3, 00:08:59.273 "base_bdevs_list": [ 00:08:59.273 { 00:08:59.273 "name": "BaseBdev1", 00:08:59.273 "uuid": "2f88836f-10ee-11ef-ba60-3508ead7bdda", 00:08:59.273 "is_configured": true, 00:08:59.273 "data_offset": 2048, 00:08:59.273 "data_size": 63488 00:08:59.273 }, 00:08:59.273 { 00:08:59.273 "name": "BaseBdev2", 00:08:59.273 "uuid": "302b8136-10ee-11ef-ba60-3508ead7bdda", 00:08:59.273 "is_configured": true, 00:08:59.273 "data_offset": 2048, 00:08:59.273 "data_size": 63488 00:08:59.273 }, 00:08:59.273 { 00:08:59.273 "name": "BaseBdev3", 00:08:59.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.273 "is_configured": false, 00:08:59.273 "data_offset": 0, 00:08:59.273 "data_size": 0 00:08:59.273 } 00:08:59.273 ] 00:08:59.273 }' 00:08:59.273 06:01:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:08:59.273 06:01:07 -- common/autotest_common.sh@10 -- # set +x 00:08:59.531 06:01:07 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:08:59.532 [2024-05-13 06:01:07.759535] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.532 [2024-05-13 06:01:07.759586] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b9baa00 00:08:59.532 [2024-05-13 06:01:07.759591] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:59.532 [2024-05-13 06:01:07.759606] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ba1dec0 00:08:59.532 [2024-05-13 06:01:07.759639] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b9baa00 00:08:59.532 [2024-05-13 06:01:07.759642] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b9baa00 00:08:59.532 [2024-05-13 06:01:07.759656] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.532 BaseBdev3 00:08:59.532 06:01:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:08:59.532 06:01:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:08:59.532 06:01:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:08:59.532 06:01:07 -- common/autotest_common.sh@889 -- # local i 00:08:59.532 06:01:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:08:59.532 06:01:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:08:59.532 06:01:07 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:59.791 06:01:07 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:00.050 [ 00:09:00.050 { 00:09:00.050 "name": "BaseBdev3", 00:09:00.050 "aliases": [ 00:09:00.050 
"30baf663-10ee-11ef-ba60-3508ead7bdda" 00:09:00.050 ], 00:09:00.050 "product_name": "Malloc disk", 00:09:00.050 "block_size": 512, 00:09:00.050 "num_blocks": 65536, 00:09:00.050 "uuid": "30baf663-10ee-11ef-ba60-3508ead7bdda", 00:09:00.050 "assigned_rate_limits": { 00:09:00.050 "rw_ios_per_sec": 0, 00:09:00.050 "rw_mbytes_per_sec": 0, 00:09:00.050 "r_mbytes_per_sec": 0, 00:09:00.050 "w_mbytes_per_sec": 0 00:09:00.050 }, 00:09:00.050 "claimed": true, 00:09:00.050 "claim_type": "exclusive_write", 00:09:00.050 "zoned": false, 00:09:00.050 "supported_io_types": { 00:09:00.050 "read": true, 00:09:00.050 "write": true, 00:09:00.050 "unmap": true, 00:09:00.050 "write_zeroes": true, 00:09:00.050 "flush": true, 00:09:00.050 "reset": true, 00:09:00.050 "compare": false, 00:09:00.050 "compare_and_write": false, 00:09:00.050 "abort": true, 00:09:00.050 "nvme_admin": false, 00:09:00.050 "nvme_io": false 00:09:00.050 }, 00:09:00.050 "memory_domains": [ 00:09:00.050 { 00:09:00.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.050 "dma_device_type": 2 00:09:00.050 } 00:09:00.050 ], 00:09:00.050 "driver_specific": {} 00:09:00.050 } 00:09:00.050 ] 00:09:00.050 06:01:08 -- common/autotest_common.sh@895 -- # return 0 00:09:00.050 06:01:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:00.050 06:01:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:00.050 06:01:08 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:00.050 06:01:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:00.050 06:01:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:00.050 06:01:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:00.050 06:01:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:00.050 06:01:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:00.050 06:01:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:00.050 06:01:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:00.051 06:01:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:00.051 06:01:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:00.051 06:01:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.051 06:01:08 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:00.051 06:01:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:00.051 "name": "Existed_Raid", 00:09:00.051 "uuid": "2fcfefdd-10ee-11ef-ba60-3508ead7bdda", 00:09:00.051 "strip_size_kb": 0, 00:09:00.051 "state": "online", 00:09:00.051 "raid_level": "raid1", 00:09:00.051 "superblock": true, 00:09:00.051 "num_base_bdevs": 3, 00:09:00.051 "num_base_bdevs_discovered": 3, 00:09:00.051 "num_base_bdevs_operational": 3, 00:09:00.051 "base_bdevs_list": [ 00:09:00.051 { 00:09:00.051 "name": "BaseBdev1", 00:09:00.051 "uuid": "2f88836f-10ee-11ef-ba60-3508ead7bdda", 00:09:00.051 "is_configured": true, 00:09:00.051 "data_offset": 2048, 00:09:00.051 "data_size": 63488 00:09:00.051 }, 00:09:00.051 { 00:09:00.051 "name": "BaseBdev2", 00:09:00.051 "uuid": "302b8136-10ee-11ef-ba60-3508ead7bdda", 00:09:00.051 "is_configured": true, 00:09:00.051 "data_offset": 2048, 00:09:00.051 "data_size": 63488 00:09:00.051 }, 00:09:00.051 { 00:09:00.051 "name": "BaseBdev3", 00:09:00.051 "uuid": "30baf663-10ee-11ef-ba60-3508ead7bdda", 00:09:00.051 "is_configured": true, 00:09:00.051 "data_offset": 2048, 00:09:00.051 "data_size": 63488 00:09:00.051 } 
00:09:00.051 ] 00:09:00.051 }' 00:09:00.051 06:01:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:00.051 06:01:08 -- common/autotest_common.sh@10 -- # set +x 00:09:00.309 06:01:08 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:00.568 [2024-05-13 06:01:08.703621] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@196 -- # return 0 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.568 06:01:08 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:00.827 06:01:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:00.827 "name": "Existed_Raid", 00:09:00.827 "uuid": "2fcfefdd-10ee-11ef-ba60-3508ead7bdda", 00:09:00.827 "strip_size_kb": 0, 00:09:00.827 "state": "online", 00:09:00.827 "raid_level": "raid1", 00:09:00.827 "superblock": true, 00:09:00.827 "num_base_bdevs": 3, 00:09:00.827 "num_base_bdevs_discovered": 2, 00:09:00.827 "num_base_bdevs_operational": 2, 00:09:00.827 "base_bdevs_list": [ 00:09:00.827 { 00:09:00.827 "name": null, 00:09:00.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.827 "is_configured": false, 00:09:00.827 "data_offset": 2048, 00:09:00.827 "data_size": 63488 00:09:00.828 }, 00:09:00.828 { 00:09:00.828 "name": "BaseBdev2", 00:09:00.828 "uuid": "302b8136-10ee-11ef-ba60-3508ead7bdda", 00:09:00.828 "is_configured": true, 00:09:00.828 "data_offset": 2048, 00:09:00.828 "data_size": 63488 00:09:00.828 }, 00:09:00.828 { 00:09:00.828 "name": "BaseBdev3", 00:09:00.828 "uuid": "30baf663-10ee-11ef-ba60-3508ead7bdda", 00:09:00.828 "is_configured": true, 00:09:00.828 "data_offset": 2048, 00:09:00.828 "data_size": 63488 00:09:00.828 } 00:09:00.828 ] 00:09:00.828 }' 00:09:00.828 06:01:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:00.828 06:01:08 -- common/autotest_common.sh@10 -- # set +x 00:09:01.087 06:01:09 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:01.087 06:01:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:01.087 06:01:09 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:01.087 06:01:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:01.087 06:01:09 -- bdev/bdev_raid.sh@274 -- # 
raid_bdev=Existed_Raid 00:09:01.087 06:01:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.087 06:01:09 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:01.345 [2024-05-13 06:01:09.492504] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.345 06:01:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:01.345 06:01:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:01.345 06:01:09 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:01.345 06:01:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:01.603 06:01:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:01.603 06:01:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.603 06:01:09 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:01.603 [2024-05-13 06:01:09.825282] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:01.603 [2024-05-13 06:01:09.825298] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.603 [2024-05-13 06:01:09.825306] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.603 [2024-05-13 06:01:09.829978] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.603 [2024-05-13 06:01:09.829995] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b9baa00 name Existed_Raid, state offline 00:09:01.603 06:01:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:01.603 06:01:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:01.603 06:01:09 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:01.603 06:01:09 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:01.862 06:01:10 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:01.862 06:01:10 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:01.862 06:01:10 -- bdev/bdev_raid.sh@287 -- # killprocess 50818 00:09:01.862 06:01:10 -- common/autotest_common.sh@926 -- # '[' -z 50818 ']' 00:09:01.862 06:01:10 -- common/autotest_common.sh@930 -- # kill -0 50818 00:09:01.862 06:01:10 -- common/autotest_common.sh@931 -- # uname 00:09:01.862 06:01:10 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:01.862 06:01:10 -- common/autotest_common.sh@934 -- # ps -c -o command 50818 00:09:01.862 06:01:10 -- common/autotest_common.sh@934 -- # tail -1 00:09:01.862 06:01:10 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:01.862 06:01:10 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:01.862 killing process with pid 50818 00:09:01.862 06:01:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50818' 00:09:01.862 06:01:10 -- common/autotest_common.sh@945 -- # kill 50818 00:09:01.862 [2024-05-13 06:01:10.026905] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.862 [2024-05-13 06:01:10.026938] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.862 06:01:10 -- common/autotest_common.sh@950 -- # wait 50818 00:09:01.862 06:01:10 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:01.862 00:09:01.862 real 0m7.679s 00:09:01.862 user 0m13.211s 00:09:01.862 
sys 0m1.446s 00:09:01.862 06:01:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.862 06:01:10 -- common/autotest_common.sh@10 -- # set +x 00:09:01.862 ************************************ 00:09:01.862 END TEST raid_state_function_test_sb 00:09:01.862 ************************************ 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:09:02.121 06:01:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:02.121 06:01:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:02.121 06:01:10 -- common/autotest_common.sh@10 -- # set +x 00:09:02.121 ************************************ 00:09:02.121 START TEST raid_superblock_test 00:09:02.121 ************************************ 00:09:02.121 06:01:10 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@357 -- # raid_pid=51054 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@358 -- # waitforlisten 51054 /var/tmp/spdk-raid.sock 00:09:02.121 06:01:10 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:02.121 06:01:10 -- common/autotest_common.sh@819 -- # '[' -z 51054 ']' 00:09:02.121 06:01:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:02.121 06:01:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:02.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:02.121 06:01:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:02.121 06:01:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:02.121 06:01:10 -- common/autotest_common.sh@10 -- # set +x 00:09:02.121 [2024-05-13 06:01:10.237563] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
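Each test case in this suite boots its own bdev_svc app on a private RPC socket and blocks until the socket is listening before any bdev commands are sent. A minimal sketch of that startup, assuming waitforlisten is sourced from test/common/autotest_common.sh as the harness does:

/usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!                                          # 51054 in the run above
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock    # poll until the UNIX-domain socket accepts RPCs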
00:09:02.121 [2024-05-13 06:01:10.237941] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:02.379 EAL: TSC is not safe to use in SMP mode 00:09:02.379 EAL: TSC is not invariant 00:09:02.379 [2024-05-13 06:01:10.653959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.638 [2024-05-13 06:01:10.741102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.638 [2024-05-13 06:01:10.741514] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.638 [2024-05-13 06:01:10.741525] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.897 06:01:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:02.897 06:01:11 -- common/autotest_common.sh@852 -- # return 0 00:09:02.897 06:01:11 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:09:02.897 06:01:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:02.897 06:01:11 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:09:02.897 06:01:11 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:09:02.897 06:01:11 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:02.897 06:01:11 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:02.897 06:01:11 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:02.897 06:01:11 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:02.897 06:01:11 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:03.155 malloc1 00:09:03.155 06:01:11 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:03.155 [2024-05-13 06:01:11.420735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:03.155 [2024-05-13 06:01:11.420780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.155 [2024-05-13 06:01:11.421295] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c706780 00:09:03.155 [2024-05-13 06:01:11.421326] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.156 [2024-05-13 06:01:11.421965] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.156 [2024-05-13 06:01:11.422000] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:03.156 pt1 00:09:03.156 06:01:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:03.156 06:01:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:03.156 06:01:11 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:09:03.156 06:01:11 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:09:03.156 06:01:11 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:03.156 06:01:11 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.156 06:01:11 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.156 06:01:11 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.156 06:01:11 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:03.417 malloc2 00:09:03.417 06:01:11 -- bdev/bdev_raid.sh@371 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:03.676 [2024-05-13 06:01:11.764852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:03.676 [2024-05-13 06:01:11.764895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.676 [2024-05-13 06:01:11.764934] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c706c80 00:09:03.676 [2024-05-13 06:01:11.764941] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.676 [2024-05-13 06:01:11.765335] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.676 [2024-05-13 06:01:11.765364] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:03.676 pt2 00:09:03.676 06:01:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:03.676 06:01:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:03.676 06:01:11 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:09:03.676 06:01:11 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:09:03.677 06:01:11 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:03.677 06:01:11 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:03.677 06:01:11 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:03.677 06:01:11 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:03.677 06:01:11 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:03.677 malloc3 00:09:03.677 06:01:11 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:03.942 [2024-05-13 06:01:12.080949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:03.942 [2024-05-13 06:01:12.080990] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.942 [2024-05-13 06:01:12.081027] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c707180 00:09:03.942 [2024-05-13 06:01:12.081034] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.942 [2024-05-13 06:01:12.081427] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.942 [2024-05-13 06:01:12.081459] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:03.942 pt3 00:09:03.942 06:01:12 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:03.942 06:01:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:03.942 06:01:12 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:09:04.204 [2024-05-13 06:01:12.253010] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:04.204 [2024-05-13 06:01:12.253359] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:04.204 [2024-05-13 06:01:12.253379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:04.204 [2024-05-13 06:01:12.253457] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c707400 00:09:04.204 [2024-05-13 06:01:12.253468] 
bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:04.204 [2024-05-13 06:01:12.253494] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c769e20 00:09:04.204 [2024-05-13 06:01:12.253542] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c707400 00:09:04.204 [2024-05-13 06:01:12.253551] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c707400 00:09:04.204 [2024-05-13 06:01:12.253568] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:04.204 "name": "raid_bdev1", 00:09:04.204 "uuid": "33689f12-10ee-11ef-ba60-3508ead7bdda", 00:09:04.204 "strip_size_kb": 0, 00:09:04.204 "state": "online", 00:09:04.204 "raid_level": "raid1", 00:09:04.204 "superblock": true, 00:09:04.204 "num_base_bdevs": 3, 00:09:04.204 "num_base_bdevs_discovered": 3, 00:09:04.204 "num_base_bdevs_operational": 3, 00:09:04.204 "base_bdevs_list": [ 00:09:04.204 { 00:09:04.204 "name": "pt1", 00:09:04.204 "uuid": "6b4de8f6-1a56-485a-8564-df7df4d9a834", 00:09:04.204 "is_configured": true, 00:09:04.204 "data_offset": 2048, 00:09:04.204 "data_size": 63488 00:09:04.204 }, 00:09:04.204 { 00:09:04.204 "name": "pt2", 00:09:04.204 "uuid": "3bb1b4f9-c6c4-6c53-bcf9-caf249b1ed0f", 00:09:04.204 "is_configured": true, 00:09:04.204 "data_offset": 2048, 00:09:04.204 "data_size": 63488 00:09:04.204 }, 00:09:04.204 { 00:09:04.204 "name": "pt3", 00:09:04.204 "uuid": "4ade819b-6393-f456-b4f2-6422a60b56b8", 00:09:04.204 "is_configured": true, 00:09:04.204 "data_offset": 2048, 00:09:04.204 "data_size": 63488 00:09:04.204 } 00:09:04.204 ] 00:09:04.204 }' 00:09:04.204 06:01:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:04.204 06:01:12 -- common/autotest_common.sh@10 -- # set +x 00:09:04.463 06:01:12 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:04.463 06:01:12 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:09:04.723 [2024-05-13 06:01:12.837204] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.723 06:01:12 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=33689f12-10ee-11ef-ba60-3508ead7bdda 00:09:04.723 06:01:12 -- bdev/bdev_raid.sh@380 -- # '[' -z 33689f12-10ee-11ef-ba60-3508ead7bdda ']' 00:09:04.723 06:01:12 -- bdev/bdev_raid.sh@385 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:04.723 [2024-05-13 06:01:13.009235] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.723 [2024-05-13 06:01:13.009250] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.723 [2024-05-13 06:01:13.009266] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.723 [2024-05-13 06:01:13.009294] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.723 [2024-05-13 06:01:13.009298] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c707400 name raid_bdev1, state offline 00:09:04.723 06:01:13 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:04.723 06:01:13 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:09:04.982 06:01:13 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:09:04.982 06:01:13 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:09:04.982 06:01:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:04.982 06:01:13 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:05.241 06:01:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.241 06:01:13 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:05.241 06:01:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.241 06:01:13 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:05.500 06:01:13 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:05.500 06:01:13 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:05.759 06:01:13 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:09:05.759 06:01:13 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:05.759 06:01:13 -- common/autotest_common.sh@640 -- # local es=0 00:09:05.759 06:01:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:05.759 06:01:13 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.759 06:01:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:05.759 06:01:13 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.759 06:01:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:05.759 06:01:13 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.759 06:01:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:05.759 06:01:13 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.759 06:01:13 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:05.759 06:01:13 -- common/autotest_common.sh@643 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:05.759 [2024-05-13 06:01:14.049580] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:05.759 [2024-05-13 06:01:14.050025] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:05.759 [2024-05-13 06:01:14.050045] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:05.759 [2024-05-13 06:01:14.050055] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:09:05.759 [2024-05-13 06:01:14.050087] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:09:05.759 [2024-05-13 06:01:14.050096] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:09:05.759 [2024-05-13 06:01:14.050119] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.759 [2024-05-13 06:01:14.050126] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c707180 name raid_bdev1, state configuring 00:09:05.759 request: 00:09:05.759 { 00:09:05.759 "name": "raid_bdev1", 00:09:05.759 "raid_level": "raid1", 00:09:05.759 "base_bdevs": [ 00:09:05.759 "malloc1", 00:09:05.759 "malloc2", 00:09:05.759 "malloc3" 00:09:05.759 ], 00:09:05.759 "superblock": false, 00:09:05.759 "method": "bdev_raid_create", 00:09:05.759 "req_id": 1 00:09:05.759 } 00:09:05.759 Got JSON-RPC error response 00:09:05.759 response: 00:09:05.759 { 00:09:05.759 "code": -17, 00:09:05.759 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:05.759 } 00:09:05.759 06:01:14 -- common/autotest_common.sh@643 -- # es=1 00:09:05.759 06:01:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:05.759 06:01:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:05.759 06:01:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:05.759 06:01:14 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:09:05.759 06:01:14 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.017 06:01:14 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:09:06.017 06:01:14 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:09:06.017 06:01:14 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:06.276 [2024-05-13 06:01:14.389683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:06.276 [2024-05-13 06:01:14.389722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.276 [2024-05-13 06:01:14.389762] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c706c80 00:09:06.276 [2024-05-13 06:01:14.389769] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.276 [2024-05-13 06:01:14.390240] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.276 [2024-05-13 06:01:14.390272] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:06.276 [2024-05-13 06:01:14.390288] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:06.276 [2024-05-13 06:01:14.390297] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:06.276 pt1 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:06.276 "name": "raid_bdev1", 00:09:06.276 "uuid": "33689f12-10ee-11ef-ba60-3508ead7bdda", 00:09:06.276 "strip_size_kb": 0, 00:09:06.276 "state": "configuring", 00:09:06.276 "raid_level": "raid1", 00:09:06.276 "superblock": true, 00:09:06.276 "num_base_bdevs": 3, 00:09:06.276 "num_base_bdevs_discovered": 1, 00:09:06.276 "num_base_bdevs_operational": 3, 00:09:06.276 "base_bdevs_list": [ 00:09:06.276 { 00:09:06.276 "name": "pt1", 00:09:06.276 "uuid": "6b4de8f6-1a56-485a-8564-df7df4d9a834", 00:09:06.276 "is_configured": true, 00:09:06.276 "data_offset": 2048, 00:09:06.276 "data_size": 63488 00:09:06.276 }, 00:09:06.276 { 00:09:06.276 "name": null, 00:09:06.276 "uuid": "3bb1b4f9-c6c4-6c53-bcf9-caf249b1ed0f", 00:09:06.276 "is_configured": false, 00:09:06.276 "data_offset": 2048, 00:09:06.276 "data_size": 63488 00:09:06.276 }, 00:09:06.276 { 00:09:06.276 "name": null, 00:09:06.276 "uuid": "4ade819b-6393-f456-b4f2-6422a60b56b8", 00:09:06.276 "is_configured": false, 00:09:06.276 "data_offset": 2048, 00:09:06.276 "data_size": 63488 00:09:06.276 } 00:09:06.276 ] 00:09:06.276 }' 00:09:06.276 06:01:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:06.276 06:01:14 -- common/autotest_common.sh@10 -- # set +x 00:09:06.538 06:01:14 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:09:06.538 06:01:14 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:06.796 [2024-05-13 06:01:15.001868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:06.796 [2024-05-13 06:01:15.001910] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.796 [2024-05-13 06:01:15.001948] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c707680 00:09:06.796 [2024-05-13 06:01:15.001956] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.796 [2024-05-13 06:01:15.002034] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.796 [2024-05-13 06:01:15.002042] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:06.796 [2024-05-13 06:01:15.002056] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt2 00:09:06.796 [2024-05-13 06:01:15.002062] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:06.796 pt2 00:09:06.796 06:01:15 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:07.055 [2024-05-13 06:01:15.173921] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:07.055 "name": "raid_bdev1", 00:09:07.055 "uuid": "33689f12-10ee-11ef-ba60-3508ead7bdda", 00:09:07.055 "strip_size_kb": 0, 00:09:07.055 "state": "configuring", 00:09:07.055 "raid_level": "raid1", 00:09:07.055 "superblock": true, 00:09:07.055 "num_base_bdevs": 3, 00:09:07.055 "num_base_bdevs_discovered": 1, 00:09:07.055 "num_base_bdevs_operational": 3, 00:09:07.055 "base_bdevs_list": [ 00:09:07.055 { 00:09:07.055 "name": "pt1", 00:09:07.055 "uuid": "6b4de8f6-1a56-485a-8564-df7df4d9a834", 00:09:07.055 "is_configured": true, 00:09:07.055 "data_offset": 2048, 00:09:07.055 "data_size": 63488 00:09:07.055 }, 00:09:07.055 { 00:09:07.055 "name": null, 00:09:07.055 "uuid": "3bb1b4f9-c6c4-6c53-bcf9-caf249b1ed0f", 00:09:07.055 "is_configured": false, 00:09:07.055 "data_offset": 2048, 00:09:07.055 "data_size": 63488 00:09:07.055 }, 00:09:07.055 { 00:09:07.055 "name": null, 00:09:07.055 "uuid": "4ade819b-6393-f456-b4f2-6422a60b56b8", 00:09:07.055 "is_configured": false, 00:09:07.055 "data_offset": 2048, 00:09:07.055 "data_size": 63488 00:09:07.055 } 00:09:07.055 ] 00:09:07.055 }' 00:09:07.055 06:01:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:07.055 06:01:15 -- common/autotest_common.sh@10 -- # set +x 00:09:07.314 06:01:15 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:09:07.314 06:01:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:07.314 06:01:15 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:07.573 [2024-05-13 06:01:15.774098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:07.573 [2024-05-13 06:01:15.774137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.573 [2024-05-13 06:01:15.774175] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c707680 00:09:07.573 [2024-05-13 06:01:15.774188] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:09:07.573 [2024-05-13 06:01:15.774256] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.573 [2024-05-13 06:01:15.774263] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:07.573 [2024-05-13 06:01:15.774276] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:07.573 [2024-05-13 06:01:15.774282] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:07.573 pt2 00:09:07.573 06:01:15 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:07.573 06:01:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:07.573 06:01:15 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:07.832 [2024-05-13 06:01:15.950149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:07.832 [2024-05-13 06:01:15.950181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.832 [2024-05-13 06:01:15.950195] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c707400 00:09:07.832 [2024-05-13 06:01:15.950201] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.832 [2024-05-13 06:01:15.950270] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.832 [2024-05-13 06:01:15.950277] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:07.832 [2024-05-13 06:01:15.950289] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:09:07.832 [2024-05-13 06:01:15.950294] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:07.832 [2024-05-13 06:01:15.950311] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c706780 00:09:07.832 [2024-05-13 06:01:15.950315] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:07.832 [2024-05-13 06:01:15.950328] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c769e20 00:09:07.832 [2024-05-13 06:01:15.950365] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c706780 00:09:07.832 [2024-05-13 06:01:15.950369] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c706780 00:09:07.832 [2024-05-13 06:01:15.950383] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.832 pt3 00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@125 -- # local tmp 
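The trace from startup through this point walks the whole superblock lifecycle: three 32 MB malloc bdevs are wrapped in passthru bdevs with fixed UUIDs, assembled into a raid1 array with on-disk superblocks, torn down, and then reassembled purely by re-registering the members, the examine path re-claiming each one from its superblock. A condensed sketch of that sequence ($rpc is an assumed shorthand for the rpc.py invocation used throughout the log, not a variable the script defines):

rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
    $rpc bdev_malloc_create 32 512 -b "malloc$i"                  # 32 MB, 512-byte blocks
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"                # fixed per-member UUID
done
$rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s  # -s writes a superblock to each member
$rpc bdev_raid_delete raid_bdev1                                  # array goes offline, superblocks remain
for i in 1 2 3; do $rpc bdev_passthru_delete "pt$i"; done
# Creating a raid directly on the malloc bdevs now fails with -17 "File exists",
# because each malloc bdev still carries the old superblock:
$rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 && exit 1
for i in 1 2 3; do                                                # re-adding the members is enough;
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"                # no second bdev_raid_create call
done
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # -> online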
00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.832 06:01:15 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:07.832 06:01:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:07.832 "name": "raid_bdev1", 00:09:07.832 "uuid": "33689f12-10ee-11ef-ba60-3508ead7bdda", 00:09:07.833 "strip_size_kb": 0, 00:09:07.833 "state": "online", 00:09:07.833 "raid_level": "raid1", 00:09:07.833 "superblock": true, 00:09:07.833 "num_base_bdevs": 3, 00:09:07.833 "num_base_bdevs_discovered": 3, 00:09:07.833 "num_base_bdevs_operational": 3, 00:09:07.833 "base_bdevs_list": [ 00:09:07.833 { 00:09:07.833 "name": "pt1", 00:09:07.833 "uuid": "6b4de8f6-1a56-485a-8564-df7df4d9a834", 00:09:07.833 "is_configured": true, 00:09:07.833 "data_offset": 2048, 00:09:07.833 "data_size": 63488 00:09:07.833 }, 00:09:07.833 { 00:09:07.833 "name": "pt2", 00:09:07.833 "uuid": "3bb1b4f9-c6c4-6c53-bcf9-caf249b1ed0f", 00:09:07.833 "is_configured": true, 00:09:07.833 "data_offset": 2048, 00:09:07.833 "data_size": 63488 00:09:07.833 }, 00:09:07.833 { 00:09:07.833 "name": "pt3", 00:09:07.833 "uuid": "4ade819b-6393-f456-b4f2-6422a60b56b8", 00:09:07.833 "is_configured": true, 00:09:07.833 "data_offset": 2048, 00:09:07.833 "data_size": 63488 00:09:07.833 } 00:09:07.833 ] 00:09:07.833 }' 00:09:07.833 06:01:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:07.833 06:01:16 -- common/autotest_common.sh@10 -- # set +x 00:09:08.093 06:01:16 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:08.093 06:01:16 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:09:08.354 [2024-05-13 06:01:16.558352] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.354 06:01:16 -- bdev/bdev_raid.sh@430 -- # '[' 33689f12-10ee-11ef-ba60-3508ead7bdda '!=' 33689f12-10ee-11ef-ba60-3508ead7bdda ']' 00:09:08.354 06:01:16 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:09:08.354 06:01:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:08.354 06:01:16 -- bdev/bdev_raid.sh@196 -- # return 0 00:09:08.354 06:01:16 -- bdev/bdev_raid.sh@436 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:08.613 [2024-05-13 06:01:16.730383] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:08.614 06:01:16 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:08.614 06:01:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:08.614 06:01:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:08.614 06:01:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:08.614 06:01:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:08.614 06:01:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:08.614 06:01:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:08.614 06:01:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:08.614 06:01:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:08.614 06:01:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:08.614 06:01:16 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:08.614 06:01:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.614 06:01:16 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:08.614 "name": "raid_bdev1", 00:09:08.614 "uuid": "33689f12-10ee-11ef-ba60-3508ead7bdda", 00:09:08.614 "strip_size_kb": 0, 00:09:08.614 "state": "online", 00:09:08.614 "raid_level": "raid1", 00:09:08.614 "superblock": true, 00:09:08.614 "num_base_bdevs": 3, 00:09:08.614 "num_base_bdevs_discovered": 2, 00:09:08.614 "num_base_bdevs_operational": 2, 00:09:08.614 "base_bdevs_list": [ 00:09:08.614 { 00:09:08.614 "name": null, 00:09:08.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.614 "is_configured": false, 00:09:08.614 "data_offset": 2048, 00:09:08.614 "data_size": 63488 00:09:08.614 }, 00:09:08.614 { 00:09:08.614 "name": "pt2", 00:09:08.614 "uuid": "3bb1b4f9-c6c4-6c53-bcf9-caf249b1ed0f", 00:09:08.614 "is_configured": true, 00:09:08.614 "data_offset": 2048, 00:09:08.614 "data_size": 63488 00:09:08.614 }, 00:09:08.614 { 00:09:08.614 "name": "pt3", 00:09:08.614 "uuid": "4ade819b-6393-f456-b4f2-6422a60b56b8", 00:09:08.614 "is_configured": true, 00:09:08.614 "data_offset": 2048, 00:09:08.614 "data_size": 63488 00:09:08.614 } 00:09:08.614 ] 00:09:08.614 }' 00:09:08.614 06:01:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:08.614 06:01:16 -- common/autotest_common.sh@10 -- # set +x 00:09:08.872 06:01:17 -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:09.130 [2024-05-13 06:01:17.318556] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.130 [2024-05-13 06:01:17.318569] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.130 [2024-05-13 06:01:17.318579] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.130 [2024-05-13 06:01:17.318588] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.130 [2024-05-13 06:01:17.318591] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c706780 name raid_bdev1, state offline 00:09:09.130 06:01:17 -- bdev/bdev_raid.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:09.130 06:01:17 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:09:09.389 06:01:17 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:09:09.389 06:01:17 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:09:09.390 06:01:17 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:09:09.390 06:01:17 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:09:09.390 06:01:17 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:09.390 06:01:17 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:09:09.390 06:01:17 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:09:09.390 06:01:17 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:09.649 06:01:17 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:09:09.649 06:01:17 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:09:09.649 06:01:17 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:09:09.649 06:01:17 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:09:09.649 06:01:17 -- bdev/bdev_raid.sh@455 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:09.907 [2024-05-13 06:01:17.990746] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:09.907 [2024-05-13 06:01:17.990785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.907 [2024-05-13 06:01:17.990824] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c707400 00:09:09.907 [2024-05-13 06:01:17.990831] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.907 [2024-05-13 06:01:17.991327] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.907 [2024-05-13 06:01:17.991359] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:09.907 [2024-05-13 06:01:17.991377] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:09.907 [2024-05-13 06:01:17.991385] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:09.907 pt2 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:09.907 "name": "raid_bdev1", 00:09:09.907 "uuid": "33689f12-10ee-11ef-ba60-3508ead7bdda", 00:09:09.907 "strip_size_kb": 0, 00:09:09.907 "state": "configuring", 00:09:09.907 "raid_level": "raid1", 00:09:09.907 "superblock": true, 00:09:09.907 "num_base_bdevs": 3, 00:09:09.907 "num_base_bdevs_discovered": 1, 00:09:09.907 "num_base_bdevs_operational": 2, 00:09:09.907 "base_bdevs_list": [ 00:09:09.907 { 00:09:09.907 "name": null, 00:09:09.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.907 "is_configured": false, 00:09:09.907 "data_offset": 2048, 00:09:09.907 "data_size": 63488 00:09:09.907 }, 00:09:09.907 { 00:09:09.907 "name": "pt2", 00:09:09.907 "uuid": "3bb1b4f9-c6c4-6c53-bcf9-caf249b1ed0f", 00:09:09.907 "is_configured": true, 00:09:09.907 "data_offset": 2048, 00:09:09.907 "data_size": 63488 00:09:09.907 }, 00:09:09.907 { 00:09:09.907 "name": null, 00:09:09.907 "uuid": "4ade819b-6393-f456-b4f2-6422a60b56b8", 00:09:09.907 "is_configured": false, 00:09:09.907 "data_offset": 2048, 00:09:09.907 "data_size": 63488 00:09:09.907 } 00:09:09.907 ] 00:09:09.907 }' 00:09:09.907 06:01:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:09.907 06:01:18 -- common/autotest_common.sh@10 -- # set +x 00:09:10.185 06:01:18 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:09:10.185 06:01:18 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:09:10.185 06:01:18 -- bdev/bdev_raid.sh@462 -- # i=2 00:09:10.185 06:01:18 -- bdev/bdev_raid.sh@463 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:10.445 [2024-05-13 06:01:18.590936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:10.445 [2024-05-13 06:01:18.590975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.446 [2024-05-13 06:01:18.591013] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c706780 00:09:10.446 [2024-05-13 06:01:18.591020] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.446 [2024-05-13 06:01:18.591093] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.446 [2024-05-13 06:01:18.591101] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:10.446 [2024-05-13 06:01:18.591115] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:09:10.446 [2024-05-13 06:01:18.591120] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:10.446 [2024-05-13 06:01:18.591139] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c707180 00:09:10.446 [2024-05-13 06:01:18.591142] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:10.446 [2024-05-13 06:01:18.591156] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c769e20 00:09:10.446 [2024-05-13 06:01:18.591185] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c707180 00:09:10.446 [2024-05-13 06:01:18.591188] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c707180 00:09:10.446 [2024-05-13 06:01:18.591209] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.446 pt3 00:09:10.446 06:01:18 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:10.446 06:01:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:10.446 06:01:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:10.446 06:01:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:10.446 06:01:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:10.446 06:01:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:10.446 06:01:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:10.446 06:01:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:10.446 06:01:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:10.446 06:01:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:10.446 06:01:18 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.446 06:01:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.707 06:01:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:10.707 "name": "raid_bdev1", 00:09:10.707 "uuid": "33689f12-10ee-11ef-ba60-3508ead7bdda", 00:09:10.707 "strip_size_kb": 0, 00:09:10.707 "state": "online", 00:09:10.707 "raid_level": "raid1", 00:09:10.707 "superblock": true, 00:09:10.707 "num_base_bdevs": 3, 00:09:10.707 "num_base_bdevs_discovered": 2, 00:09:10.707 "num_base_bdevs_operational": 2, 00:09:10.707 "base_bdevs_list": [ 00:09:10.707 { 00:09:10.707 "name": null, 00:09:10.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.707 "is_configured": false, 
00:09:10.707 "data_offset": 2048, 00:09:10.707 "data_size": 63488 00:09:10.707 }, 00:09:10.707 { 00:09:10.707 "name": "pt2", 00:09:10.707 "uuid": "3bb1b4f9-c6c4-6c53-bcf9-caf249b1ed0f", 00:09:10.707 "is_configured": true, 00:09:10.707 "data_offset": 2048, 00:09:10.707 "data_size": 63488 00:09:10.707 }, 00:09:10.707 { 00:09:10.707 "name": "pt3", 00:09:10.707 "uuid": "4ade819b-6393-f456-b4f2-6422a60b56b8", 00:09:10.707 "is_configured": true, 00:09:10.707 "data_offset": 2048, 00:09:10.707 "data_size": 63488 00:09:10.707 } 00:09:10.707 ] 00:09:10.707 }' 00:09:10.707 06:01:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:10.707 06:01:18 -- common/autotest_common.sh@10 -- # set +x 00:09:10.965 06:01:19 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:09:10.965 06:01:19 -- bdev/bdev_raid.sh@470 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:10.965 [2024-05-13 06:01:19.207099] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:10.965 [2024-05-13 06:01:19.207116] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.965 [2024-05-13 06:01:19.207127] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.965 [2024-05-13 06:01:19.207137] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:10.965 [2024-05-13 06:01:19.207140] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c707180 name raid_bdev1, state offline 00:09:10.965 06:01:19 -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.965 06:01:19 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:09:11.223 06:01:19 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:09:11.223 06:01:19 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:09:11.223 06:01:19 -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:11.223 [2024-05-13 06:01:19.531189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:11.223 [2024-05-13 06:01:19.531229] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.223 [2024-05-13 06:01:19.531268] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c707680 00:09:11.223 [2024-05-13 06:01:19.531275] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.223 [2024-05-13 06:01:19.531760] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.223 [2024-05-13 06:01:19.531793] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:11.223 [2024-05-13 06:01:19.531811] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:11.223 [2024-05-13 06:01:19.531819] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:11.484 pt1 00:09:11.484 06:01:19 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:11.484 06:01:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:11.484 06:01:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:11.484 06:01:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:11.484 06:01:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
00:09:11.484 06:01:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:11.484 06:01:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:11.484 06:01:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:11.484 06:01:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:11.484 06:01:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:11.484 06:01:19 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.484 06:01:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.484 06:01:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:11.484 "name": "raid_bdev1", 00:09:11.484 "uuid": "33689f12-10ee-11ef-ba60-3508ead7bdda", 00:09:11.484 "strip_size_kb": 0, 00:09:11.484 "state": "configuring", 00:09:11.484 "raid_level": "raid1", 00:09:11.484 "superblock": true, 00:09:11.484 "num_base_bdevs": 3, 00:09:11.484 "num_base_bdevs_discovered": 1, 00:09:11.484 "num_base_bdevs_operational": 3, 00:09:11.484 "base_bdevs_list": [ 00:09:11.484 { 00:09:11.484 "name": "pt1", 00:09:11.484 "uuid": "6b4de8f6-1a56-485a-8564-df7df4d9a834", 00:09:11.484 "is_configured": true, 00:09:11.484 "data_offset": 2048, 00:09:11.484 "data_size": 63488 00:09:11.484 }, 00:09:11.484 { 00:09:11.484 "name": null, 00:09:11.484 "uuid": "3bb1b4f9-c6c4-6c53-bcf9-caf249b1ed0f", 00:09:11.484 "is_configured": false, 00:09:11.484 "data_offset": 2048, 00:09:11.484 "data_size": 63488 00:09:11.484 }, 00:09:11.484 { 00:09:11.484 "name": null, 00:09:11.484 "uuid": "4ade819b-6393-f456-b4f2-6422a60b56b8", 00:09:11.484 "is_configured": false, 00:09:11.484 "data_offset": 2048, 00:09:11.485 "data_size": 63488 00:09:11.485 } 00:09:11.485 ] 00:09:11.485 }' 00:09:11.485 06:01:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:11.485 06:01:19 -- common/autotest_common.sh@10 -- # set +x 00:09:11.743 06:01:19 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:09:11.743 06:01:19 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:09:11.743 06:01:19 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:12.002 06:01:20 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:09:12.002 06:01:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:09:12.002 06:01:20 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@489 -- # i=2 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@490 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:12.262 [2024-05-13 06:01:20.479447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:12.262 [2024-05-13 06:01:20.479483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.262 [2024-05-13 06:01:20.479505] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c706780 00:09:12.262 [2024-05-13 06:01:20.479529] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.262 [2024-05-13 06:01:20.479601] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.262 [2024-05-13 06:01:20.479608] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:12.262 [2024-05-13 06:01:20.479621] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:09:12.262 [2024-05-13 06:01:20.479627] bdev_raid.c:3239:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:12.262 [2024-05-13 06:01:20.479629] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:12.262 [2024-05-13 06:01:20.479633] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c706c80 name raid_bdev1, state configuring 00:09:12.262 [2024-05-13 06:01:20.479643] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:12.262 pt3 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:12.262 06:01:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.521 06:01:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:12.521 "name": "raid_bdev1", 00:09:12.521 "uuid": "33689f12-10ee-11ef-ba60-3508ead7bdda", 00:09:12.521 "strip_size_kb": 0, 00:09:12.521 "state": "configuring", 00:09:12.521 "raid_level": "raid1", 00:09:12.521 "superblock": true, 00:09:12.521 "num_base_bdevs": 3, 00:09:12.521 "num_base_bdevs_discovered": 1, 00:09:12.521 "num_base_bdevs_operational": 2, 00:09:12.521 "base_bdevs_list": [ 00:09:12.521 { 00:09:12.521 "name": null, 00:09:12.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.522 "is_configured": false, 00:09:12.522 "data_offset": 2048, 00:09:12.522 "data_size": 63488 00:09:12.522 }, 00:09:12.522 { 00:09:12.522 "name": null, 00:09:12.522 "uuid": "3bb1b4f9-c6c4-6c53-bcf9-caf249b1ed0f", 00:09:12.522 "is_configured": false, 00:09:12.522 "data_offset": 2048, 00:09:12.522 "data_size": 63488 00:09:12.522 }, 00:09:12.522 { 00:09:12.522 "name": "pt3", 00:09:12.522 "uuid": "4ade819b-6393-f456-b4f2-6422a60b56b8", 00:09:12.522 "is_configured": true, 00:09:12.522 "data_offset": 2048, 00:09:12.522 "data_size": 63488 00:09:12.522 } 00:09:12.522 ] 00:09:12.522 }' 00:09:12.522 06:01:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:12.522 06:01:20 -- common/autotest_common.sh@10 -- # set +x 00:09:12.779 06:01:20 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:09:12.779 06:01:20 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:09:12.779 06:01:20 -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:12.779 [2024-05-13 06:01:21.063600] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:12.779 [2024-05-13 06:01:21.063641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.779 [2024-05-13 06:01:21.063678] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c707400 00:09:12.779 [2024-05-13 06:01:21.063684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.779 [2024-05-13 06:01:21.063749] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.779 [2024-05-13 06:01:21.063756] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:12.779 [2024-05-13 06:01:21.063781] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:12.779 [2024-05-13 06:01:21.063787] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:12.779 [2024-05-13 06:01:21.063804] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c706c80 00:09:12.779 [2024-05-13 06:01:21.063807] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:12.779 [2024-05-13 06:01:21.063821] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c769e20 00:09:12.779 [2024-05-13 06:01:21.063850] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c706c80 00:09:12.779 [2024-05-13 06:01:21.063853] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c706c80 00:09:12.779 [2024-05-13 06:01:21.063867] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.779 pt2 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.779 06:01:21 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.037 06:01:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:13.037 "name": "raid_bdev1", 00:09:13.037 "uuid": "33689f12-10ee-11ef-ba60-3508ead7bdda", 00:09:13.037 "strip_size_kb": 0, 00:09:13.037 "state": "online", 00:09:13.037 "raid_level": "raid1", 00:09:13.037 "superblock": true, 00:09:13.037 "num_base_bdevs": 3, 00:09:13.037 "num_base_bdevs_discovered": 2, 00:09:13.037 "num_base_bdevs_operational": 2, 00:09:13.037 "base_bdevs_list": [ 00:09:13.037 { 00:09:13.037 "name": null, 00:09:13.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.037 "is_configured": false, 00:09:13.037 "data_offset": 2048, 00:09:13.037 "data_size": 63488 00:09:13.037 
}, 00:09:13.037 { 00:09:13.037 "name": "pt2", 00:09:13.037 "uuid": "3bb1b4f9-c6c4-6c53-bcf9-caf249b1ed0f", 00:09:13.037 "is_configured": true, 00:09:13.037 "data_offset": 2048, 00:09:13.037 "data_size": 63488 00:09:13.037 }, 00:09:13.037 { 00:09:13.037 "name": "pt3", 00:09:13.037 "uuid": "4ade819b-6393-f456-b4f2-6422a60b56b8", 00:09:13.037 "is_configured": true, 00:09:13.037 "data_offset": 2048, 00:09:13.037 "data_size": 63488 00:09:13.037 } 00:09:13.037 ] 00:09:13.037 }' 00:09:13.037 06:01:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:13.037 06:01:21 -- common/autotest_common.sh@10 -- # set +x 00:09:13.296 06:01:21 -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:13.296 06:01:21 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:09:13.555 [2024-05-13 06:01:21.671792] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.555 06:01:21 -- bdev/bdev_raid.sh@506 -- # '[' 33689f12-10ee-11ef-ba60-3508ead7bdda '!=' 33689f12-10ee-11ef-ba60-3508ead7bdda ']' 00:09:13.555 06:01:21 -- bdev/bdev_raid.sh@511 -- # killprocess 51054 00:09:13.555 06:01:21 -- common/autotest_common.sh@926 -- # '[' -z 51054 ']' 00:09:13.555 06:01:21 -- common/autotest_common.sh@930 -- # kill -0 51054 00:09:13.555 06:01:21 -- common/autotest_common.sh@931 -- # uname 00:09:13.555 06:01:21 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:13.555 06:01:21 -- common/autotest_common.sh@934 -- # ps -c -o command 51054 00:09:13.555 06:01:21 -- common/autotest_common.sh@934 -- # tail -1 00:09:13.555 06:01:21 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:13.555 06:01:21 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:13.555 killing process with pid 51054 00:09:13.555 06:01:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51054' 00:09:13.555 06:01:21 -- common/autotest_common.sh@945 -- # kill 51054 00:09:13.555 [2024-05-13 06:01:21.702700] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.555 [2024-05-13 06:01:21.702714] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.555 [2024-05-13 06:01:21.702733] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.555 [2024-05-13 06:01:21.702737] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c706c80 name raid_bdev1, state offline 00:09:13.555 06:01:21 -- common/autotest_common.sh@950 -- # wait 51054 00:09:13.555 [2024-05-13 06:01:21.716643] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.555 06:01:21 -- bdev/bdev_raid.sh@513 -- # return 0 00:09:13.555 00:09:13.555 real 0m11.629s 00:09:13.555 user 0m20.598s 00:09:13.555 sys 0m1.991s 00:09:13.555 06:01:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.555 06:01:21 -- common/autotest_common.sh@10 -- # set +x 00:09:13.555 ************************************ 00:09:13.555 END TEST raid_superblock_test 00:09:13.555 ************************************ 00:09:13.814 06:01:21 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:09:13.814 06:01:21 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:09:13.814 06:01:21 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:13.814 06:01:21 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:13.814 06:01:21 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:09:13.814 06:01:21 -- common/autotest_common.sh@10 -- # set +x 00:09:13.814 ************************************ 00:09:13.814 START TEST raid_state_function_test 00:09:13.814 ************************************ 00:09:13.814 06:01:21 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@226 -- # raid_pid=51436 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51436' 00:09:13.815 Process raid pid: 51436 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@228 -- # waitforlisten 51436 /var/tmp/spdk-raid.sock 00:09:13.815 06:01:21 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:13.815 06:01:21 -- common/autotest_common.sh@819 -- # '[' -z 51436 ']' 00:09:13.815 06:01:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:13.815 06:01:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:13.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:13.815 06:01:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
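A minimal sketch of the RPC flow this raid_state_function_test run drives, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock (the harness above launches it and waits for the socket); each command below appears verbatim elsewhere in this log, only the RPC shorthand variable is introduced here:

  # Shorthand for the rpc.py invocation used throughout this log (hypothetical helper).
  RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Create four 32 MiB malloc base bdevs with a 512-byte block size (65536 blocks each,
  # matching the num_blocks reported in the bdev_get_bdevs dumps below).
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $RPC bdev_malloc_create 32 512 -b "$b"
  done
  # Assemble them into a raid0 bdev with a 64 KiB strip size and no superblock.
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # Query the array state the way verify_raid_bdev_state does.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
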
00:09:13.815 06:01:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:13.815 06:01:21 -- common/autotest_common.sh@10 -- # set +x 00:09:13.815 [2024-05-13 06:01:21.936645] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:13.815 [2024-05-13 06:01:21.936989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:14.074 EAL: TSC is not safe to use in SMP mode 00:09:14.074 EAL: TSC is not invariant 00:09:14.074 [2024-05-13 06:01:22.354817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.333 [2024-05-13 06:01:22.440188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.333 [2024-05-13 06:01:22.440599] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.333 [2024-05-13 06:01:22.440610] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.591 06:01:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:14.591 06:01:22 -- common/autotest_common.sh@852 -- # return 0 00:09:14.591 06:01:22 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:14.851 [2024-05-13 06:01:22.971742] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.851 [2024-05-13 06:01:22.971789] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.851 [2024-05-13 06:01:22.971793] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.851 [2024-05-13 06:01:22.971799] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.851 [2024-05-13 06:01:22.971801] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.851 [2024-05-13 06:01:22.971807] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.851 [2024-05-13 06:01:22.971825] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:14.851 [2024-05-13 06:01:22.971831] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:14.851 06:01:22 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:14.851 06:01:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:14.851 06:01:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:14.851 06:01:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:14.851 06:01:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:14.851 06:01:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:14.851 06:01:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:14.851 06:01:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:14.851 06:01:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:14.851 06:01:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:14.851 06:01:22 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:14.851 06:01:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.851 06:01:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:14.851 "name": "Existed_Raid", 00:09:14.851 
"uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.851 "strip_size_kb": 64, 00:09:14.851 "state": "configuring", 00:09:14.851 "raid_level": "raid0", 00:09:14.851 "superblock": false, 00:09:14.851 "num_base_bdevs": 4, 00:09:14.851 "num_base_bdevs_discovered": 0, 00:09:14.851 "num_base_bdevs_operational": 4, 00:09:14.851 "base_bdevs_list": [ 00:09:14.851 { 00:09:14.851 "name": "BaseBdev1", 00:09:14.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.851 "is_configured": false, 00:09:14.851 "data_offset": 0, 00:09:14.851 "data_size": 0 00:09:14.851 }, 00:09:14.851 { 00:09:14.851 "name": "BaseBdev2", 00:09:14.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.851 "is_configured": false, 00:09:14.851 "data_offset": 0, 00:09:14.851 "data_size": 0 00:09:14.851 }, 00:09:14.851 { 00:09:14.851 "name": "BaseBdev3", 00:09:14.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.851 "is_configured": false, 00:09:14.851 "data_offset": 0, 00:09:14.851 "data_size": 0 00:09:14.851 }, 00:09:14.851 { 00:09:14.851 "name": "BaseBdev4", 00:09:14.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.851 "is_configured": false, 00:09:14.851 "data_offset": 0, 00:09:14.851 "data_size": 0 00:09:14.851 } 00:09:14.851 ] 00:09:14.851 }' 00:09:14.851 06:01:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:14.851 06:01:23 -- common/autotest_common.sh@10 -- # set +x 00:09:15.112 06:01:23 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:15.374 [2024-05-13 06:01:23.567872] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.374 [2024-05-13 06:01:23.567890] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b44f500 name Existed_Raid, state configuring 00:09:15.374 06:01:23 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:15.632 [2024-05-13 06:01:23.739916] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:15.632 [2024-05-13 06:01:23.739951] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:15.632 [2024-05-13 06:01:23.739954] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:15.632 [2024-05-13 06:01:23.739959] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:15.632 [2024-05-13 06:01:23.739961] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:15.632 [2024-05-13 06:01:23.739966] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:15.632 [2024-05-13 06:01:23.739969] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:15.632 [2024-05-13 06:01:23.739974] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:15.632 06:01:23 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:15.632 [2024-05-13 06:01:23.888701] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.632 BaseBdev1 00:09:15.632 06:01:23 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:15.632 06:01:23 -- common/autotest_common.sh@887 -- # local 
bdev_name=BaseBdev1 00:09:15.632 06:01:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:15.632 06:01:23 -- common/autotest_common.sh@889 -- # local i 00:09:15.632 06:01:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:15.632 06:01:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:15.632 06:01:23 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:15.890 06:01:24 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:16.150 [ 00:09:16.150 { 00:09:16.150 "name": "BaseBdev1", 00:09:16.150 "aliases": [ 00:09:16.150 "3a57f991-10ee-11ef-ba60-3508ead7bdda" 00:09:16.150 ], 00:09:16.150 "product_name": "Malloc disk", 00:09:16.150 "block_size": 512, 00:09:16.150 "num_blocks": 65536, 00:09:16.150 "uuid": "3a57f991-10ee-11ef-ba60-3508ead7bdda", 00:09:16.150 "assigned_rate_limits": { 00:09:16.150 "rw_ios_per_sec": 0, 00:09:16.150 "rw_mbytes_per_sec": 0, 00:09:16.150 "r_mbytes_per_sec": 0, 00:09:16.150 "w_mbytes_per_sec": 0 00:09:16.150 }, 00:09:16.150 "claimed": true, 00:09:16.150 "claim_type": "exclusive_write", 00:09:16.150 "zoned": false, 00:09:16.150 "supported_io_types": { 00:09:16.150 "read": true, 00:09:16.150 "write": true, 00:09:16.150 "unmap": true, 00:09:16.150 "write_zeroes": true, 00:09:16.150 "flush": true, 00:09:16.150 "reset": true, 00:09:16.150 "compare": false, 00:09:16.150 "compare_and_write": false, 00:09:16.150 "abort": true, 00:09:16.150 "nvme_admin": false, 00:09:16.150 "nvme_io": false 00:09:16.150 }, 00:09:16.150 "memory_domains": [ 00:09:16.150 { 00:09:16.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.150 "dma_device_type": 2 00:09:16.150 } 00:09:16.150 ], 00:09:16.150 "driver_specific": {} 00:09:16.150 } 00:09:16.150 ] 00:09:16.150 06:01:24 -- common/autotest_common.sh@895 -- # return 0 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:16.150 "name": "Existed_Raid", 00:09:16.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.150 "strip_size_kb": 64, 00:09:16.150 "state": "configuring", 00:09:16.150 "raid_level": "raid0", 00:09:16.150 "superblock": false, 00:09:16.150 "num_base_bdevs": 4, 00:09:16.150 "num_base_bdevs_discovered": 1, 00:09:16.150 "num_base_bdevs_operational": 4, 00:09:16.150 "base_bdevs_list": [ 00:09:16.150 { 00:09:16.150 "name": "BaseBdev1", 
00:09:16.150 "uuid": "3a57f991-10ee-11ef-ba60-3508ead7bdda", 00:09:16.150 "is_configured": true, 00:09:16.150 "data_offset": 0, 00:09:16.150 "data_size": 65536 00:09:16.150 }, 00:09:16.150 { 00:09:16.150 "name": "BaseBdev2", 00:09:16.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.150 "is_configured": false, 00:09:16.150 "data_offset": 0, 00:09:16.150 "data_size": 0 00:09:16.150 }, 00:09:16.150 { 00:09:16.150 "name": "BaseBdev3", 00:09:16.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.150 "is_configured": false, 00:09:16.150 "data_offset": 0, 00:09:16.150 "data_size": 0 00:09:16.150 }, 00:09:16.150 { 00:09:16.150 "name": "BaseBdev4", 00:09:16.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.150 "is_configured": false, 00:09:16.150 "data_offset": 0, 00:09:16.150 "data_size": 0 00:09:16.150 } 00:09:16.150 ] 00:09:16.150 }' 00:09:16.150 06:01:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:16.150 06:01:24 -- common/autotest_common.sh@10 -- # set +x 00:09:16.408 06:01:24 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:16.667 [2024-05-13 06:01:24.808175] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.667 [2024-05-13 06:01:24.808195] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b44f500 name Existed_Raid, state configuring 00:09:16.667 06:01:24 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:09:16.667 06:01:24 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:16.926 [2024-05-13 06:01:24.980223] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.926 [2024-05-13 06:01:24.980844] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.926 [2024-05-13 06:01:24.980888] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.926 [2024-05-13 06:01:24.980891] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:16.926 [2024-05-13 06:01:24.980909] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.926 [2024-05-13 06:01:24.980912] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:16.926 [2024-05-13 06:01:24.980918] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:16.926 06:01:24 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:16.926 06:01:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:16.926 06:01:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:16.926 06:01:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:16.926 06:01:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:16.926 06:01:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:16.926 06:01:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:16.926 06:01:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:16.926 06:01:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:16.926 06:01:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:16.926 06:01:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:16.926 06:01:24 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:09:16.926 06:01:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.926 06:01:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.926 06:01:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:16.926 "name": "Existed_Raid", 00:09:16.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.926 "strip_size_kb": 64, 00:09:16.926 "state": "configuring", 00:09:16.926 "raid_level": "raid0", 00:09:16.926 "superblock": false, 00:09:16.926 "num_base_bdevs": 4, 00:09:16.926 "num_base_bdevs_discovered": 1, 00:09:16.926 "num_base_bdevs_operational": 4, 00:09:16.926 "base_bdevs_list": [ 00:09:16.926 { 00:09:16.926 "name": "BaseBdev1", 00:09:16.926 "uuid": "3a57f991-10ee-11ef-ba60-3508ead7bdda", 00:09:16.926 "is_configured": true, 00:09:16.926 "data_offset": 0, 00:09:16.926 "data_size": 65536 00:09:16.926 }, 00:09:16.926 { 00:09:16.926 "name": "BaseBdev2", 00:09:16.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.926 "is_configured": false, 00:09:16.926 "data_offset": 0, 00:09:16.926 "data_size": 0 00:09:16.926 }, 00:09:16.926 { 00:09:16.926 "name": "BaseBdev3", 00:09:16.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.926 "is_configured": false, 00:09:16.926 "data_offset": 0, 00:09:16.926 "data_size": 0 00:09:16.926 }, 00:09:16.926 { 00:09:16.926 "name": "BaseBdev4", 00:09:16.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.926 "is_configured": false, 00:09:16.926 "data_offset": 0, 00:09:16.926 "data_size": 0 00:09:16.926 } 00:09:16.926 ] 00:09:16.926 }' 00:09:16.926 06:01:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:16.926 06:01:25 -- common/autotest_common.sh@10 -- # set +x 00:09:17.191 06:01:25 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.451 [2024-05-13 06:01:25.592469] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.451 BaseBdev2 00:09:17.451 06:01:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:17.451 06:01:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:09:17.451 06:01:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:17.451 06:01:25 -- common/autotest_common.sh@889 -- # local i 00:09:17.451 06:01:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:17.451 06:01:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:17.451 06:01:25 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:17.451 06:01:25 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.710 [ 00:09:17.710 { 00:09:17.710 "name": "BaseBdev2", 00:09:17.710 "aliases": [ 00:09:17.710 "3b5c0c68-10ee-11ef-ba60-3508ead7bdda" 00:09:17.710 ], 00:09:17.710 "product_name": "Malloc disk", 00:09:17.710 "block_size": 512, 00:09:17.710 "num_blocks": 65536, 00:09:17.710 "uuid": "3b5c0c68-10ee-11ef-ba60-3508ead7bdda", 00:09:17.710 "assigned_rate_limits": { 00:09:17.710 "rw_ios_per_sec": 0, 00:09:17.710 "rw_mbytes_per_sec": 0, 00:09:17.710 "r_mbytes_per_sec": 0, 00:09:17.710 "w_mbytes_per_sec": 0 00:09:17.710 }, 00:09:17.710 "claimed": true, 00:09:17.710 "claim_type": "exclusive_write", 00:09:17.710 "zoned": false, 00:09:17.710 
"supported_io_types": { 00:09:17.710 "read": true, 00:09:17.710 "write": true, 00:09:17.710 "unmap": true, 00:09:17.710 "write_zeroes": true, 00:09:17.710 "flush": true, 00:09:17.710 "reset": true, 00:09:17.710 "compare": false, 00:09:17.710 "compare_and_write": false, 00:09:17.710 "abort": true, 00:09:17.710 "nvme_admin": false, 00:09:17.710 "nvme_io": false 00:09:17.710 }, 00:09:17.710 "memory_domains": [ 00:09:17.710 { 00:09:17.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.710 "dma_device_type": 2 00:09:17.710 } 00:09:17.710 ], 00:09:17.710 "driver_specific": {} 00:09:17.710 } 00:09:17.710 ] 00:09:17.710 06:01:25 -- common/autotest_common.sh@895 -- # return 0 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.710 06:01:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.969 06:01:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:17.969 "name": "Existed_Raid", 00:09:17.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.969 "strip_size_kb": 64, 00:09:17.969 "state": "configuring", 00:09:17.969 "raid_level": "raid0", 00:09:17.969 "superblock": false, 00:09:17.969 "num_base_bdevs": 4, 00:09:17.969 "num_base_bdevs_discovered": 2, 00:09:17.969 "num_base_bdevs_operational": 4, 00:09:17.969 "base_bdevs_list": [ 00:09:17.969 { 00:09:17.969 "name": "BaseBdev1", 00:09:17.969 "uuid": "3a57f991-10ee-11ef-ba60-3508ead7bdda", 00:09:17.969 "is_configured": true, 00:09:17.969 "data_offset": 0, 00:09:17.969 "data_size": 65536 00:09:17.969 }, 00:09:17.969 { 00:09:17.969 "name": "BaseBdev2", 00:09:17.969 "uuid": "3b5c0c68-10ee-11ef-ba60-3508ead7bdda", 00:09:17.969 "is_configured": true, 00:09:17.969 "data_offset": 0, 00:09:17.969 "data_size": 65536 00:09:17.969 }, 00:09:17.969 { 00:09:17.969 "name": "BaseBdev3", 00:09:17.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.969 "is_configured": false, 00:09:17.969 "data_offset": 0, 00:09:17.969 "data_size": 0 00:09:17.969 }, 00:09:17.969 { 00:09:17.969 "name": "BaseBdev4", 00:09:17.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.969 "is_configured": false, 00:09:17.969 "data_offset": 0, 00:09:17.969 "data_size": 0 00:09:17.969 } 00:09:17.969 ] 00:09:17.969 }' 00:09:17.969 06:01:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:17.969 06:01:26 -- common/autotest_common.sh@10 -- # set +x 00:09:18.229 06:01:26 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 
-b BaseBdev3 00:09:18.229 [2024-05-13 06:01:26.496661] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.229 BaseBdev3 00:09:18.229 06:01:26 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:09:18.229 06:01:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:09:18.229 06:01:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:18.229 06:01:26 -- common/autotest_common.sh@889 -- # local i 00:09:18.229 06:01:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:18.229 06:01:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:18.229 06:01:26 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:18.487 06:01:26 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:18.746 [ 00:09:18.746 { 00:09:18.746 "name": "BaseBdev3", 00:09:18.746 "aliases": [ 00:09:18.746 "3be60576-10ee-11ef-ba60-3508ead7bdda" 00:09:18.746 ], 00:09:18.746 "product_name": "Malloc disk", 00:09:18.746 "block_size": 512, 00:09:18.746 "num_blocks": 65536, 00:09:18.746 "uuid": "3be60576-10ee-11ef-ba60-3508ead7bdda", 00:09:18.746 "assigned_rate_limits": { 00:09:18.746 "rw_ios_per_sec": 0, 00:09:18.746 "rw_mbytes_per_sec": 0, 00:09:18.746 "r_mbytes_per_sec": 0, 00:09:18.746 "w_mbytes_per_sec": 0 00:09:18.746 }, 00:09:18.746 "claimed": true, 00:09:18.746 "claim_type": "exclusive_write", 00:09:18.746 "zoned": false, 00:09:18.746 "supported_io_types": { 00:09:18.746 "read": true, 00:09:18.746 "write": true, 00:09:18.746 "unmap": true, 00:09:18.746 "write_zeroes": true, 00:09:18.746 "flush": true, 00:09:18.746 "reset": true, 00:09:18.746 "compare": false, 00:09:18.746 "compare_and_write": false, 00:09:18.746 "abort": true, 00:09:18.746 "nvme_admin": false, 00:09:18.746 "nvme_io": false 00:09:18.746 }, 00:09:18.746 "memory_domains": [ 00:09:18.746 { 00:09:18.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.746 "dma_device_type": 2 00:09:18.746 } 00:09:18.746 ], 00:09:18.746 "driver_specific": {} 00:09:18.746 } 00:09:18.746 ] 00:09:18.746 06:01:26 -- common/autotest_common.sh@895 -- # return 0 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:18.746 06:01:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.746 06:01:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:18.746 
"name": "Existed_Raid", 00:09:18.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.746 "strip_size_kb": 64, 00:09:18.746 "state": "configuring", 00:09:18.746 "raid_level": "raid0", 00:09:18.746 "superblock": false, 00:09:18.746 "num_base_bdevs": 4, 00:09:18.746 "num_base_bdevs_discovered": 3, 00:09:18.746 "num_base_bdevs_operational": 4, 00:09:18.746 "base_bdevs_list": [ 00:09:18.746 { 00:09:18.746 "name": "BaseBdev1", 00:09:18.746 "uuid": "3a57f991-10ee-11ef-ba60-3508ead7bdda", 00:09:18.746 "is_configured": true, 00:09:18.746 "data_offset": 0, 00:09:18.746 "data_size": 65536 00:09:18.746 }, 00:09:18.746 { 00:09:18.746 "name": "BaseBdev2", 00:09:18.746 "uuid": "3b5c0c68-10ee-11ef-ba60-3508ead7bdda", 00:09:18.746 "is_configured": true, 00:09:18.746 "data_offset": 0, 00:09:18.746 "data_size": 65536 00:09:18.746 }, 00:09:18.746 { 00:09:18.746 "name": "BaseBdev3", 00:09:18.746 "uuid": "3be60576-10ee-11ef-ba60-3508ead7bdda", 00:09:18.746 "is_configured": true, 00:09:18.746 "data_offset": 0, 00:09:18.746 "data_size": 65536 00:09:18.746 }, 00:09:18.746 { 00:09:18.746 "name": "BaseBdev4", 00:09:18.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.746 "is_configured": false, 00:09:18.746 "data_offset": 0, 00:09:18.746 "data_size": 0 00:09:18.746 } 00:09:18.746 ] 00:09:18.746 }' 00:09:18.746 06:01:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:18.746 06:01:27 -- common/autotest_common.sh@10 -- # set +x 00:09:19.005 06:01:27 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:09:19.273 [2024-05-13 06:01:27.424867] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:19.273 [2024-05-13 06:01:27.424886] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b44fa00 00:09:19.273 [2024-05-13 06:01:27.424889] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:19.273 [2024-05-13 06:01:27.424911] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b4b2ec0 00:09:19.273 [2024-05-13 06:01:27.424983] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b44fa00 00:09:19.273 [2024-05-13 06:01:27.424986] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b44fa00 00:09:19.273 [2024-05-13 06:01:27.425025] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.273 BaseBdev4 00:09:19.273 06:01:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:09:19.273 06:01:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:09:19.273 06:01:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:19.273 06:01:27 -- common/autotest_common.sh@889 -- # local i 00:09:19.273 06:01:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:19.273 06:01:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:19.273 06:01:27 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:19.570 06:01:27 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:19.570 [ 00:09:19.570 { 00:09:19.570 "name": "BaseBdev4", 00:09:19.570 "aliases": [ 00:09:19.570 "3c73a798-10ee-11ef-ba60-3508ead7bdda" 00:09:19.570 ], 00:09:19.570 "product_name": "Malloc disk", 00:09:19.570 "block_size": 512, 
00:09:19.570 "num_blocks": 65536, 00:09:19.570 "uuid": "3c73a798-10ee-11ef-ba60-3508ead7bdda", 00:09:19.570 "assigned_rate_limits": { 00:09:19.570 "rw_ios_per_sec": 0, 00:09:19.570 "rw_mbytes_per_sec": 0, 00:09:19.570 "r_mbytes_per_sec": 0, 00:09:19.570 "w_mbytes_per_sec": 0 00:09:19.570 }, 00:09:19.570 "claimed": true, 00:09:19.570 "claim_type": "exclusive_write", 00:09:19.570 "zoned": false, 00:09:19.570 "supported_io_types": { 00:09:19.570 "read": true, 00:09:19.570 "write": true, 00:09:19.570 "unmap": true, 00:09:19.570 "write_zeroes": true, 00:09:19.570 "flush": true, 00:09:19.570 "reset": true, 00:09:19.570 "compare": false, 00:09:19.570 "compare_and_write": false, 00:09:19.570 "abort": true, 00:09:19.570 "nvme_admin": false, 00:09:19.570 "nvme_io": false 00:09:19.570 }, 00:09:19.570 "memory_domains": [ 00:09:19.570 { 00:09:19.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.570 "dma_device_type": 2 00:09:19.570 } 00:09:19.570 ], 00:09:19.570 "driver_specific": {} 00:09:19.570 } 00:09:19.570 ] 00:09:19.570 06:01:27 -- common/autotest_common.sh@895 -- # return 0 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.570 06:01:27 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:19.846 06:01:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:19.846 "name": "Existed_Raid", 00:09:19.846 "uuid": "3c73aae8-10ee-11ef-ba60-3508ead7bdda", 00:09:19.846 "strip_size_kb": 64, 00:09:19.846 "state": "online", 00:09:19.846 "raid_level": "raid0", 00:09:19.846 "superblock": false, 00:09:19.846 "num_base_bdevs": 4, 00:09:19.846 "num_base_bdevs_discovered": 4, 00:09:19.846 "num_base_bdevs_operational": 4, 00:09:19.846 "base_bdevs_list": [ 00:09:19.846 { 00:09:19.846 "name": "BaseBdev1", 00:09:19.846 "uuid": "3a57f991-10ee-11ef-ba60-3508ead7bdda", 00:09:19.846 "is_configured": true, 00:09:19.846 "data_offset": 0, 00:09:19.846 "data_size": 65536 00:09:19.846 }, 00:09:19.846 { 00:09:19.846 "name": "BaseBdev2", 00:09:19.846 "uuid": "3b5c0c68-10ee-11ef-ba60-3508ead7bdda", 00:09:19.846 "is_configured": true, 00:09:19.846 "data_offset": 0, 00:09:19.846 "data_size": 65536 00:09:19.846 }, 00:09:19.846 { 00:09:19.846 "name": "BaseBdev3", 00:09:19.846 "uuid": "3be60576-10ee-11ef-ba60-3508ead7bdda", 00:09:19.846 "is_configured": true, 00:09:19.846 "data_offset": 0, 00:09:19.846 "data_size": 65536 00:09:19.846 }, 00:09:19.846 { 00:09:19.847 "name": "BaseBdev4", 00:09:19.847 "uuid": "3c73a798-10ee-11ef-ba60-3508ead7bdda", 00:09:19.847 "is_configured": 
true, 00:09:19.847 "data_offset": 0, 00:09:19.847 "data_size": 65536 00:09:19.847 } 00:09:19.847 ] 00:09:19.847 }' 00:09:19.847 06:01:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:19.847 06:01:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:20.106 [2024-05-13 06:01:28.341025] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:20.106 [2024-05-13 06:01:28.341043] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.106 [2024-05-13 06:01:28.341052] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.106 06:01:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.365 06:01:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:20.365 "name": "Existed_Raid", 00:09:20.365 "uuid": "3c73aae8-10ee-11ef-ba60-3508ead7bdda", 00:09:20.365 "strip_size_kb": 64, 00:09:20.365 "state": "offline", 00:09:20.365 "raid_level": "raid0", 00:09:20.365 "superblock": false, 00:09:20.365 "num_base_bdevs": 4, 00:09:20.365 "num_base_bdevs_discovered": 3, 00:09:20.365 "num_base_bdevs_operational": 3, 00:09:20.365 "base_bdevs_list": [ 00:09:20.365 { 00:09:20.365 "name": null, 00:09:20.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.365 "is_configured": false, 00:09:20.365 "data_offset": 0, 00:09:20.365 "data_size": 65536 00:09:20.365 }, 00:09:20.365 { 00:09:20.365 "name": "BaseBdev2", 00:09:20.365 "uuid": "3b5c0c68-10ee-11ef-ba60-3508ead7bdda", 00:09:20.365 "is_configured": true, 00:09:20.365 "data_offset": 0, 00:09:20.365 "data_size": 65536 00:09:20.365 }, 00:09:20.365 { 00:09:20.365 "name": "BaseBdev3", 00:09:20.365 "uuid": "3be60576-10ee-11ef-ba60-3508ead7bdda", 00:09:20.365 "is_configured": true, 00:09:20.365 "data_offset": 0, 00:09:20.365 "data_size": 65536 00:09:20.365 }, 00:09:20.365 { 00:09:20.365 "name": "BaseBdev4", 00:09:20.365 "uuid": "3c73a798-10ee-11ef-ba60-3508ead7bdda", 00:09:20.365 "is_configured": true, 00:09:20.365 "data_offset": 0, 00:09:20.365 "data_size": 65536 00:09:20.365 } 00:09:20.365 ] 00:09:20.365 }' 00:09:20.365 
06:01:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:20.365 06:01:28 -- common/autotest_common.sh@10 -- # set +x 00:09:20.624 06:01:28 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:20.624 06:01:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:20.624 06:01:28 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.624 06:01:28 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:20.884 06:01:28 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:20.884 06:01:28 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:20.884 06:01:28 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:20.884 [2024-05-13 06:01:29.129856] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:20.884 06:01:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:20.884 06:01:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:20.884 06:01:29 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.884 06:01:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:21.145 06:01:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:21.145 06:01:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.145 06:01:29 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:21.405 [2024-05-13 06:01:29.482580] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:21.405 06:01:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:21.405 06:01:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:21.405 06:01:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:21.405 06:01:29 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.405 06:01:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:21.405 06:01:29 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.405 06:01:29 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:09:21.664 [2024-05-13 06:01:29.803300] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:21.664 [2024-05-13 06:01:29.803319] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b44fa00 name Existed_Raid, state offline 00:09:21.664 06:01:29 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:21.664 06:01:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:21.664 06:01:29 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.664 06:01:29 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:21.923 06:01:29 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:21.923 06:01:29 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:21.923 06:01:29 -- bdev/bdev_raid.sh@287 -- # killprocess 51436 00:09:21.923 06:01:29 -- common/autotest_common.sh@926 -- # '[' -z 51436 ']' 00:09:21.923 06:01:29 -- common/autotest_common.sh@930 -- # kill -0 51436 00:09:21.923 06:01:29 -- common/autotest_common.sh@931 -- # uname 00:09:21.923 06:01:30 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:21.923 06:01:30 -- 
common/autotest_common.sh@934 -- # ps -c -o command 51436 00:09:21.923 06:01:30 -- common/autotest_common.sh@934 -- # tail -1 00:09:21.923 killing process with pid 51436 00:09:21.923 06:01:30 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:21.923 06:01:30 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:21.923 06:01:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51436' 00:09:21.923 06:01:30 -- common/autotest_common.sh@945 -- # kill 51436 00:09:21.923 [2024-05-13 06:01:30.006856] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.923 06:01:30 -- common/autotest_common.sh@950 -- # wait 51436 00:09:21.923 [2024-05-13 06:01:30.006881] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:21.923 00:09:21.923 real 0m8.227s 00:09:21.923 user 0m14.197s 00:09:21.923 sys 0m1.576s 00:09:21.923 06:01:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.923 06:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:21.923 ************************************ 00:09:21.923 END TEST raid_state_function_test 00:09:21.923 ************************************ 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:09:21.923 06:01:30 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:21.923 06:01:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:21.923 06:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:21.923 ************************************ 00:09:21.923 START TEST raid_state_function_test_sb 00:09:21.923 ************************************ 00:09:21.923 06:01:30 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@210 
-- # local superblock_create_arg 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@226 -- # raid_pid=51706 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51706' 00:09:21.923 Process raid pid: 51706 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:21.923 06:01:30 -- bdev/bdev_raid.sh@228 -- # waitforlisten 51706 /var/tmp/spdk-raid.sock 00:09:21.923 06:01:30 -- common/autotest_common.sh@819 -- # '[' -z 51706 ']' 00:09:21.923 06:01:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:21.923 06:01:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:21.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:21.923 06:01:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:21.923 06:01:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:21.923 06:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:21.923 [2024-05-13 06:01:30.224137] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:21.923 [2024-05-13 06:01:30.224480] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:22.491 EAL: TSC is not safe to use in SMP mode 00:09:22.491 EAL: TSC is not invariant 00:09:22.491 [2024-05-13 06:01:30.641240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.491 [2024-05-13 06:01:30.726436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.491 [2024-05-13 06:01:30.726849] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.491 [2024-05-13 06:01:30.726859] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.149 06:01:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:23.149 06:01:31 -- common/autotest_common.sh@852 -- # return 0 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:23.149 [2024-05-13 06:01:31.249984] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.149 [2024-05-13 06:01:31.250026] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.149 [2024-05-13 06:01:31.250030] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.149 [2024-05-13 06:01:31.250036] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.149 [2024-05-13 06:01:31.250039] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.149 [2024-05-13 06:01:31.250044] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.149 [2024-05-13 06:01:31.250046] bdev.c:8014:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:23.149 [2024-05-13 06:01:31.250068] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:23.149 "name": "Existed_Raid", 00:09:23.149 "uuid": "3ebb544e-10ee-11ef-ba60-3508ead7bdda", 00:09:23.149 "strip_size_kb": 64, 00:09:23.149 "state": "configuring", 00:09:23.149 "raid_level": "raid0", 00:09:23.149 "superblock": true, 00:09:23.149 "num_base_bdevs": 4, 00:09:23.149 "num_base_bdevs_discovered": 0, 00:09:23.149 "num_base_bdevs_operational": 4, 00:09:23.149 "base_bdevs_list": [ 00:09:23.149 { 00:09:23.149 "name": "BaseBdev1", 00:09:23.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.149 "is_configured": false, 00:09:23.149 "data_offset": 0, 00:09:23.149 "data_size": 0 00:09:23.149 }, 00:09:23.149 { 00:09:23.149 "name": "BaseBdev2", 00:09:23.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.149 "is_configured": false, 00:09:23.149 "data_offset": 0, 00:09:23.149 "data_size": 0 00:09:23.149 }, 00:09:23.149 { 00:09:23.149 "name": "BaseBdev3", 00:09:23.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.149 "is_configured": false, 00:09:23.149 "data_offset": 0, 00:09:23.149 "data_size": 0 00:09:23.149 }, 00:09:23.149 { 00:09:23.149 "name": "BaseBdev4", 00:09:23.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.149 "is_configured": false, 00:09:23.149 "data_offset": 0, 00:09:23.149 "data_size": 0 00:09:23.149 } 00:09:23.149 ] 00:09:23.149 }' 00:09:23.149 06:01:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:23.149 06:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:23.412 06:01:31 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:23.671 [2024-05-13 06:01:31.854078] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.671 [2024-05-13 06:01:31.854094] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82de17500 name Existed_Raid, state configuring 00:09:23.671 06:01:31 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:23.930 [2024-05-13 06:01:32.026120] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
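The raid_state_function_test_sb variant starting here differs only in passing -s so the array carries an on-disk superblock; a two-line sketch under the same assumptions as the earlier one:

  RPC="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"  # as above
  # Create the array with a superblock (-s), then tear it down again,
  # as the test does between configurations.
  $RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  $RPC bdev_raid_delete Existed_Raid
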
00:09:23.930 [2024-05-13 06:01:32.026150] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.930 [2024-05-13 06:01:32.026153] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.930 [2024-05-13 06:01:32.026159] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.930 [2024-05-13 06:01:32.026161] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.930 [2024-05-13 06:01:32.026166] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.930 [2024-05-13 06:01:32.026168] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:23.930 [2024-05-13 06:01:32.026173] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:23.930 06:01:32 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.930 [2024-05-13 06:01:32.198889] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.930 BaseBdev1 00:09:23.930 06:01:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:23.930 06:01:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:23.930 06:01:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:23.930 06:01:32 -- common/autotest_common.sh@889 -- # local i 00:09:23.930 06:01:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:23.930 06:01:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:23.930 06:01:32 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:24.189 06:01:32 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:24.448 [ 00:09:24.448 { 00:09:24.448 "name": "BaseBdev1", 00:09:24.448 "aliases": [ 00:09:24.448 "3f4c0274-10ee-11ef-ba60-3508ead7bdda" 00:09:24.448 ], 00:09:24.448 "product_name": "Malloc disk", 00:09:24.448 "block_size": 512, 00:09:24.448 "num_blocks": 65536, 00:09:24.448 "uuid": "3f4c0274-10ee-11ef-ba60-3508ead7bdda", 00:09:24.448 "assigned_rate_limits": { 00:09:24.448 "rw_ios_per_sec": 0, 00:09:24.448 "rw_mbytes_per_sec": 0, 00:09:24.448 "r_mbytes_per_sec": 0, 00:09:24.448 "w_mbytes_per_sec": 0 00:09:24.448 }, 00:09:24.448 "claimed": true, 00:09:24.448 "claim_type": "exclusive_write", 00:09:24.448 "zoned": false, 00:09:24.448 "supported_io_types": { 00:09:24.448 "read": true, 00:09:24.448 "write": true, 00:09:24.448 "unmap": true, 00:09:24.448 "write_zeroes": true, 00:09:24.448 "flush": true, 00:09:24.448 "reset": true, 00:09:24.448 "compare": false, 00:09:24.448 "compare_and_write": false, 00:09:24.448 "abort": true, 00:09:24.448 "nvme_admin": false, 00:09:24.448 "nvme_io": false 00:09:24.448 }, 00:09:24.448 "memory_domains": [ 00:09:24.448 { 00:09:24.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.448 "dma_device_type": 2 00:09:24.448 } 00:09:24.448 ], 00:09:24.448 "driver_specific": {} 00:09:24.448 } 00:09:24.448 ] 00:09:24.448 06:01:32 -- common/autotest_common.sh@895 -- # return 0 00:09:24.448 06:01:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:24.448 06:01:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:24.448 
06:01:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:24.448 06:01:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:24.448 06:01:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:24.448 06:01:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:24.448 06:01:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:24.449 06:01:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:24.449 06:01:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:24.449 06:01:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:24.449 06:01:32 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.449 06:01:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.449 06:01:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:24.449 "name": "Existed_Raid", 00:09:24.449 "uuid": "3f31c246-10ee-11ef-ba60-3508ead7bdda", 00:09:24.449 "strip_size_kb": 64, 00:09:24.449 "state": "configuring", 00:09:24.449 "raid_level": "raid0", 00:09:24.449 "superblock": true, 00:09:24.449 "num_base_bdevs": 4, 00:09:24.449 "num_base_bdevs_discovered": 1, 00:09:24.449 "num_base_bdevs_operational": 4, 00:09:24.449 "base_bdevs_list": [ 00:09:24.449 { 00:09:24.449 "name": "BaseBdev1", 00:09:24.449 "uuid": "3f4c0274-10ee-11ef-ba60-3508ead7bdda", 00:09:24.449 "is_configured": true, 00:09:24.449 "data_offset": 2048, 00:09:24.449 "data_size": 63488 00:09:24.449 }, 00:09:24.449 { 00:09:24.449 "name": "BaseBdev2", 00:09:24.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.449 "is_configured": false, 00:09:24.449 "data_offset": 0, 00:09:24.449 "data_size": 0 00:09:24.449 }, 00:09:24.449 { 00:09:24.449 "name": "BaseBdev3", 00:09:24.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.449 "is_configured": false, 00:09:24.449 "data_offset": 0, 00:09:24.449 "data_size": 0 00:09:24.449 }, 00:09:24.449 { 00:09:24.449 "name": "BaseBdev4", 00:09:24.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.449 "is_configured": false, 00:09:24.449 "data_offset": 0, 00:09:24.449 "data_size": 0 00:09:24.449 } 00:09:24.449 ] 00:09:24.449 }' 00:09:24.449 06:01:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:24.449 06:01:32 -- common/autotest_common.sh@10 -- # set +x 00:09:24.708 06:01:32 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:24.966 [2024-05-13 06:01:33.130348] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.966 [2024-05-13 06:01:33.130368] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82de17500 name Existed_Raid, state configuring 00:09:24.966 06:01:33 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:09:24.966 06:01:33 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:25.224 06:01:33 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:25.224 BaseBdev1 00:09:25.224 06:01:33 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:09:25.224 06:01:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:25.224 06:01:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:25.224 06:01:33 -- common/autotest_common.sh@889 -- # local i 00:09:25.224 06:01:33 -- 
common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:25.224 06:01:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:25.224 06:01:33 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:25.483 06:01:33 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:25.483 [ 00:09:25.483 { 00:09:25.483 "name": "BaseBdev1", 00:09:25.483 "aliases": [ 00:09:25.483 "400b2d60-10ee-11ef-ba60-3508ead7bdda" 00:09:25.483 ], 00:09:25.483 "product_name": "Malloc disk", 00:09:25.483 "block_size": 512, 00:09:25.483 "num_blocks": 65536, 00:09:25.483 "uuid": "400b2d60-10ee-11ef-ba60-3508ead7bdda", 00:09:25.483 "assigned_rate_limits": { 00:09:25.483 "rw_ios_per_sec": 0, 00:09:25.483 "rw_mbytes_per_sec": 0, 00:09:25.483 "r_mbytes_per_sec": 0, 00:09:25.483 "w_mbytes_per_sec": 0 00:09:25.483 }, 00:09:25.483 "claimed": false, 00:09:25.483 "zoned": false, 00:09:25.483 "supported_io_types": { 00:09:25.483 "read": true, 00:09:25.483 "write": true, 00:09:25.483 "unmap": true, 00:09:25.483 "write_zeroes": true, 00:09:25.483 "flush": true, 00:09:25.483 "reset": true, 00:09:25.483 "compare": false, 00:09:25.483 "compare_and_write": false, 00:09:25.483 "abort": true, 00:09:25.483 "nvme_admin": false, 00:09:25.483 "nvme_io": false 00:09:25.483 }, 00:09:25.483 "memory_domains": [ 00:09:25.483 { 00:09:25.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.483 "dma_device_type": 2 00:09:25.483 } 00:09:25.483 ], 00:09:25.483 "driver_specific": {} 00:09:25.483 } 00:09:25.483 ] 00:09:25.742 06:01:33 -- common/autotest_common.sh@895 -- # return 0 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:25.742 [2024-05-13 06:01:33.947099] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.742 [2024-05-13 06:01:33.947531] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:25.742 [2024-05-13 06:01:33.947573] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:25.742 [2024-05-13 06:01:33.947582] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:25.742 [2024-05-13 06:01:33.947589] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:25.742 [2024-05-13 06:01:33.947592] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:25.742 [2024-05-13 06:01:33.947597] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:25.742 06:01:33 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.742 06:01:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.001 06:01:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:26.001 "name": "Existed_Raid", 00:09:26.001 "uuid": "4056e066-10ee-11ef-ba60-3508ead7bdda", 00:09:26.001 "strip_size_kb": 64, 00:09:26.001 "state": "configuring", 00:09:26.001 "raid_level": "raid0", 00:09:26.001 "superblock": true, 00:09:26.001 "num_base_bdevs": 4, 00:09:26.001 "num_base_bdevs_discovered": 1, 00:09:26.001 "num_base_bdevs_operational": 4, 00:09:26.001 "base_bdevs_list": [ 00:09:26.001 { 00:09:26.001 "name": "BaseBdev1", 00:09:26.001 "uuid": "400b2d60-10ee-11ef-ba60-3508ead7bdda", 00:09:26.001 "is_configured": true, 00:09:26.001 "data_offset": 2048, 00:09:26.001 "data_size": 63488 00:09:26.001 }, 00:09:26.001 { 00:09:26.001 "name": "BaseBdev2", 00:09:26.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.001 "is_configured": false, 00:09:26.001 "data_offset": 0, 00:09:26.001 "data_size": 0 00:09:26.001 }, 00:09:26.001 { 00:09:26.001 "name": "BaseBdev3", 00:09:26.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.001 "is_configured": false, 00:09:26.001 "data_offset": 0, 00:09:26.001 "data_size": 0 00:09:26.001 }, 00:09:26.001 { 00:09:26.001 "name": "BaseBdev4", 00:09:26.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.001 "is_configured": false, 00:09:26.001 "data_offset": 0, 00:09:26.001 "data_size": 0 00:09:26.001 } 00:09:26.001 ] 00:09:26.001 }' 00:09:26.001 06:01:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:26.001 06:01:34 -- common/autotest_common.sh@10 -- # set +x 00:09:26.260 06:01:34 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:26.260 [2024-05-13 06:01:34.555313] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.260 BaseBdev2 00:09:26.260 06:01:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:26.260 06:01:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:09:26.260 06:01:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:26.260 06:01:34 -- common/autotest_common.sh@889 -- # local i 00:09:26.260 06:01:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:26.260 06:01:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:26.260 06:01:34 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:26.519 06:01:34 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:26.778 [ 00:09:26.778 { 00:09:26.778 "name": "BaseBdev2", 00:09:26.778 "aliases": [ 00:09:26.778 "40b3ab90-10ee-11ef-ba60-3508ead7bdda" 00:09:26.778 ], 00:09:26.778 "product_name": "Malloc disk", 00:09:26.778 "block_size": 512, 00:09:26.778 "num_blocks": 65536, 00:09:26.778 "uuid": "40b3ab90-10ee-11ef-ba60-3508ead7bdda", 00:09:26.778 "assigned_rate_limits": { 00:09:26.778 "rw_ios_per_sec": 0, 00:09:26.778 
"rw_mbytes_per_sec": 0, 00:09:26.778 "r_mbytes_per_sec": 0, 00:09:26.778 "w_mbytes_per_sec": 0 00:09:26.778 }, 00:09:26.778 "claimed": true, 00:09:26.778 "claim_type": "exclusive_write", 00:09:26.778 "zoned": false, 00:09:26.778 "supported_io_types": { 00:09:26.778 "read": true, 00:09:26.778 "write": true, 00:09:26.778 "unmap": true, 00:09:26.778 "write_zeroes": true, 00:09:26.778 "flush": true, 00:09:26.778 "reset": true, 00:09:26.778 "compare": false, 00:09:26.778 "compare_and_write": false, 00:09:26.778 "abort": true, 00:09:26.778 "nvme_admin": false, 00:09:26.778 "nvme_io": false 00:09:26.778 }, 00:09:26.778 "memory_domains": [ 00:09:26.778 { 00:09:26.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.778 "dma_device_type": 2 00:09:26.778 } 00:09:26.778 ], 00:09:26.778 "driver_specific": {} 00:09:26.778 } 00:09:26.778 ] 00:09:26.779 06:01:34 -- common/autotest_common.sh@895 -- # return 0 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.779 06:01:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.779 06:01:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:26.779 "name": "Existed_Raid", 00:09:26.779 "uuid": "4056e066-10ee-11ef-ba60-3508ead7bdda", 00:09:26.779 "strip_size_kb": 64, 00:09:26.779 "state": "configuring", 00:09:26.779 "raid_level": "raid0", 00:09:26.779 "superblock": true, 00:09:26.779 "num_base_bdevs": 4, 00:09:26.779 "num_base_bdevs_discovered": 2, 00:09:26.779 "num_base_bdevs_operational": 4, 00:09:26.779 "base_bdevs_list": [ 00:09:26.779 { 00:09:26.779 "name": "BaseBdev1", 00:09:26.779 "uuid": "400b2d60-10ee-11ef-ba60-3508ead7bdda", 00:09:26.779 "is_configured": true, 00:09:26.779 "data_offset": 2048, 00:09:26.779 "data_size": 63488 00:09:26.779 }, 00:09:26.779 { 00:09:26.779 "name": "BaseBdev2", 00:09:26.779 "uuid": "40b3ab90-10ee-11ef-ba60-3508ead7bdda", 00:09:26.779 "is_configured": true, 00:09:26.779 "data_offset": 2048, 00:09:26.779 "data_size": 63488 00:09:26.779 }, 00:09:26.779 { 00:09:26.779 "name": "BaseBdev3", 00:09:26.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.779 "is_configured": false, 00:09:26.779 "data_offset": 0, 00:09:26.779 "data_size": 0 00:09:26.779 }, 00:09:26.779 { 00:09:26.779 "name": "BaseBdev4", 00:09:26.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.779 "is_configured": false, 00:09:26.779 "data_offset": 0, 00:09:26.779 "data_size": 0 00:09:26.779 } 00:09:26.779 ] 00:09:26.779 }' 00:09:26.779 06:01:35 -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:09:26.779 06:01:35 -- common/autotest_common.sh@10 -- # set +x 00:09:27.039 06:01:35 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:27.300 [2024-05-13 06:01:35.503475] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.300 BaseBdev3 00:09:27.300 06:01:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:09:27.300 06:01:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:09:27.300 06:01:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:27.300 06:01:35 -- common/autotest_common.sh@889 -- # local i 00:09:27.300 06:01:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:27.300 06:01:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:27.300 06:01:35 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:27.559 06:01:35 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:27.559 [ 00:09:27.559 { 00:09:27.559 "name": "BaseBdev3", 00:09:27.559 "aliases": [ 00:09:27.559 "414459f0-10ee-11ef-ba60-3508ead7bdda" 00:09:27.559 ], 00:09:27.559 "product_name": "Malloc disk", 00:09:27.559 "block_size": 512, 00:09:27.559 "num_blocks": 65536, 00:09:27.559 "uuid": "414459f0-10ee-11ef-ba60-3508ead7bdda", 00:09:27.559 "assigned_rate_limits": { 00:09:27.559 "rw_ios_per_sec": 0, 00:09:27.559 "rw_mbytes_per_sec": 0, 00:09:27.559 "r_mbytes_per_sec": 0, 00:09:27.559 "w_mbytes_per_sec": 0 00:09:27.559 }, 00:09:27.559 "claimed": true, 00:09:27.559 "claim_type": "exclusive_write", 00:09:27.559 "zoned": false, 00:09:27.559 "supported_io_types": { 00:09:27.559 "read": true, 00:09:27.559 "write": true, 00:09:27.559 "unmap": true, 00:09:27.559 "write_zeroes": true, 00:09:27.559 "flush": true, 00:09:27.559 "reset": true, 00:09:27.559 "compare": false, 00:09:27.559 "compare_and_write": false, 00:09:27.559 "abort": true, 00:09:27.559 "nvme_admin": false, 00:09:27.559 "nvme_io": false 00:09:27.559 }, 00:09:27.559 "memory_domains": [ 00:09:27.559 { 00:09:27.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.559 "dma_device_type": 2 00:09:27.559 } 00:09:27.559 ], 00:09:27.559 "driver_specific": {} 00:09:27.559 } 00:09:27.559 ] 00:09:27.559 06:01:35 -- common/autotest_common.sh@895 -- # return 0 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.559 06:01:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.817 06:01:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:27.817 "name": "Existed_Raid", 00:09:27.817 "uuid": "4056e066-10ee-11ef-ba60-3508ead7bdda", 00:09:27.817 "strip_size_kb": 64, 00:09:27.817 "state": "configuring", 00:09:27.817 "raid_level": "raid0", 00:09:27.817 "superblock": true, 00:09:27.817 "num_base_bdevs": 4, 00:09:27.817 "num_base_bdevs_discovered": 3, 00:09:27.817 "num_base_bdevs_operational": 4, 00:09:27.817 "base_bdevs_list": [ 00:09:27.817 { 00:09:27.817 "name": "BaseBdev1", 00:09:27.817 "uuid": "400b2d60-10ee-11ef-ba60-3508ead7bdda", 00:09:27.817 "is_configured": true, 00:09:27.817 "data_offset": 2048, 00:09:27.817 "data_size": 63488 00:09:27.817 }, 00:09:27.817 { 00:09:27.817 "name": "BaseBdev2", 00:09:27.817 "uuid": "40b3ab90-10ee-11ef-ba60-3508ead7bdda", 00:09:27.817 "is_configured": true, 00:09:27.817 "data_offset": 2048, 00:09:27.817 "data_size": 63488 00:09:27.817 }, 00:09:27.817 { 00:09:27.817 "name": "BaseBdev3", 00:09:27.817 "uuid": "414459f0-10ee-11ef-ba60-3508ead7bdda", 00:09:27.817 "is_configured": true, 00:09:27.817 "data_offset": 2048, 00:09:27.817 "data_size": 63488 00:09:27.817 }, 00:09:27.817 { 00:09:27.817 "name": "BaseBdev4", 00:09:27.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.817 "is_configured": false, 00:09:27.817 "data_offset": 0, 00:09:27.817 "data_size": 0 00:09:27.817 } 00:09:27.817 ] 00:09:27.817 }' 00:09:27.817 06:01:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:27.817 06:01:36 -- common/autotest_common.sh@10 -- # set +x 00:09:28.076 06:01:36 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:09:28.338 [2024-05-13 06:01:36.431659] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:28.338 [2024-05-13 06:01:36.431728] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82de17a00 00:09:28.338 [2024-05-13 06:01:36.431733] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:28.338 [2024-05-13 06:01:36.431748] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82de7aec0 00:09:28.338 [2024-05-13 06:01:36.431784] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82de17a00 00:09:28.338 [2024-05-13 06:01:36.431787] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82de17a00 00:09:28.338 [2024-05-13 06:01:36.431802] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.338 BaseBdev4 00:09:28.338 06:01:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:09:28.338 06:01:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:09:28.338 06:01:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:28.338 06:01:36 -- common/autotest_common.sh@889 -- # local i 00:09:28.338 06:01:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:28.338 06:01:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:28.338 06:01:36 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:28.338 06:01:36 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 
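--- annotation: at this point the test has walked Existed_Raid from zero to four discovered base bdevs: the raid is created first in the configuring state, and each bdev_malloc_create that matches a configured slot is claimed on examine until the array flips to online ("raid bdev is created with name Existed_Raid" above). A condensed sketch of that assembly under the same socket and names (the loop is illustrative; the test itself creates each malloc bdev at a separate step and verifies state in between):

  # assemble_raid0.sh - sketch of the incremental raid0 assembly above
  rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $rpc bdev_malloc_create 32 512 -b "$b"   # 32 MB backing store, 512-byte blocks
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
  done                                       # prints configuring three times, then online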
00:09:28.598 [ 00:09:28.598 { 00:09:28.598 "name": "BaseBdev4", 00:09:28.598 "aliases": [ 00:09:28.598 "41d1fb6b-10ee-11ef-ba60-3508ead7bdda" 00:09:28.598 ], 00:09:28.598 "product_name": "Malloc disk", 00:09:28.598 "block_size": 512, 00:09:28.598 "num_blocks": 65536, 00:09:28.598 "uuid": "41d1fb6b-10ee-11ef-ba60-3508ead7bdda", 00:09:28.598 "assigned_rate_limits": { 00:09:28.598 "rw_ios_per_sec": 0, 00:09:28.598 "rw_mbytes_per_sec": 0, 00:09:28.598 "r_mbytes_per_sec": 0, 00:09:28.598 "w_mbytes_per_sec": 0 00:09:28.598 }, 00:09:28.598 "claimed": true, 00:09:28.598 "claim_type": "exclusive_write", 00:09:28.598 "zoned": false, 00:09:28.598 "supported_io_types": { 00:09:28.598 "read": true, 00:09:28.598 "write": true, 00:09:28.598 "unmap": true, 00:09:28.598 "write_zeroes": true, 00:09:28.598 "flush": true, 00:09:28.598 "reset": true, 00:09:28.598 "compare": false, 00:09:28.598 "compare_and_write": false, 00:09:28.598 "abort": true, 00:09:28.598 "nvme_admin": false, 00:09:28.598 "nvme_io": false 00:09:28.598 }, 00:09:28.598 "memory_domains": [ 00:09:28.598 { 00:09:28.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.598 "dma_device_type": 2 00:09:28.598 } 00:09:28.598 ], 00:09:28.598 "driver_specific": {} 00:09:28.598 } 00:09:28.598 ] 00:09:28.598 06:01:36 -- common/autotest_common.sh@895 -- # return 0 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.598 06:01:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.857 06:01:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:28.857 "name": "Existed_Raid", 00:09:28.857 "uuid": "4056e066-10ee-11ef-ba60-3508ead7bdda", 00:09:28.857 "strip_size_kb": 64, 00:09:28.857 "state": "online", 00:09:28.857 "raid_level": "raid0", 00:09:28.857 "superblock": true, 00:09:28.857 "num_base_bdevs": 4, 00:09:28.857 "num_base_bdevs_discovered": 4, 00:09:28.857 "num_base_bdevs_operational": 4, 00:09:28.857 "base_bdevs_list": [ 00:09:28.857 { 00:09:28.857 "name": "BaseBdev1", 00:09:28.857 "uuid": "400b2d60-10ee-11ef-ba60-3508ead7bdda", 00:09:28.857 "is_configured": true, 00:09:28.857 "data_offset": 2048, 00:09:28.857 "data_size": 63488 00:09:28.857 }, 00:09:28.857 { 00:09:28.857 "name": "BaseBdev2", 00:09:28.857 "uuid": "40b3ab90-10ee-11ef-ba60-3508ead7bdda", 00:09:28.857 "is_configured": true, 00:09:28.857 "data_offset": 2048, 00:09:28.857 "data_size": 63488 00:09:28.857 }, 00:09:28.857 { 00:09:28.857 "name": "BaseBdev3", 00:09:28.857 "uuid": "414459f0-10ee-11ef-ba60-3508ead7bdda", 00:09:28.857 
"is_configured": true, 00:09:28.857 "data_offset": 2048, 00:09:28.857 "data_size": 63488 00:09:28.857 }, 00:09:28.857 { 00:09:28.857 "name": "BaseBdev4", 00:09:28.857 "uuid": "41d1fb6b-10ee-11ef-ba60-3508ead7bdda", 00:09:28.857 "is_configured": true, 00:09:28.857 "data_offset": 2048, 00:09:28.857 "data_size": 63488 00:09:28.857 } 00:09:28.857 ] 00:09:28.857 }' 00:09:28.857 06:01:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:28.857 06:01:36 -- common/autotest_common.sh@10 -- # set +x 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:29.117 [2024-05-13 06:01:37.343806] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.117 [2024-05-13 06:01:37.343825] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.117 [2024-05-13 06:01:37.343840] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:29.117 06:01:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.376 06:01:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:29.376 "name": "Existed_Raid", 00:09:29.376 "uuid": "4056e066-10ee-11ef-ba60-3508ead7bdda", 00:09:29.376 "strip_size_kb": 64, 00:09:29.376 "state": "offline", 00:09:29.376 "raid_level": "raid0", 00:09:29.376 "superblock": true, 00:09:29.376 "num_base_bdevs": 4, 00:09:29.376 "num_base_bdevs_discovered": 3, 00:09:29.376 "num_base_bdevs_operational": 3, 00:09:29.376 "base_bdevs_list": [ 00:09:29.376 { 00:09:29.376 "name": null, 00:09:29.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.376 "is_configured": false, 00:09:29.376 "data_offset": 2048, 00:09:29.376 "data_size": 63488 00:09:29.376 }, 00:09:29.376 { 00:09:29.376 "name": "BaseBdev2", 00:09:29.376 "uuid": "40b3ab90-10ee-11ef-ba60-3508ead7bdda", 00:09:29.376 "is_configured": true, 00:09:29.376 "data_offset": 2048, 00:09:29.376 "data_size": 63488 00:09:29.376 }, 00:09:29.376 { 00:09:29.376 "name": "BaseBdev3", 00:09:29.376 "uuid": "414459f0-10ee-11ef-ba60-3508ead7bdda", 00:09:29.376 "is_configured": true, 00:09:29.376 "data_offset": 2048, 00:09:29.376 "data_size": 63488 00:09:29.376 }, 00:09:29.376 
{ 00:09:29.376 "name": "BaseBdev4", 00:09:29.376 "uuid": "41d1fb6b-10ee-11ef-ba60-3508ead7bdda", 00:09:29.376 "is_configured": true, 00:09:29.376 "data_offset": 2048, 00:09:29.376 "data_size": 63488 00:09:29.376 } 00:09:29.376 ] 00:09:29.376 }' 00:09:29.376 06:01:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:29.376 06:01:37 -- common/autotest_common.sh@10 -- # set +x 00:09:29.635 06:01:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:29.635 06:01:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:29.635 06:01:37 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:29.635 06:01:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:29.635 06:01:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:29.636 06:01:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.636 06:01:37 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:29.895 [2024-05-13 06:01:38.076622] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:29.895 06:01:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:29.895 06:01:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:29.895 06:01:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:29.895 06:01:38 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.155 06:01:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:30.155 06:01:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.155 06:01:38 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:30.155 [2024-05-13 06:01:38.421325] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:30.155 06:01:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:30.155 06:01:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:30.155 06:01:38 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.155 06:01:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:30.414 06:01:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:30.414 06:01:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.414 06:01:38 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:09:30.414 [2024-05-13 06:01:38.722028] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:30.414 [2024-05-13 06:01:38.722050] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82de17a00 name Existed_Raid, state offline 00:09:30.673 06:01:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:30.673 06:01:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:30.673 06:01:38 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.673 06:01:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:30.673 06:01:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:30.673 06:01:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:30.673 06:01:38 -- bdev/bdev_raid.sh@287 -- # killprocess 51706 00:09:30.673 06:01:38 -- common/autotest_common.sh@926 -- # '[' -z 51706 
']' 00:09:30.673 06:01:38 -- common/autotest_common.sh@930 -- # kill -0 51706 00:09:30.673 06:01:38 -- common/autotest_common.sh@931 -- # uname 00:09:30.673 06:01:38 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:30.673 06:01:38 -- common/autotest_common.sh@934 -- # ps -c -o command 51706 00:09:30.673 06:01:38 -- common/autotest_common.sh@934 -- # tail -1 00:09:30.673 06:01:38 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:30.673 06:01:38 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:30.673 killing process with pid 51706 00:09:30.673 06:01:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51706' 00:09:30.673 06:01:38 -- common/autotest_common.sh@945 -- # kill 51706 00:09:30.673 [2024-05-13 06:01:38.903165] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.674 [2024-05-13 06:01:38.903197] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.674 06:01:38 -- common/autotest_common.sh@950 -- # wait 51706 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:30.933 00:09:30.933 real 0m8.836s 00:09:30.933 user 0m15.417s 00:09:30.933 sys 0m1.577s 00:09:30.933 06:01:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.933 06:01:39 -- common/autotest_common.sh@10 -- # set +x 00:09:30.933 ************************************ 00:09:30.933 END TEST raid_state_function_test_sb 00:09:30.933 ************************************ 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:30.933 06:01:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:30.933 06:01:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:30.933 06:01:39 -- common/autotest_common.sh@10 -- # set +x 00:09:30.933 ************************************ 00:09:30.933 START TEST raid_superblock_test 00:09:30.933 ************************************ 00:09:30.933 06:01:39 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@357 -- # raid_pid=51979 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@358 -- # waitforlisten 51979 /var/tmp/spdk-raid.sock 00:09:30.933 06:01:39 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L 
bdev_raid 00:09:30.933 06:01:39 -- common/autotest_common.sh@819 -- # '[' -z 51979 ']' 00:09:30.933 06:01:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:30.933 06:01:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:30.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:30.933 06:01:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:30.933 06:01:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:30.933 06:01:39 -- common/autotest_common.sh@10 -- # set +x 00:09:30.933 [2024-05-13 06:01:39.112488] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:30.933 [2024-05-13 06:01:39.112774] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:31.502 EAL: TSC is not safe to use in SMP mode 00:09:31.502 EAL: TSC is not invariant 00:09:31.502 [2024-05-13 06:01:39.530160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.502 [2024-05-13 06:01:39.614679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.502 [2024-05-13 06:01:39.615094] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.502 [2024-05-13 06:01:39.615105] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.761 06:01:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:31.761 06:01:39 -- common/autotest_common.sh@852 -- # return 0 00:09:31.761 06:01:39 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:09:31.761 06:01:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:31.761 06:01:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:09:31.761 06:01:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:09:31.761 06:01:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:31.761 06:01:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:31.761 06:01:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:31.761 06:01:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:31.761 06:01:39 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:32.020 malloc1 00:09:32.020 06:01:40 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:32.020 [2024-05-13 06:01:40.322211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:32.020 [2024-05-13 06:01:40.322261] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.020 [2024-05-13 06:01:40.322767] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c886780 00:09:32.020 [2024-05-13 06:01:40.322790] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.020 [2024-05-13 06:01:40.323434] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.020 [2024-05-13 06:01:40.323469] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:32.020 pt1 00:09:32.279 06:01:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:32.279 06:01:40 -- bdev/bdev_raid.sh@361 -- # 
(( i <= num_base_bdevs )) 00:09:32.279 06:01:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:09:32.279 06:01:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:09:32.279 06:01:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:32.279 06:01:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:32.279 06:01:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:32.279 06:01:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:32.279 06:01:40 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:32.279 malloc2 00:09:32.279 06:01:40 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:32.539 [2024-05-13 06:01:40.642268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:32.539 [2024-05-13 06:01:40.642310] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.539 [2024-05-13 06:01:40.642347] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c886c80 00:09:32.539 [2024-05-13 06:01:40.642354] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.539 [2024-05-13 06:01:40.642740] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.539 [2024-05-13 06:01:40.642772] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:32.539 pt2 00:09:32.539 06:01:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:32.539 06:01:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:32.539 06:01:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:09:32.539 06:01:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:09:32.539 06:01:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:32.539 06:01:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:32.539 06:01:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:32.539 06:01:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:32.539 06:01:40 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:32.539 malloc3 00:09:32.539 06:01:40 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:32.798 [2024-05-13 06:01:40.982323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:32.798 [2024-05-13 06:01:40.982362] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.798 [2024-05-13 06:01:40.982400] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c887180 00:09:32.798 [2024-05-13 06:01:40.982406] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.798 [2024-05-13 06:01:40.982810] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.798 [2024-05-13 06:01:40.982841] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:32.798 pt3 00:09:32.798 06:01:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:32.798 06:01:40 -- bdev/bdev_raid.sh@361 -- # 
(( i <= num_base_bdevs )) 00:09:32.798 06:01:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:09:32.798 06:01:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:09:32.798 06:01:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:32.798 06:01:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:32.798 06:01:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:32.798 06:01:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:32.798 06:01:40 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:09:33.058 malloc4 00:09:33.058 06:01:41 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:33.058 [2024-05-13 06:01:41.326387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:33.058 [2024-05-13 06:01:41.326449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.058 [2024-05-13 06:01:41.326472] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c887680 00:09:33.058 [2024-05-13 06:01:41.326478] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.058 [2024-05-13 06:01:41.326861] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.058 [2024-05-13 06:01:41.326892] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:33.058 pt4 00:09:33.058 06:01:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:33.058 06:01:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:33.058 06:01:41 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:09:33.317 [2024-05-13 06:01:41.474418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:33.317 [2024-05-13 06:01:41.474737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.317 [2024-05-13 06:01:41.474760] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:33.317 [2024-05-13 06:01:41.474769] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:33.317 [2024-05-13 06:01:41.474814] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c887900 00:09:33.317 [2024-05-13 06:01:41.474820] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:33.317 [2024-05-13 06:01:41.474843] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c8e9e20 00:09:33.317 [2024-05-13 06:01:41.474890] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c887900 00:09:33.317 [2024-05-13 06:01:41.474895] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c887900 00:09:33.317 [2024-05-13 06:01:41.474912] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.317 06:01:41 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:33.317 06:01:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:33.317 06:01:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:33.317 06:01:41 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:33.318 06:01:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:33.318 06:01:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:33.318 06:01:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:33.318 06:01:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:33.318 06:01:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:33.318 06:01:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:33.318 06:01:41 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:33.318 06:01:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.577 06:01:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:33.577 "name": "raid_bdev1", 00:09:33.577 "uuid": "44d37450-10ee-11ef-ba60-3508ead7bdda", 00:09:33.577 "strip_size_kb": 64, 00:09:33.577 "state": "online", 00:09:33.577 "raid_level": "raid0", 00:09:33.577 "superblock": true, 00:09:33.577 "num_base_bdevs": 4, 00:09:33.577 "num_base_bdevs_discovered": 4, 00:09:33.577 "num_base_bdevs_operational": 4, 00:09:33.577 "base_bdevs_list": [ 00:09:33.577 { 00:09:33.577 "name": "pt1", 00:09:33.577 "uuid": "02048b26-498f-f051-9edb-3ad6ae97baee", 00:09:33.577 "is_configured": true, 00:09:33.577 "data_offset": 2048, 00:09:33.577 "data_size": 63488 00:09:33.577 }, 00:09:33.577 { 00:09:33.577 "name": "pt2", 00:09:33.577 "uuid": "d77fb9fd-6acf-3451-94a3-cbd5d8d502fa", 00:09:33.577 "is_configured": true, 00:09:33.577 "data_offset": 2048, 00:09:33.577 "data_size": 63488 00:09:33.577 }, 00:09:33.577 { 00:09:33.577 "name": "pt3", 00:09:33.577 "uuid": "b9faa9d4-2a0f-8157-b543-5d022cd97f1b", 00:09:33.577 "is_configured": true, 00:09:33.577 "data_offset": 2048, 00:09:33.577 "data_size": 63488 00:09:33.577 }, 00:09:33.577 { 00:09:33.577 "name": "pt4", 00:09:33.577 "uuid": "6a7d163a-abe6-3152-af7a-97cb1724cd10", 00:09:33.577 "is_configured": true, 00:09:33.577 "data_offset": 2048, 00:09:33.577 "data_size": 63488 00:09:33.577 } 00:09:33.577 ] 00:09:33.577 }' 00:09:33.577 06:01:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:33.577 06:01:41 -- common/autotest_common.sh@10 -- # set +x 00:09:33.836 06:01:41 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:33.836 06:01:41 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:09:33.836 [2024-05-13 06:01:42.082532] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.836 06:01:42 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=44d37450-10ee-11ef-ba60-3508ead7bdda 00:09:33.836 06:01:42 -- bdev/bdev_raid.sh@380 -- # '[' -z 44d37450-10ee-11ef-ba60-3508ead7bdda ']' 00:09:33.836 06:01:42 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:34.096 [2024-05-13 06:01:42.258536] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.096 [2024-05-13 06:01:42.258552] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.096 [2024-05-13 06:01:42.258563] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.096 [2024-05-13 06:01:42.258590] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.096 [2024-05-13 06:01:42.258594] bdev_raid.c: 
352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c887900 name raid_bdev1, state offline 00:09:34.096 06:01:42 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.096 06:01:42 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:09:34.356 06:01:42 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:09:34.356 06:01:42 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:09:34.356 06:01:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.356 06:01:42 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:34.356 06:01:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.356 06:01:42 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:34.615 06:01:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.615 06:01:42 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:34.615 06:01:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.615 06:01:42 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:09:34.874 06:01:43 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:34.874 06:01:43 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:35.134 06:01:43 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:09:35.134 06:01:43 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:35.134 06:01:43 -- common/autotest_common.sh@640 -- # local es=0 00:09:35.134 06:01:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:35.134 06:01:43 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.134 06:01:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:35.134 06:01:43 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.134 06:01:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:35.134 06:01:43 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.134 06:01:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:35.134 06:01:43 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.134 06:01:43 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:35.134 06:01:43 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:35.134 [2024-05-13 06:01:43.410737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:35.134 [2024-05-13 06:01:43.411187] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:35.134 [2024-05-13 
06:01:43.411208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:35.134 [2024-05-13 06:01:43.411215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:35.134 [2024-05-13 06:01:43.411225] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:09:35.134 [2024-05-13 06:01:43.411257] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:09:35.134 [2024-05-13 06:01:43.411266] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:09:35.134 [2024-05-13 06:01:43.411290] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:09:35.134 [2024-05-13 06:01:43.411298] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:35.134 [2024-05-13 06:01:43.411301] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c887680 name raid_bdev1, state configuring 00:09:35.134 request: 00:09:35.134 { 00:09:35.134 "name": "raid_bdev1", 00:09:35.134 "raid_level": "raid0", 00:09:35.134 "base_bdevs": [ 00:09:35.134 "malloc1", 00:09:35.134 "malloc2", 00:09:35.134 "malloc3", 00:09:35.134 "malloc4" 00:09:35.134 ], 00:09:35.134 "superblock": false, 00:09:35.134 "strip_size_kb": 64, 00:09:35.134 "method": "bdev_raid_create", 00:09:35.134 "req_id": 1 00:09:35.134 } 00:09:35.134 Got JSON-RPC error response 00:09:35.134 response: 00:09:35.134 { 00:09:35.134 "code": -17, 00:09:35.134 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:35.134 } 00:09:35.134 06:01:43 -- common/autotest_common.sh@643 -- # es=1 00:09:35.134 06:01:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:35.134 06:01:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:35.134 06:01:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:35.134 06:01:43 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.134 06:01:43 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:09:35.393 06:01:43 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:09:35.393 06:01:43 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:09:35.393 06:01:43 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:35.653 [2024-05-13 06:01:43.734794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:35.653 [2024-05-13 06:01:43.734828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.653 [2024-05-13 06:01:43.734868] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c887180 00:09:35.653 [2024-05-13 06:01:43.734874] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.653 [2024-05-13 06:01:43.735332] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.653 [2024-05-13 06:01:43.735362] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:35.653 [2024-05-13 06:01:43.735379] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:09:35.653 [2024-05-13 06:01:43.735400] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:35.653 pt1 
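--- annotation: the "File exists" failure above is the superblock path working as intended: each malloc bdev still carries the raid superblock written by the earlier bdev_raid_create -s run, so re-creating raid_bdev1 over the raw mallocs is rejected with JSON-RPC error -17, while a passthru bdev layered on the same malloc is examined, its superblock is found, and it is claimed straight into raid_bdev1. A sketch of both halves, using the commands from this run (the error message string is illustrative):

  # superblock_reassembly.sh - sketch of the examine-driven reassembly above
  rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # must fail: the member bdevs already carry a raid superblock
  if $rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
    echo "expected JSON-RPC error -17 (File exists)" >&2; exit 1
  fi
  # succeeds: examine finds the superblock on pt1 and claims it for raid_bdev1
  $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001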
00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:35.653 "name": "raid_bdev1", 00:09:35.653 "uuid": "44d37450-10ee-11ef-ba60-3508ead7bdda", 00:09:35.653 "strip_size_kb": 64, 00:09:35.653 "state": "configuring", 00:09:35.653 "raid_level": "raid0", 00:09:35.653 "superblock": true, 00:09:35.653 "num_base_bdevs": 4, 00:09:35.653 "num_base_bdevs_discovered": 1, 00:09:35.653 "num_base_bdevs_operational": 4, 00:09:35.653 "base_bdevs_list": [ 00:09:35.653 { 00:09:35.653 "name": "pt1", 00:09:35.653 "uuid": "02048b26-498f-f051-9edb-3ad6ae97baee", 00:09:35.653 "is_configured": true, 00:09:35.653 "data_offset": 2048, 00:09:35.653 "data_size": 63488 00:09:35.653 }, 00:09:35.653 { 00:09:35.653 "name": null, 00:09:35.653 "uuid": "d77fb9fd-6acf-3451-94a3-cbd5d8d502fa", 00:09:35.653 "is_configured": false, 00:09:35.653 "data_offset": 2048, 00:09:35.653 "data_size": 63488 00:09:35.653 }, 00:09:35.653 { 00:09:35.653 "name": null, 00:09:35.653 "uuid": "b9faa9d4-2a0f-8157-b543-5d022cd97f1b", 00:09:35.653 "is_configured": false, 00:09:35.653 "data_offset": 2048, 00:09:35.653 "data_size": 63488 00:09:35.653 }, 00:09:35.653 { 00:09:35.653 "name": null, 00:09:35.653 "uuid": "6a7d163a-abe6-3152-af7a-97cb1724cd10", 00:09:35.653 "is_configured": false, 00:09:35.653 "data_offset": 2048, 00:09:35.653 "data_size": 63488 00:09:35.653 } 00:09:35.653 ] 00:09:35.653 }' 00:09:35.653 06:01:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:35.653 06:01:43 -- common/autotest_common.sh@10 -- # set +x 00:09:35.911 06:01:44 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:09:35.911 06:01:44 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:36.169 [2024-05-13 06:01:44.330893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:36.169 [2024-05-13 06:01:44.330928] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.169 [2024-05-13 06:01:44.330952] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c886780 00:09:36.169 [2024-05-13 06:01:44.330957] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.169 [2024-05-13 06:01:44.331043] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.169 [2024-05-13 06:01:44.331051] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: pt2 00:09:36.169 [2024-05-13 06:01:44.331069] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:36.170 [2024-05-13 06:01:44.331075] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:36.170 pt2 00:09:36.170 06:01:44 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:36.170 [2024-05-13 06:01:44.478914] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:36.429 "name": "raid_bdev1", 00:09:36.429 "uuid": "44d37450-10ee-11ef-ba60-3508ead7bdda", 00:09:36.429 "strip_size_kb": 64, 00:09:36.429 "state": "configuring", 00:09:36.429 "raid_level": "raid0", 00:09:36.429 "superblock": true, 00:09:36.429 "num_base_bdevs": 4, 00:09:36.429 "num_base_bdevs_discovered": 1, 00:09:36.429 "num_base_bdevs_operational": 4, 00:09:36.429 "base_bdevs_list": [ 00:09:36.429 { 00:09:36.429 "name": "pt1", 00:09:36.429 "uuid": "02048b26-498f-f051-9edb-3ad6ae97baee", 00:09:36.429 "is_configured": true, 00:09:36.429 "data_offset": 2048, 00:09:36.429 "data_size": 63488 00:09:36.429 }, 00:09:36.429 { 00:09:36.429 "name": null, 00:09:36.429 "uuid": "d77fb9fd-6acf-3451-94a3-cbd5d8d502fa", 00:09:36.429 "is_configured": false, 00:09:36.429 "data_offset": 2048, 00:09:36.429 "data_size": 63488 00:09:36.429 }, 00:09:36.429 { 00:09:36.429 "name": null, 00:09:36.429 "uuid": "b9faa9d4-2a0f-8157-b543-5d022cd97f1b", 00:09:36.429 "is_configured": false, 00:09:36.429 "data_offset": 2048, 00:09:36.429 "data_size": 63488 00:09:36.429 }, 00:09:36.429 { 00:09:36.429 "name": null, 00:09:36.429 "uuid": "6a7d163a-abe6-3152-af7a-97cb1724cd10", 00:09:36.429 "is_configured": false, 00:09:36.429 "data_offset": 2048, 00:09:36.429 "data_size": 63488 00:09:36.429 } 00:09:36.429 ] 00:09:36.429 }' 00:09:36.429 06:01:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:36.429 06:01:44 -- common/autotest_common.sh@10 -- # set +x 00:09:36.688 06:01:44 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:09:36.688 06:01:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:36.688 06:01:44 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:36.947 [2024-05-13 06:01:45.091018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:09:36.947 [2024-05-13 06:01:45.091056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.947 [2024-05-13 06:01:45.091092] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c886780 00:09:36.947 [2024-05-13 06:01:45.091099] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.947 [2024-05-13 06:01:45.091167] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.947 [2024-05-13 06:01:45.091174] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:36.947 [2024-05-13 06:01:45.091188] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:09:36.947 [2024-05-13 06:01:45.091195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:36.947 pt2 00:09:36.947 06:01:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:36.947 06:01:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:36.947 06:01:45 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:37.206 [2024-05-13 06:01:45.263045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:37.206 [2024-05-13 06:01:45.263078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.206 [2024-05-13 06:01:45.263095] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c887b80 00:09:37.206 [2024-05-13 06:01:45.263101] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.206 [2024-05-13 06:01:45.263170] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.206 [2024-05-13 06:01:45.263177] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:37.206 [2024-05-13 06:01:45.263189] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:09:37.206 [2024-05-13 06:01:45.263195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:37.206 pt3 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:37.206 [2024-05-13 06:01:45.435073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:37.206 [2024-05-13 06:01:45.435106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.206 [2024-05-13 06:01:45.435119] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c887900 00:09:37.206 [2024-05-13 06:01:45.435125] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.206 [2024-05-13 06:01:45.435189] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.206 [2024-05-13 06:01:45.435196] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:37.206 [2024-05-13 06:01:45.435207] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:09:37.206 [2024-05-13 06:01:45.435212] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:37.206 
[2024-05-13 06:01:45.435231] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c886c80 00:09:37.206 [2024-05-13 06:01:45.435234] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:37.206 [2024-05-13 06:01:45.435249] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c8e9e20 00:09:37.206 [2024-05-13 06:01:45.435283] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c886c80 00:09:37.206 [2024-05-13 06:01:45.435286] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c886c80 00:09:37.206 [2024-05-13 06:01:45.435301] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.206 pt4 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.206 06:01:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:37.465 06:01:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:37.465 "name": "raid_bdev1", 00:09:37.465 "uuid": "44d37450-10ee-11ef-ba60-3508ead7bdda", 00:09:37.465 "strip_size_kb": 64, 00:09:37.465 "state": "online", 00:09:37.465 "raid_level": "raid0", 00:09:37.465 "superblock": true, 00:09:37.465 "num_base_bdevs": 4, 00:09:37.465 "num_base_bdevs_discovered": 4, 00:09:37.465 "num_base_bdevs_operational": 4, 00:09:37.465 "base_bdevs_list": [ 00:09:37.465 { 00:09:37.466 "name": "pt1", 00:09:37.466 "uuid": "02048b26-498f-f051-9edb-3ad6ae97baee", 00:09:37.466 "is_configured": true, 00:09:37.466 "data_offset": 2048, 00:09:37.466 "data_size": 63488 00:09:37.466 }, 00:09:37.466 { 00:09:37.466 "name": "pt2", 00:09:37.466 "uuid": "d77fb9fd-6acf-3451-94a3-cbd5d8d502fa", 00:09:37.466 "is_configured": true, 00:09:37.466 "data_offset": 2048, 00:09:37.466 "data_size": 63488 00:09:37.466 }, 00:09:37.466 { 00:09:37.466 "name": "pt3", 00:09:37.466 "uuid": "b9faa9d4-2a0f-8157-b543-5d022cd97f1b", 00:09:37.466 "is_configured": true, 00:09:37.466 "data_offset": 2048, 00:09:37.466 "data_size": 63488 00:09:37.466 }, 00:09:37.466 { 00:09:37.466 "name": "pt4", 00:09:37.466 "uuid": "6a7d163a-abe6-3152-af7a-97cb1724cd10", 00:09:37.466 "is_configured": true, 00:09:37.466 "data_offset": 2048, 00:09:37.466 "data_size": 63488 00:09:37.466 } 00:09:37.466 ] 00:09:37.466 }' 00:09:37.466 06:01:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:37.466 06:01:45 -- common/autotest_common.sh@10 -- # set +x 00:09:37.725 06:01:45 -- bdev/bdev_raid.sh@430 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:37.725 06:01:45 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:09:37.985 [2024-05-13 06:01:46.043196] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@430 -- # '[' 44d37450-10ee-11ef-ba60-3508ead7bdda '!=' 44d37450-10ee-11ef-ba60-3508ead7bdda ']' 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@511 -- # killprocess 51979 00:09:37.985 06:01:46 -- common/autotest_common.sh@926 -- # '[' -z 51979 ']' 00:09:37.985 06:01:46 -- common/autotest_common.sh@930 -- # kill -0 51979 00:09:37.985 06:01:46 -- common/autotest_common.sh@931 -- # uname 00:09:37.985 06:01:46 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:37.985 06:01:46 -- common/autotest_common.sh@934 -- # ps -c -o command 51979 00:09:37.985 06:01:46 -- common/autotest_common.sh@934 -- # tail -1 00:09:37.985 06:01:46 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:37.985 06:01:46 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:37.985 killing process with pid 51979 00:09:37.985 06:01:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51979' 00:09:37.985 06:01:46 -- common/autotest_common.sh@945 -- # kill 51979 00:09:37.985 [2024-05-13 06:01:46.073547] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:37.985 [2024-05-13 06:01:46.073562] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.985 [2024-05-13 06:01:46.073585] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.985 [2024-05-13 06:01:46.073588] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c886c80 name raid_bdev1, state offline 00:09:37.985 06:01:46 -- common/autotest_common.sh@950 -- # wait 51979 00:09:37.985 [2024-05-13 06:01:46.092093] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@513 -- # return 0 00:09:37.985 00:09:37.985 real 0m7.128s 00:09:37.985 user 0m12.167s 00:09:37.985 sys 0m1.365s 00:09:37.985 06:01:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.985 06:01:46 -- common/autotest_common.sh@10 -- # set +x 00:09:37.985 ************************************ 00:09:37.985 END TEST raid_superblock_test 00:09:37.985 ************************************ 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:37.985 06:01:46 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:37.985 06:01:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:37.985 06:01:46 -- common/autotest_common.sh@10 -- # set +x 00:09:37.985 ************************************ 00:09:37.985 START TEST raid_state_function_test 00:09:37.985 ************************************ 00:09:37.985 06:01:46 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@204 -- # 
local superblock=false 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@226 -- # raid_pid=52164 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52164' 00:09:37.985 Process raid pid: 52164 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:37.985 06:01:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52164 /var/tmp/spdk-raid.sock 00:09:37.985 06:01:46 -- common/autotest_common.sh@819 -- # '[' -z 52164 ']' 00:09:37.985 06:01:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:37.985 06:01:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:37.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:37.985 06:01:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:37.985 06:01:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:37.985 06:01:46 -- common/autotest_common.sh@10 -- # set +x 00:09:38.246 [2024-05-13 06:01:46.308232] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:09:38.246 [2024-05-13 06:01:46.308575] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:38.505 EAL: TSC is not safe to use in SMP mode 00:09:38.505 EAL: TSC is not invariant 00:09:38.506 [2024-05-13 06:01:46.726571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.506 [2024-05-13 06:01:46.798967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.506 [2024-05-13 06:01:46.799383] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.506 [2024-05-13 06:01:46.799393] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.074 06:01:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:39.074 06:01:47 -- common/autotest_common.sh@852 -- # return 0 00:09:39.074 06:01:47 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:39.074 [2024-05-13 06:01:47.342508] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.074 [2024-05-13 06:01:47.342553] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.074 [2024-05-13 06:01:47.342557] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.074 [2024-05-13 06:01:47.342563] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.074 [2024-05-13 06:01:47.342565] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.074 [2024-05-13 06:01:47.342570] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.074 [2024-05-13 06:01:47.342573] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:39.074 [2024-05-13 06:01:47.342595] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:39.074 06:01:47 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.074 06:01:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:39.074 06:01:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:39.074 06:01:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:39.074 06:01:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:39.074 06:01:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:39.074 06:01:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:39.074 06:01:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:39.074 06:01:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:39.074 06:01:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:39.074 06:01:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.075 06:01:47 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.334 06:01:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:39.334 "name": "Existed_Raid", 00:09:39.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.334 "strip_size_kb": 64, 00:09:39.334 "state": "configuring", 00:09:39.334 "raid_level": "concat", 00:09:39.334 "superblock": false, 00:09:39.334 "num_base_bdevs": 4, 00:09:39.334 
"num_base_bdevs_discovered": 0, 00:09:39.335 "num_base_bdevs_operational": 4, 00:09:39.335 "base_bdevs_list": [ 00:09:39.335 { 00:09:39.335 "name": "BaseBdev1", 00:09:39.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.335 "is_configured": false, 00:09:39.335 "data_offset": 0, 00:09:39.335 "data_size": 0 00:09:39.335 }, 00:09:39.335 { 00:09:39.335 "name": "BaseBdev2", 00:09:39.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.335 "is_configured": false, 00:09:39.335 "data_offset": 0, 00:09:39.335 "data_size": 0 00:09:39.335 }, 00:09:39.335 { 00:09:39.335 "name": "BaseBdev3", 00:09:39.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.335 "is_configured": false, 00:09:39.335 "data_offset": 0, 00:09:39.335 "data_size": 0 00:09:39.335 }, 00:09:39.335 { 00:09:39.335 "name": "BaseBdev4", 00:09:39.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.335 "is_configured": false, 00:09:39.335 "data_offset": 0, 00:09:39.335 "data_size": 0 00:09:39.335 } 00:09:39.335 ] 00:09:39.335 }' 00:09:39.335 06:01:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:39.335 06:01:47 -- common/autotest_common.sh@10 -- # set +x 00:09:39.594 06:01:47 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:39.854 [2024-05-13 06:01:47.942584] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.854 [2024-05-13 06:01:47.942601] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82adad500 name Existed_Raid, state configuring 00:09:39.854 06:01:47 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:39.854 [2024-05-13 06:01:48.118615] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.854 [2024-05-13 06:01:48.118648] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.854 [2024-05-13 06:01:48.118651] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.854 [2024-05-13 06:01:48.118656] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.854 [2024-05-13 06:01:48.118659] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.854 [2024-05-13 06:01:48.118664] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.854 [2024-05-13 06:01:48.118666] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:39.854 [2024-05-13 06:01:48.118671] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:39.854 06:01:48 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.143 [2024-05-13 06:01:48.267381] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.143 BaseBdev1 00:09:40.143 06:01:48 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:40.143 06:01:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:40.143 06:01:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:40.143 06:01:48 -- common/autotest_common.sh@889 -- # local i 00:09:40.143 06:01:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 
00:09:40.143 06:01:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:40.143 06:01:48 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:40.143 06:01:48 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.457 [ 00:09:40.457 { 00:09:40.457 "name": "BaseBdev1", 00:09:40.457 "aliases": [ 00:09:40.457 "48dfddea-10ee-11ef-ba60-3508ead7bdda" 00:09:40.457 ], 00:09:40.457 "product_name": "Malloc disk", 00:09:40.458 "block_size": 512, 00:09:40.458 "num_blocks": 65536, 00:09:40.458 "uuid": "48dfddea-10ee-11ef-ba60-3508ead7bdda", 00:09:40.458 "assigned_rate_limits": { 00:09:40.458 "rw_ios_per_sec": 0, 00:09:40.458 "rw_mbytes_per_sec": 0, 00:09:40.458 "r_mbytes_per_sec": 0, 00:09:40.458 "w_mbytes_per_sec": 0 00:09:40.458 }, 00:09:40.458 "claimed": true, 00:09:40.458 "claim_type": "exclusive_write", 00:09:40.458 "zoned": false, 00:09:40.458 "supported_io_types": { 00:09:40.458 "read": true, 00:09:40.458 "write": true, 00:09:40.458 "unmap": true, 00:09:40.458 "write_zeroes": true, 00:09:40.458 "flush": true, 00:09:40.458 "reset": true, 00:09:40.458 "compare": false, 00:09:40.458 "compare_and_write": false, 00:09:40.458 "abort": true, 00:09:40.458 "nvme_admin": false, 00:09:40.458 "nvme_io": false 00:09:40.458 }, 00:09:40.458 "memory_domains": [ 00:09:40.458 { 00:09:40.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.458 "dma_device_type": 2 00:09:40.458 } 00:09:40.458 ], 00:09:40.458 "driver_specific": {} 00:09:40.458 } 00:09:40.458 ] 00:09:40.458 06:01:48 -- common/autotest_common.sh@895 -- # return 0 00:09:40.458 06:01:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:40.458 06:01:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:40.458 06:01:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:40.458 06:01:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:40.458 06:01:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:40.458 06:01:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:40.458 06:01:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:40.458 06:01:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:40.458 06:01:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:40.458 06:01:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:40.458 06:01:48 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.458 06:01:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.718 06:01:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:40.718 "name": "Existed_Raid", 00:09:40.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.718 "strip_size_kb": 64, 00:09:40.718 "state": "configuring", 00:09:40.718 "raid_level": "concat", 00:09:40.718 "superblock": false, 00:09:40.718 "num_base_bdevs": 4, 00:09:40.718 "num_base_bdevs_discovered": 1, 00:09:40.718 "num_base_bdevs_operational": 4, 00:09:40.718 "base_bdevs_list": [ 00:09:40.718 { 00:09:40.718 "name": "BaseBdev1", 00:09:40.718 "uuid": "48dfddea-10ee-11ef-ba60-3508ead7bdda", 00:09:40.718 "is_configured": true, 00:09:40.718 "data_offset": 0, 00:09:40.718 "data_size": 65536 00:09:40.718 }, 00:09:40.718 { 00:09:40.718 "name": "BaseBdev2", 00:09:40.718 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:40.718 "is_configured": false, 00:09:40.718 "data_offset": 0, 00:09:40.718 "data_size": 0 00:09:40.718 }, 00:09:40.718 { 00:09:40.718 "name": "BaseBdev3", 00:09:40.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.718 "is_configured": false, 00:09:40.718 "data_offset": 0, 00:09:40.718 "data_size": 0 00:09:40.718 }, 00:09:40.718 { 00:09:40.718 "name": "BaseBdev4", 00:09:40.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.718 "is_configured": false, 00:09:40.718 "data_offset": 0, 00:09:40.718 "data_size": 0 00:09:40.718 } 00:09:40.718 ] 00:09:40.718 }' 00:09:40.718 06:01:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:40.718 06:01:48 -- common/autotest_common.sh@10 -- # set +x 00:09:40.977 06:01:49 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:40.977 [2024-05-13 06:01:49.194777] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.977 [2024-05-13 06:01:49.194797] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82adad500 name Existed_Raid, state configuring 00:09:40.977 06:01:49 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:09:40.977 06:01:49 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:41.237 [2024-05-13 06:01:49.342808] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.237 [2024-05-13 06:01:49.343431] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.237 [2024-05-13 06:01:49.343477] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.237 [2024-05-13 06:01:49.343481] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:41.237 [2024-05-13 06:01:49.343500] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:41.237 [2024-05-13 06:01:49.343503] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:41.237 [2024-05-13 06:01:49.343509] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:41.237 06:01:49 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:41.237 06:01:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:41.237 06:01:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:41.237 06:01:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:41.237 06:01:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:41.237 06:01:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:41.238 06:01:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:41.238 06:01:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:41.238 06:01:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:41.238 06:01:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:41.238 06:01:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:41.238 06:01:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:41.238 06:01:49 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.238 06:01:49 -- bdev/bdev_raid.sh@127 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:09:41.238 06:01:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:41.238 "name": "Existed_Raid", 00:09:41.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.238 "strip_size_kb": 64, 00:09:41.238 "state": "configuring", 00:09:41.238 "raid_level": "concat", 00:09:41.238 "superblock": false, 00:09:41.238 "num_base_bdevs": 4, 00:09:41.238 "num_base_bdevs_discovered": 1, 00:09:41.238 "num_base_bdevs_operational": 4, 00:09:41.238 "base_bdevs_list": [ 00:09:41.238 { 00:09:41.238 "name": "BaseBdev1", 00:09:41.238 "uuid": "48dfddea-10ee-11ef-ba60-3508ead7bdda", 00:09:41.238 "is_configured": true, 00:09:41.238 "data_offset": 0, 00:09:41.238 "data_size": 65536 00:09:41.238 }, 00:09:41.238 { 00:09:41.238 "name": "BaseBdev2", 00:09:41.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.238 "is_configured": false, 00:09:41.238 "data_offset": 0, 00:09:41.238 "data_size": 0 00:09:41.238 }, 00:09:41.238 { 00:09:41.238 "name": "BaseBdev3", 00:09:41.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.238 "is_configured": false, 00:09:41.238 "data_offset": 0, 00:09:41.238 "data_size": 0 00:09:41.238 }, 00:09:41.238 { 00:09:41.238 "name": "BaseBdev4", 00:09:41.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.238 "is_configured": false, 00:09:41.238 "data_offset": 0, 00:09:41.238 "data_size": 0 00:09:41.238 } 00:09:41.238 ] 00:09:41.238 }' 00:09:41.238 06:01:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:41.238 06:01:49 -- common/autotest_common.sh@10 -- # set +x 00:09:41.498 06:01:49 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:41.757 [2024-05-13 06:01:49.959016] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.757 BaseBdev2 00:09:41.757 06:01:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:41.757 06:01:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:09:41.757 06:01:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:41.757 06:01:49 -- common/autotest_common.sh@889 -- # local i 00:09:41.757 06:01:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:41.757 06:01:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:41.757 06:01:49 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:42.018 06:01:50 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:42.018 [ 00:09:42.018 { 00:09:42.018 "name": "BaseBdev2", 00:09:42.018 "aliases": [ 00:09:42.018 "49e2169d-10ee-11ef-ba60-3508ead7bdda" 00:09:42.018 ], 00:09:42.018 "product_name": "Malloc disk", 00:09:42.018 "block_size": 512, 00:09:42.018 "num_blocks": 65536, 00:09:42.018 "uuid": "49e2169d-10ee-11ef-ba60-3508ead7bdda", 00:09:42.018 "assigned_rate_limits": { 00:09:42.018 "rw_ios_per_sec": 0, 00:09:42.018 "rw_mbytes_per_sec": 0, 00:09:42.018 "r_mbytes_per_sec": 0, 00:09:42.018 "w_mbytes_per_sec": 0 00:09:42.018 }, 00:09:42.018 "claimed": true, 00:09:42.018 "claim_type": "exclusive_write", 00:09:42.018 "zoned": false, 00:09:42.018 "supported_io_types": { 00:09:42.018 "read": true, 00:09:42.018 "write": true, 00:09:42.018 "unmap": true, 00:09:42.018 "write_zeroes": true, 00:09:42.018 "flush": true, 00:09:42.018 "reset": true, 00:09:42.018 "compare": false, 00:09:42.018 
"compare_and_write": false, 00:09:42.018 "abort": true, 00:09:42.018 "nvme_admin": false, 00:09:42.018 "nvme_io": false 00:09:42.018 }, 00:09:42.018 "memory_domains": [ 00:09:42.018 { 00:09:42.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.018 "dma_device_type": 2 00:09:42.018 } 00:09:42.018 ], 00:09:42.018 "driver_specific": {} 00:09:42.018 } 00:09:42.018 ] 00:09:42.018 06:01:50 -- common/autotest_common.sh@895 -- # return 0 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.018 06:01:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.278 06:01:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:42.278 "name": "Existed_Raid", 00:09:42.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.278 "strip_size_kb": 64, 00:09:42.278 "state": "configuring", 00:09:42.278 "raid_level": "concat", 00:09:42.278 "superblock": false, 00:09:42.278 "num_base_bdevs": 4, 00:09:42.278 "num_base_bdevs_discovered": 2, 00:09:42.278 "num_base_bdevs_operational": 4, 00:09:42.278 "base_bdevs_list": [ 00:09:42.278 { 00:09:42.278 "name": "BaseBdev1", 00:09:42.278 "uuid": "48dfddea-10ee-11ef-ba60-3508ead7bdda", 00:09:42.278 "is_configured": true, 00:09:42.278 "data_offset": 0, 00:09:42.278 "data_size": 65536 00:09:42.278 }, 00:09:42.278 { 00:09:42.278 "name": "BaseBdev2", 00:09:42.278 "uuid": "49e2169d-10ee-11ef-ba60-3508ead7bdda", 00:09:42.278 "is_configured": true, 00:09:42.278 "data_offset": 0, 00:09:42.278 "data_size": 65536 00:09:42.278 }, 00:09:42.278 { 00:09:42.278 "name": "BaseBdev3", 00:09:42.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.278 "is_configured": false, 00:09:42.278 "data_offset": 0, 00:09:42.278 "data_size": 0 00:09:42.278 }, 00:09:42.278 { 00:09:42.278 "name": "BaseBdev4", 00:09:42.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.278 "is_configured": false, 00:09:42.278 "data_offset": 0, 00:09:42.278 "data_size": 0 00:09:42.278 } 00:09:42.278 ] 00:09:42.278 }' 00:09:42.278 06:01:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:42.278 06:01:50 -- common/autotest_common.sh@10 -- # set +x 00:09:42.537 06:01:50 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:42.797 [2024-05-13 06:01:50.911145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.797 BaseBdev3 00:09:42.797 06:01:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 
00:09:42.797 06:01:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:09:42.797 06:01:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:42.797 06:01:50 -- common/autotest_common.sh@889 -- # local i 00:09:42.797 06:01:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:42.797 06:01:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:42.797 06:01:50 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:42.797 06:01:51 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:43.057 [ 00:09:43.057 { 00:09:43.057 "name": "BaseBdev3", 00:09:43.057 "aliases": [ 00:09:43.057 "4a736013-10ee-11ef-ba60-3508ead7bdda" 00:09:43.057 ], 00:09:43.057 "product_name": "Malloc disk", 00:09:43.057 "block_size": 512, 00:09:43.057 "num_blocks": 65536, 00:09:43.057 "uuid": "4a736013-10ee-11ef-ba60-3508ead7bdda", 00:09:43.057 "assigned_rate_limits": { 00:09:43.057 "rw_ios_per_sec": 0, 00:09:43.057 "rw_mbytes_per_sec": 0, 00:09:43.057 "r_mbytes_per_sec": 0, 00:09:43.057 "w_mbytes_per_sec": 0 00:09:43.057 }, 00:09:43.057 "claimed": true, 00:09:43.057 "claim_type": "exclusive_write", 00:09:43.057 "zoned": false, 00:09:43.057 "supported_io_types": { 00:09:43.057 "read": true, 00:09:43.057 "write": true, 00:09:43.057 "unmap": true, 00:09:43.057 "write_zeroes": true, 00:09:43.057 "flush": true, 00:09:43.057 "reset": true, 00:09:43.057 "compare": false, 00:09:43.057 "compare_and_write": false, 00:09:43.057 "abort": true, 00:09:43.057 "nvme_admin": false, 00:09:43.057 "nvme_io": false 00:09:43.057 }, 00:09:43.057 "memory_domains": [ 00:09:43.057 { 00:09:43.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.057 "dma_device_type": 2 00:09:43.057 } 00:09:43.057 ], 00:09:43.057 "driver_specific": {} 00:09:43.057 } 00:09:43.057 ] 00:09:43.057 06:01:51 -- common/autotest_common.sh@895 -- # return 0 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.057 06:01:51 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:43.317 06:01:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:43.317 "name": "Existed_Raid", 00:09:43.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.317 "strip_size_kb": 64, 00:09:43.317 "state": "configuring", 00:09:43.317 "raid_level": "concat", 00:09:43.317 "superblock": false, 
00:09:43.317 "num_base_bdevs": 4, 00:09:43.317 "num_base_bdevs_discovered": 3, 00:09:43.317 "num_base_bdevs_operational": 4, 00:09:43.317 "base_bdevs_list": [ 00:09:43.317 { 00:09:43.317 "name": "BaseBdev1", 00:09:43.317 "uuid": "48dfddea-10ee-11ef-ba60-3508ead7bdda", 00:09:43.317 "is_configured": true, 00:09:43.317 "data_offset": 0, 00:09:43.317 "data_size": 65536 00:09:43.317 }, 00:09:43.317 { 00:09:43.317 "name": "BaseBdev2", 00:09:43.317 "uuid": "49e2169d-10ee-11ef-ba60-3508ead7bdda", 00:09:43.317 "is_configured": true, 00:09:43.317 "data_offset": 0, 00:09:43.317 "data_size": 65536 00:09:43.317 }, 00:09:43.317 { 00:09:43.317 "name": "BaseBdev3", 00:09:43.317 "uuid": "4a736013-10ee-11ef-ba60-3508ead7bdda", 00:09:43.317 "is_configured": true, 00:09:43.317 "data_offset": 0, 00:09:43.317 "data_size": 65536 00:09:43.317 }, 00:09:43.317 { 00:09:43.317 "name": "BaseBdev4", 00:09:43.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.317 "is_configured": false, 00:09:43.317 "data_offset": 0, 00:09:43.317 "data_size": 0 00:09:43.317 } 00:09:43.317 ] 00:09:43.317 }' 00:09:43.317 06:01:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:43.317 06:01:51 -- common/autotest_common.sh@10 -- # set +x 00:09:43.577 06:01:51 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:09:43.577 [2024-05-13 06:01:51.847281] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:43.578 [2024-05-13 06:01:51.847303] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82adada00 00:09:43.578 [2024-05-13 06:01:51.847306] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:43.578 [2024-05-13 06:01:51.847326] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ae10ec0 00:09:43.578 [2024-05-13 06:01:51.847398] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82adada00 00:09:43.578 [2024-05-13 06:01:51.847401] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82adada00 00:09:43.578 [2024-05-13 06:01:51.847423] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.578 BaseBdev4 00:09:43.578 06:01:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:09:43.578 06:01:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:09:43.578 06:01:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:43.578 06:01:51 -- common/autotest_common.sh@889 -- # local i 00:09:43.578 06:01:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:43.578 06:01:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:43.578 06:01:51 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:43.837 06:01:52 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:44.098 [ 00:09:44.098 { 00:09:44.098 "name": "BaseBdev4", 00:09:44.098 "aliases": [ 00:09:44.098 "4b0237fb-10ee-11ef-ba60-3508ead7bdda" 00:09:44.098 ], 00:09:44.098 "product_name": "Malloc disk", 00:09:44.098 "block_size": 512, 00:09:44.098 "num_blocks": 65536, 00:09:44.098 "uuid": "4b0237fb-10ee-11ef-ba60-3508ead7bdda", 00:09:44.098 "assigned_rate_limits": { 00:09:44.098 "rw_ios_per_sec": 0, 00:09:44.098 "rw_mbytes_per_sec": 0, 00:09:44.098 
"r_mbytes_per_sec": 0, 00:09:44.098 "w_mbytes_per_sec": 0 00:09:44.098 }, 00:09:44.098 "claimed": true, 00:09:44.098 "claim_type": "exclusive_write", 00:09:44.098 "zoned": false, 00:09:44.098 "supported_io_types": { 00:09:44.098 "read": true, 00:09:44.098 "write": true, 00:09:44.098 "unmap": true, 00:09:44.098 "write_zeroes": true, 00:09:44.098 "flush": true, 00:09:44.098 "reset": true, 00:09:44.098 "compare": false, 00:09:44.098 "compare_and_write": false, 00:09:44.098 "abort": true, 00:09:44.098 "nvme_admin": false, 00:09:44.098 "nvme_io": false 00:09:44.098 }, 00:09:44.098 "memory_domains": [ 00:09:44.098 { 00:09:44.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.098 "dma_device_type": 2 00:09:44.098 } 00:09:44.098 ], 00:09:44.098 "driver_specific": {} 00:09:44.098 } 00:09:44.098 ] 00:09:44.098 06:01:52 -- common/autotest_common.sh@895 -- # return 0 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:44.098 "name": "Existed_Raid", 00:09:44.098 "uuid": "4b023b89-10ee-11ef-ba60-3508ead7bdda", 00:09:44.098 "strip_size_kb": 64, 00:09:44.098 "state": "online", 00:09:44.098 "raid_level": "concat", 00:09:44.098 "superblock": false, 00:09:44.098 "num_base_bdevs": 4, 00:09:44.098 "num_base_bdevs_discovered": 4, 00:09:44.098 "num_base_bdevs_operational": 4, 00:09:44.098 "base_bdevs_list": [ 00:09:44.098 { 00:09:44.098 "name": "BaseBdev1", 00:09:44.098 "uuid": "48dfddea-10ee-11ef-ba60-3508ead7bdda", 00:09:44.098 "is_configured": true, 00:09:44.098 "data_offset": 0, 00:09:44.098 "data_size": 65536 00:09:44.098 }, 00:09:44.098 { 00:09:44.098 "name": "BaseBdev2", 00:09:44.098 "uuid": "49e2169d-10ee-11ef-ba60-3508ead7bdda", 00:09:44.098 "is_configured": true, 00:09:44.098 "data_offset": 0, 00:09:44.098 "data_size": 65536 00:09:44.098 }, 00:09:44.098 { 00:09:44.098 "name": "BaseBdev3", 00:09:44.098 "uuid": "4a736013-10ee-11ef-ba60-3508ead7bdda", 00:09:44.098 "is_configured": true, 00:09:44.098 "data_offset": 0, 00:09:44.098 "data_size": 65536 00:09:44.098 }, 00:09:44.098 { 00:09:44.098 "name": "BaseBdev4", 00:09:44.098 "uuid": "4b0237fb-10ee-11ef-ba60-3508ead7bdda", 00:09:44.098 "is_configured": true, 00:09:44.098 "data_offset": 0, 00:09:44.098 "data_size": 65536 00:09:44.098 } 00:09:44.098 ] 00:09:44.098 }' 00:09:44.098 06:01:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:44.098 06:01:52 -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.358 06:01:52 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:44.617 [2024-05-13 06:01:52.763358] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:44.617 [2024-05-13 06:01:52.763376] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.617 [2024-05-13 06:01:52.763386] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.617 06:01:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.877 06:01:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:44.877 "name": "Existed_Raid", 00:09:44.877 "uuid": "4b023b89-10ee-11ef-ba60-3508ead7bdda", 00:09:44.877 "strip_size_kb": 64, 00:09:44.877 "state": "offline", 00:09:44.877 "raid_level": "concat", 00:09:44.877 "superblock": false, 00:09:44.877 "num_base_bdevs": 4, 00:09:44.877 "num_base_bdevs_discovered": 3, 00:09:44.877 "num_base_bdevs_operational": 3, 00:09:44.877 "base_bdevs_list": [ 00:09:44.877 { 00:09:44.877 "name": null, 00:09:44.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.877 "is_configured": false, 00:09:44.877 "data_offset": 0, 00:09:44.877 "data_size": 65536 00:09:44.877 }, 00:09:44.877 { 00:09:44.877 "name": "BaseBdev2", 00:09:44.877 "uuid": "49e2169d-10ee-11ef-ba60-3508ead7bdda", 00:09:44.877 "is_configured": true, 00:09:44.877 "data_offset": 0, 00:09:44.877 "data_size": 65536 00:09:44.877 }, 00:09:44.877 { 00:09:44.877 "name": "BaseBdev3", 00:09:44.877 "uuid": "4a736013-10ee-11ef-ba60-3508ead7bdda", 00:09:44.877 "is_configured": true, 00:09:44.877 "data_offset": 0, 00:09:44.877 "data_size": 65536 00:09:44.877 }, 00:09:44.877 { 00:09:44.877 "name": "BaseBdev4", 00:09:44.877 "uuid": "4b0237fb-10ee-11ef-ba60-3508ead7bdda", 00:09:44.877 "is_configured": true, 00:09:44.877 "data_offset": 0, 00:09:44.877 "data_size": 65536 00:09:44.877 } 00:09:44.877 ] 00:09:44.877 }' 00:09:44.877 06:01:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:44.877 06:01:52 -- common/autotest_common.sh@10 -- # set +x 00:09:45.138 06:01:53 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:45.138 
06:01:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:45.138 06:01:53 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.138 06:01:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:45.138 06:01:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:45.138 06:01:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:45.138 06:01:53 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:45.398 [2024-05-13 06:01:53.536174] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:45.398 06:01:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:45.398 06:01:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:45.398 06:01:53 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.398 06:01:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:45.656 06:01:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:45.656 06:01:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:45.656 06:01:53 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:45.656 [2024-05-13 06:01:53.888904] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:45.656 06:01:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:45.656 06:01:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:45.656 06:01:53 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.656 06:01:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:45.915 06:01:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:45.915 06:01:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:45.915 06:01:54 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:09:46.175 [2024-05-13 06:01:54.241594] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:46.175 [2024-05-13 06:01:54.241614] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82adada00 name Existed_Raid, state offline 00:09:46.175 06:01:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:46.175 06:01:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:46.175 06:01:54 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.175 06:01:54 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:46.175 06:01:54 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:46.175 06:01:54 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:46.175 06:01:54 -- bdev/bdev_raid.sh@287 -- # killprocess 52164 00:09:46.175 06:01:54 -- common/autotest_common.sh@926 -- # '[' -z 52164 ']' 00:09:46.175 06:01:54 -- common/autotest_common.sh@930 -- # kill -0 52164 00:09:46.175 06:01:54 -- common/autotest_common.sh@931 -- # uname 00:09:46.175 06:01:54 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:46.175 06:01:54 -- common/autotest_common.sh@934 -- # ps -c -o command 52164 00:09:46.175 06:01:54 -- common/autotest_common.sh@934 -- # tail -1 00:09:46.175 killing process with pid 52164 00:09:46.175 06:01:54 -- 
common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:46.175 06:01:54 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:46.175 06:01:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52164' 00:09:46.175 06:01:54 -- common/autotest_common.sh@945 -- # kill 52164 00:09:46.175 [2024-05-13 06:01:54.442296] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.175 06:01:54 -- common/autotest_common.sh@950 -- # wait 52164 00:09:46.175 [2024-05-13 06:01:54.442332] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:46.435 00:09:46.435 real 0m8.292s 00:09:46.435 user 0m14.253s 00:09:46.435 sys 0m1.645s 00:09:46.435 06:01:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.435 06:01:54 -- common/autotest_common.sh@10 -- # set +x 00:09:46.435 ************************************ 00:09:46.435 END TEST raid_state_function_test 00:09:46.435 ************************************ 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:09:46.435 06:01:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:46.435 06:01:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:46.435 06:01:54 -- common/autotest_common.sh@10 -- # set +x 00:09:46.435 ************************************ 00:09:46.435 START TEST raid_state_function_test_sb 00:09:46.435 ************************************ 00:09:46.435 06:01:54 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:09:46.435 
06:01:54 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=52434 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52434' 00:09:46.435 Process raid pid: 52434 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:46.435 06:01:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52434 /var/tmp/spdk-raid.sock 00:09:46.435 06:01:54 -- common/autotest_common.sh@819 -- # '[' -z 52434 ']' 00:09:46.435 06:01:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:46.435 06:01:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:46.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:46.435 06:01:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:46.435 06:01:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:46.435 06:01:54 -- common/autotest_common.sh@10 -- # set +x 00:09:46.435 [2024-05-13 06:01:54.660256] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:46.435 [2024-05-13 06:01:54.660596] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:47.004 EAL: TSC is not safe to use in SMP mode 00:09:47.004 EAL: TSC is not invariant 00:09:47.004 [2024-05-13 06:01:55.079507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.004 [2024-05-13 06:01:55.166137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.004 [2024-05-13 06:01:55.166552] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.004 [2024-05-13 06:01:55.166565] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.263 06:01:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:47.263 06:01:55 -- common/autotest_common.sh@852 -- # return 0 00:09:47.263 06:01:55 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:47.522 [2024-05-13 06:01:55.705662] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.523 [2024-05-13 06:01:55.705724] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.523 [2024-05-13 06:01:55.705728] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.523 [2024-05-13 06:01:55.705734] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.523 [2024-05-13 06:01:55.705737] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.523 [2024-05-13 06:01:55.705743] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.523 [2024-05-13 06:01:55.705745] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:47.523 [2024-05-13 06:01:55.705750] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist 
now 00:09:47.523 06:01:55 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:47.523 06:01:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:47.523 06:01:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:47.523 06:01:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:47.523 06:01:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:47.523 06:01:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:47.523 06:01:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:47.523 06:01:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:47.523 06:01:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:47.523 06:01:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:47.523 06:01:55 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.523 06:01:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.782 06:01:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:47.782 "name": "Existed_Raid", 00:09:47.782 "uuid": "4d4ef845-10ee-11ef-ba60-3508ead7bdda", 00:09:47.782 "strip_size_kb": 64, 00:09:47.782 "state": "configuring", 00:09:47.782 "raid_level": "concat", 00:09:47.782 "superblock": true, 00:09:47.782 "num_base_bdevs": 4, 00:09:47.782 "num_base_bdevs_discovered": 0, 00:09:47.782 "num_base_bdevs_operational": 4, 00:09:47.782 "base_bdevs_list": [ 00:09:47.782 { 00:09:47.782 "name": "BaseBdev1", 00:09:47.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.782 "is_configured": false, 00:09:47.782 "data_offset": 0, 00:09:47.782 "data_size": 0 00:09:47.782 }, 00:09:47.782 { 00:09:47.782 "name": "BaseBdev2", 00:09:47.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.782 "is_configured": false, 00:09:47.782 "data_offset": 0, 00:09:47.782 "data_size": 0 00:09:47.782 }, 00:09:47.782 { 00:09:47.782 "name": "BaseBdev3", 00:09:47.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.782 "is_configured": false, 00:09:47.782 "data_offset": 0, 00:09:47.782 "data_size": 0 00:09:47.782 }, 00:09:47.782 { 00:09:47.782 "name": "BaseBdev4", 00:09:47.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.782 "is_configured": false, 00:09:47.782 "data_offset": 0, 00:09:47.782 "data_size": 0 00:09:47.782 } 00:09:47.782 ] 00:09:47.782 }' 00:09:47.782 06:01:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:47.782 06:01:55 -- common/autotest_common.sh@10 -- # set +x 00:09:48.041 06:01:56 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:48.041 [2024-05-13 06:01:56.317724] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.041 [2024-05-13 06:01:56.317740] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c542500 name Existed_Raid, state configuring 00:09:48.041 06:01:56 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:48.300 [2024-05-13 06:01:56.493754] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.300 [2024-05-13 06:01:56.493787] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.300 [2024-05-13 06:01:56.493790] 
bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.300 [2024-05-13 06:01:56.493796] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.300 [2024-05-13 06:01:56.493798] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:48.300 [2024-05-13 06:01:56.493803] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:48.300 [2024-05-13 06:01:56.493806] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:48.300 [2024-05-13 06:01:56.493828] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:48.300 06:01:56 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.558 [2024-05-13 06:01:56.670526] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.558 BaseBdev1 00:09:48.558 06:01:56 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:48.558 06:01:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:48.558 06:01:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:48.559 06:01:56 -- common/autotest_common.sh@889 -- # local i 00:09:48.559 06:01:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:48.559 06:01:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:48.559 06:01:56 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:48.559 06:01:56 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.818 [ 00:09:48.818 { 00:09:48.818 "name": "BaseBdev1", 00:09:48.818 "aliases": [ 00:09:48.818 "4de2152a-10ee-11ef-ba60-3508ead7bdda" 00:09:48.818 ], 00:09:48.818 "product_name": "Malloc disk", 00:09:48.818 "block_size": 512, 00:09:48.818 "num_blocks": 65536, 00:09:48.818 "uuid": "4de2152a-10ee-11ef-ba60-3508ead7bdda", 00:09:48.818 "assigned_rate_limits": { 00:09:48.818 "rw_ios_per_sec": 0, 00:09:48.818 "rw_mbytes_per_sec": 0, 00:09:48.818 "r_mbytes_per_sec": 0, 00:09:48.818 "w_mbytes_per_sec": 0 00:09:48.818 }, 00:09:48.818 "claimed": true, 00:09:48.818 "claim_type": "exclusive_write", 00:09:48.818 "zoned": false, 00:09:48.818 "supported_io_types": { 00:09:48.818 "read": true, 00:09:48.818 "write": true, 00:09:48.818 "unmap": true, 00:09:48.818 "write_zeroes": true, 00:09:48.818 "flush": true, 00:09:48.818 "reset": true, 00:09:48.818 "compare": false, 00:09:48.818 "compare_and_write": false, 00:09:48.818 "abort": true, 00:09:48.818 "nvme_admin": false, 00:09:48.818 "nvme_io": false 00:09:48.818 }, 00:09:48.818 "memory_domains": [ 00:09:48.818 { 00:09:48.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.818 "dma_device_type": 2 00:09:48.818 } 00:09:48.818 ], 00:09:48.818 "driver_specific": {} 00:09:48.818 } 00:09:48.818 ] 00:09:48.818 06:01:57 -- common/autotest_common.sh@895 -- # return 0 00:09:48.818 06:01:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:48.818 06:01:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:48.818 06:01:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:48.818 06:01:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:48.818 06:01:57 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:48.818 06:01:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:48.818 06:01:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:48.818 06:01:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:48.818 06:01:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:48.818 06:01:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:48.818 06:01:57 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.818 06:01:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.077 06:01:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:49.077 "name": "Existed_Raid", 00:09:49.077 "uuid": "4dc73939-10ee-11ef-ba60-3508ead7bdda", 00:09:49.077 "strip_size_kb": 64, 00:09:49.077 "state": "configuring", 00:09:49.077 "raid_level": "concat", 00:09:49.077 "superblock": true, 00:09:49.077 "num_base_bdevs": 4, 00:09:49.077 "num_base_bdevs_discovered": 1, 00:09:49.077 "num_base_bdevs_operational": 4, 00:09:49.077 "base_bdevs_list": [ 00:09:49.077 { 00:09:49.077 "name": "BaseBdev1", 00:09:49.077 "uuid": "4de2152a-10ee-11ef-ba60-3508ead7bdda", 00:09:49.077 "is_configured": true, 00:09:49.077 "data_offset": 2048, 00:09:49.077 "data_size": 63488 00:09:49.077 }, 00:09:49.077 { 00:09:49.077 "name": "BaseBdev2", 00:09:49.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.077 "is_configured": false, 00:09:49.077 "data_offset": 0, 00:09:49.077 "data_size": 0 00:09:49.077 }, 00:09:49.077 { 00:09:49.077 "name": "BaseBdev3", 00:09:49.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.077 "is_configured": false, 00:09:49.077 "data_offset": 0, 00:09:49.077 "data_size": 0 00:09:49.077 }, 00:09:49.077 { 00:09:49.077 "name": "BaseBdev4", 00:09:49.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.077 "is_configured": false, 00:09:49.077 "data_offset": 0, 00:09:49.077 "data_size": 0 00:09:49.077 } 00:09:49.077 ] 00:09:49.077 }' 00:09:49.077 06:01:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:49.077 06:01:57 -- common/autotest_common.sh@10 -- # set +x 00:09:49.336 06:01:57 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:49.336 [2024-05-13 06:01:57.621909] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:49.336 [2024-05-13 06:01:57.621930] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c542500 name Existed_Raid, state configuring 00:09:49.336 06:01:57 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:09:49.336 06:01:57 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:49.595 06:01:57 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:49.854 BaseBdev1 00:09:49.854 06:01:57 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:09:49.854 06:01:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:49.854 06:01:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:49.854 06:01:57 -- common/autotest_common.sh@889 -- # local i 00:09:49.854 06:01:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:49.854 06:01:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:49.854 06:01:57 -- 
common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:49.855 06:01:58 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.114 [ 00:09:50.114 { 00:09:50.114 "name": "BaseBdev1", 00:09:50.114 "aliases": [ 00:09:50.114 "4ea61f46-10ee-11ef-ba60-3508ead7bdda" 00:09:50.114 ], 00:09:50.114 "product_name": "Malloc disk", 00:09:50.114 "block_size": 512, 00:09:50.114 "num_blocks": 65536, 00:09:50.114 "uuid": "4ea61f46-10ee-11ef-ba60-3508ead7bdda", 00:09:50.114 "assigned_rate_limits": { 00:09:50.114 "rw_ios_per_sec": 0, 00:09:50.114 "rw_mbytes_per_sec": 0, 00:09:50.114 "r_mbytes_per_sec": 0, 00:09:50.114 "w_mbytes_per_sec": 0 00:09:50.114 }, 00:09:50.114 "claimed": false, 00:09:50.114 "zoned": false, 00:09:50.114 "supported_io_types": { 00:09:50.114 "read": true, 00:09:50.114 "write": true, 00:09:50.114 "unmap": true, 00:09:50.114 "write_zeroes": true, 00:09:50.114 "flush": true, 00:09:50.114 "reset": true, 00:09:50.114 "compare": false, 00:09:50.114 "compare_and_write": false, 00:09:50.114 "abort": true, 00:09:50.114 "nvme_admin": false, 00:09:50.114 "nvme_io": false 00:09:50.114 }, 00:09:50.114 "memory_domains": [ 00:09:50.114 { 00:09:50.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.114 "dma_device_type": 2 00:09:50.114 } 00:09:50.114 ], 00:09:50.114 "driver_specific": {} 00:09:50.114 } 00:09:50.114 ] 00:09:50.114 06:01:58 -- common/autotest_common.sh@895 -- # return 0 00:09:50.114 06:01:58 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:09:50.384 [2024-05-13 06:01:58.446618] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.384 [2024-05-13 06:01:58.447030] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.384 [2024-05-13 06:01:58.447079] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.384 [2024-05-13 06:01:58.447094] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.384 [2024-05-13 06:01:58.447117] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.384 [2024-05-13 06:01:58.447121] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:50.384 [2024-05-13 06:01:58.447126] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:50.384 06:01:58 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:50.384 "name": "Existed_Raid", 00:09:50.384 "uuid": "4ef134e6-10ee-11ef-ba60-3508ead7bdda", 00:09:50.384 "strip_size_kb": 64, 00:09:50.384 "state": "configuring", 00:09:50.384 "raid_level": "concat", 00:09:50.384 "superblock": true, 00:09:50.384 "num_base_bdevs": 4, 00:09:50.384 "num_base_bdevs_discovered": 1, 00:09:50.384 "num_base_bdevs_operational": 4, 00:09:50.384 "base_bdevs_list": [ 00:09:50.384 { 00:09:50.384 "name": "BaseBdev1", 00:09:50.384 "uuid": "4ea61f46-10ee-11ef-ba60-3508ead7bdda", 00:09:50.384 "is_configured": true, 00:09:50.384 "data_offset": 2048, 00:09:50.384 "data_size": 63488 00:09:50.384 }, 00:09:50.384 { 00:09:50.384 "name": "BaseBdev2", 00:09:50.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.384 "is_configured": false, 00:09:50.384 "data_offset": 0, 00:09:50.384 "data_size": 0 00:09:50.384 }, 00:09:50.384 { 00:09:50.384 "name": "BaseBdev3", 00:09:50.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.384 "is_configured": false, 00:09:50.384 "data_offset": 0, 00:09:50.384 "data_size": 0 00:09:50.384 }, 00:09:50.384 { 00:09:50.384 "name": "BaseBdev4", 00:09:50.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.384 "is_configured": false, 00:09:50.384 "data_offset": 0, 00:09:50.384 "data_size": 0 00:09:50.384 } 00:09:50.384 ] 00:09:50.384 }' 00:09:50.384 06:01:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:50.384 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:09:50.657 06:01:58 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:50.916 [2024-05-13 06:01:59.034801] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.916 BaseBdev2 00:09:50.916 06:01:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:50.916 06:01:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:09:50.916 06:01:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:50.916 06:01:59 -- common/autotest_common.sh@889 -- # local i 00:09:50.916 06:01:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:50.916 06:01:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:50.916 06:01:59 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:50.916 06:01:59 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.176 [ 00:09:51.176 { 00:09:51.176 "name": "BaseBdev2", 00:09:51.176 "aliases": [ 00:09:51.176 "4f4af0a4-10ee-11ef-ba60-3508ead7bdda" 00:09:51.176 ], 00:09:51.176 "product_name": "Malloc disk", 00:09:51.176 "block_size": 512, 00:09:51.176 "num_blocks": 65536, 00:09:51.176 "uuid": "4f4af0a4-10ee-11ef-ba60-3508ead7bdda", 00:09:51.176 "assigned_rate_limits": { 00:09:51.176 "rw_ios_per_sec": 0, 00:09:51.176 "rw_mbytes_per_sec": 0, 00:09:51.176 "r_mbytes_per_sec": 0, 00:09:51.176 "w_mbytes_per_sec": 0 00:09:51.176 }, 00:09:51.176 "claimed": true, 
00:09:51.176 "claim_type": "exclusive_write", 00:09:51.176 "zoned": false, 00:09:51.176 "supported_io_types": { 00:09:51.176 "read": true, 00:09:51.176 "write": true, 00:09:51.176 "unmap": true, 00:09:51.176 "write_zeroes": true, 00:09:51.176 "flush": true, 00:09:51.176 "reset": true, 00:09:51.176 "compare": false, 00:09:51.176 "compare_and_write": false, 00:09:51.176 "abort": true, 00:09:51.176 "nvme_admin": false, 00:09:51.176 "nvme_io": false 00:09:51.176 }, 00:09:51.176 "memory_domains": [ 00:09:51.176 { 00:09:51.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.176 "dma_device_type": 2 00:09:51.176 } 00:09:51.176 ], 00:09:51.176 "driver_specific": {} 00:09:51.176 } 00:09:51.176 ] 00:09:51.176 06:01:59 -- common/autotest_common.sh@895 -- # return 0 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.176 06:01:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.435 06:01:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:51.435 "name": "Existed_Raid", 00:09:51.435 "uuid": "4ef134e6-10ee-11ef-ba60-3508ead7bdda", 00:09:51.435 "strip_size_kb": 64, 00:09:51.435 "state": "configuring", 00:09:51.435 "raid_level": "concat", 00:09:51.435 "superblock": true, 00:09:51.435 "num_base_bdevs": 4, 00:09:51.435 "num_base_bdevs_discovered": 2, 00:09:51.435 "num_base_bdevs_operational": 4, 00:09:51.435 "base_bdevs_list": [ 00:09:51.435 { 00:09:51.435 "name": "BaseBdev1", 00:09:51.435 "uuid": "4ea61f46-10ee-11ef-ba60-3508ead7bdda", 00:09:51.435 "is_configured": true, 00:09:51.435 "data_offset": 2048, 00:09:51.435 "data_size": 63488 00:09:51.435 }, 00:09:51.435 { 00:09:51.435 "name": "BaseBdev2", 00:09:51.435 "uuid": "4f4af0a4-10ee-11ef-ba60-3508ead7bdda", 00:09:51.435 "is_configured": true, 00:09:51.435 "data_offset": 2048, 00:09:51.435 "data_size": 63488 00:09:51.435 }, 00:09:51.435 { 00:09:51.435 "name": "BaseBdev3", 00:09:51.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.435 "is_configured": false, 00:09:51.435 "data_offset": 0, 00:09:51.435 "data_size": 0 00:09:51.435 }, 00:09:51.435 { 00:09:51.435 "name": "BaseBdev4", 00:09:51.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.435 "is_configured": false, 00:09:51.435 "data_offset": 0, 00:09:51.435 "data_size": 0 00:09:51.435 } 00:09:51.435 ] 00:09:51.435 }' 00:09:51.435 06:01:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:51.435 06:01:59 -- common/autotest_common.sh@10 -- # set +x 00:09:51.694 06:01:59 -- bdev/bdev_raid.sh@256 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:51.694 [2024-05-13 06:01:59.990901] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.694 BaseBdev3 00:09:51.694 06:02:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:09:51.694 06:02:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:09:51.694 06:02:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:51.694 06:02:00 -- common/autotest_common.sh@889 -- # local i 00:09:51.694 06:02:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:51.694 06:02:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:51.694 06:02:00 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:51.953 06:02:00 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:52.213 [ 00:09:52.213 { 00:09:52.213 "name": "BaseBdev3", 00:09:52.213 "aliases": [ 00:09:52.213 "4fdcd625-10ee-11ef-ba60-3508ead7bdda" 00:09:52.213 ], 00:09:52.213 "product_name": "Malloc disk", 00:09:52.213 "block_size": 512, 00:09:52.213 "num_blocks": 65536, 00:09:52.213 "uuid": "4fdcd625-10ee-11ef-ba60-3508ead7bdda", 00:09:52.213 "assigned_rate_limits": { 00:09:52.213 "rw_ios_per_sec": 0, 00:09:52.213 "rw_mbytes_per_sec": 0, 00:09:52.213 "r_mbytes_per_sec": 0, 00:09:52.213 "w_mbytes_per_sec": 0 00:09:52.213 }, 00:09:52.213 "claimed": true, 00:09:52.213 "claim_type": "exclusive_write", 00:09:52.213 "zoned": false, 00:09:52.213 "supported_io_types": { 00:09:52.213 "read": true, 00:09:52.213 "write": true, 00:09:52.213 "unmap": true, 00:09:52.213 "write_zeroes": true, 00:09:52.213 "flush": true, 00:09:52.213 "reset": true, 00:09:52.213 "compare": false, 00:09:52.213 "compare_and_write": false, 00:09:52.213 "abort": true, 00:09:52.213 "nvme_admin": false, 00:09:52.213 "nvme_io": false 00:09:52.213 }, 00:09:52.213 "memory_domains": [ 00:09:52.213 { 00:09:52.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.213 "dma_device_type": 2 00:09:52.213 } 00:09:52.213 ], 00:09:52.213 "driver_specific": {} 00:09:52.213 } 00:09:52.213 ] 00:09:52.213 06:02:00 -- common/autotest_common.sh@895 -- # return 0 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.213 06:02:00 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:09:52.472 06:02:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:52.472 "name": "Existed_Raid", 00:09:52.472 "uuid": "4ef134e6-10ee-11ef-ba60-3508ead7bdda", 00:09:52.472 "strip_size_kb": 64, 00:09:52.472 "state": "configuring", 00:09:52.472 "raid_level": "concat", 00:09:52.472 "superblock": true, 00:09:52.472 "num_base_bdevs": 4, 00:09:52.472 "num_base_bdevs_discovered": 3, 00:09:52.472 "num_base_bdevs_operational": 4, 00:09:52.472 "base_bdevs_list": [ 00:09:52.472 { 00:09:52.472 "name": "BaseBdev1", 00:09:52.472 "uuid": "4ea61f46-10ee-11ef-ba60-3508ead7bdda", 00:09:52.472 "is_configured": true, 00:09:52.472 "data_offset": 2048, 00:09:52.472 "data_size": 63488 00:09:52.472 }, 00:09:52.472 { 00:09:52.472 "name": "BaseBdev2", 00:09:52.472 "uuid": "4f4af0a4-10ee-11ef-ba60-3508ead7bdda", 00:09:52.472 "is_configured": true, 00:09:52.472 "data_offset": 2048, 00:09:52.472 "data_size": 63488 00:09:52.472 }, 00:09:52.472 { 00:09:52.472 "name": "BaseBdev3", 00:09:52.472 "uuid": "4fdcd625-10ee-11ef-ba60-3508ead7bdda", 00:09:52.472 "is_configured": true, 00:09:52.472 "data_offset": 2048, 00:09:52.472 "data_size": 63488 00:09:52.472 }, 00:09:52.472 { 00:09:52.472 "name": "BaseBdev4", 00:09:52.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.472 "is_configured": false, 00:09:52.472 "data_offset": 0, 00:09:52.472 "data_size": 0 00:09:52.472 } 00:09:52.472 ] 00:09:52.472 }' 00:09:52.472 06:02:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:52.472 06:02:00 -- common/autotest_common.sh@10 -- # set +x 00:09:52.731 06:02:00 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:09:52.731 [2024-05-13 06:02:00.951026] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:52.731 [2024-05-13 06:02:00.951095] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c542a00 00:09:52.731 [2024-05-13 06:02:00.951100] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:52.731 [2024-05-13 06:02:00.951115] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c5a5ec0 00:09:52.731 [2024-05-13 06:02:00.951151] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c542a00 00:09:52.731 [2024-05-13 06:02:00.951154] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82c542a00 00:09:52.731 [2024-05-13 06:02:00.951168] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.731 BaseBdev4 00:09:52.731 06:02:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:09:52.731 06:02:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:09:52.731 06:02:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:52.731 06:02:00 -- common/autotest_common.sh@889 -- # local i 00:09:52.731 06:02:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:52.731 06:02:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:52.731 06:02:00 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:52.990 06:02:01 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:53.249 [ 00:09:53.249 { 00:09:53.249 "name": "BaseBdev4", 00:09:53.249 "aliases": [ 00:09:53.249 
"506f56d3-10ee-11ef-ba60-3508ead7bdda" 00:09:53.249 ], 00:09:53.249 "product_name": "Malloc disk", 00:09:53.249 "block_size": 512, 00:09:53.249 "num_blocks": 65536, 00:09:53.249 "uuid": "506f56d3-10ee-11ef-ba60-3508ead7bdda", 00:09:53.249 "assigned_rate_limits": { 00:09:53.249 "rw_ios_per_sec": 0, 00:09:53.249 "rw_mbytes_per_sec": 0, 00:09:53.249 "r_mbytes_per_sec": 0, 00:09:53.249 "w_mbytes_per_sec": 0 00:09:53.249 }, 00:09:53.249 "claimed": true, 00:09:53.249 "claim_type": "exclusive_write", 00:09:53.249 "zoned": false, 00:09:53.249 "supported_io_types": { 00:09:53.249 "read": true, 00:09:53.249 "write": true, 00:09:53.249 "unmap": true, 00:09:53.249 "write_zeroes": true, 00:09:53.249 "flush": true, 00:09:53.249 "reset": true, 00:09:53.249 "compare": false, 00:09:53.249 "compare_and_write": false, 00:09:53.249 "abort": true, 00:09:53.249 "nvme_admin": false, 00:09:53.249 "nvme_io": false 00:09:53.249 }, 00:09:53.249 "memory_domains": [ 00:09:53.249 { 00:09:53.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.249 "dma_device_type": 2 00:09:53.249 } 00:09:53.249 ], 00:09:53.249 "driver_specific": {} 00:09:53.249 } 00:09:53.249 ] 00:09:53.249 06:02:01 -- common/autotest_common.sh@895 -- # return 0 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:53.249 "name": "Existed_Raid", 00:09:53.249 "uuid": "4ef134e6-10ee-11ef-ba60-3508ead7bdda", 00:09:53.249 "strip_size_kb": 64, 00:09:53.249 "state": "online", 00:09:53.249 "raid_level": "concat", 00:09:53.249 "superblock": true, 00:09:53.249 "num_base_bdevs": 4, 00:09:53.249 "num_base_bdevs_discovered": 4, 00:09:53.249 "num_base_bdevs_operational": 4, 00:09:53.249 "base_bdevs_list": [ 00:09:53.249 { 00:09:53.249 "name": "BaseBdev1", 00:09:53.249 "uuid": "4ea61f46-10ee-11ef-ba60-3508ead7bdda", 00:09:53.249 "is_configured": true, 00:09:53.249 "data_offset": 2048, 00:09:53.249 "data_size": 63488 00:09:53.249 }, 00:09:53.249 { 00:09:53.249 "name": "BaseBdev2", 00:09:53.249 "uuid": "4f4af0a4-10ee-11ef-ba60-3508ead7bdda", 00:09:53.249 "is_configured": true, 00:09:53.249 "data_offset": 2048, 00:09:53.249 "data_size": 63488 00:09:53.249 }, 00:09:53.249 { 00:09:53.249 "name": "BaseBdev3", 00:09:53.249 "uuid": "4fdcd625-10ee-11ef-ba60-3508ead7bdda", 00:09:53.249 "is_configured": true, 00:09:53.249 "data_offset": 2048, 00:09:53.249 "data_size": 63488 00:09:53.249 
}, 00:09:53.249 { 00:09:53.249 "name": "BaseBdev4", 00:09:53.249 "uuid": "506f56d3-10ee-11ef-ba60-3508ead7bdda", 00:09:53.249 "is_configured": true, 00:09:53.249 "data_offset": 2048, 00:09:53.249 "data_size": 63488 00:09:53.249 } 00:09:53.249 ] 00:09:53.249 }' 00:09:53.249 06:02:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:53.249 06:02:01 -- common/autotest_common.sh@10 -- # set +x 00:09:53.508 06:02:01 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:53.768 [2024-05-13 06:02:01.911086] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:53.768 [2024-05-13 06:02:01.911104] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.768 [2024-05-13 06:02:01.911120] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.768 06:02:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.027 06:02:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:54.027 "name": "Existed_Raid", 00:09:54.027 "uuid": "4ef134e6-10ee-11ef-ba60-3508ead7bdda", 00:09:54.027 "strip_size_kb": 64, 00:09:54.027 "state": "offline", 00:09:54.027 "raid_level": "concat", 00:09:54.027 "superblock": true, 00:09:54.027 "num_base_bdevs": 4, 00:09:54.027 "num_base_bdevs_discovered": 3, 00:09:54.027 "num_base_bdevs_operational": 3, 00:09:54.027 "base_bdevs_list": [ 00:09:54.027 { 00:09:54.027 "name": null, 00:09:54.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.027 "is_configured": false, 00:09:54.027 "data_offset": 2048, 00:09:54.027 "data_size": 63488 00:09:54.027 }, 00:09:54.027 { 00:09:54.027 "name": "BaseBdev2", 00:09:54.027 "uuid": "4f4af0a4-10ee-11ef-ba60-3508ead7bdda", 00:09:54.027 "is_configured": true, 00:09:54.027 "data_offset": 2048, 00:09:54.027 "data_size": 63488 00:09:54.027 }, 00:09:54.027 { 00:09:54.027 "name": "BaseBdev3", 00:09:54.027 "uuid": "4fdcd625-10ee-11ef-ba60-3508ead7bdda", 00:09:54.027 "is_configured": true, 00:09:54.027 "data_offset": 2048, 00:09:54.027 "data_size": 63488 00:09:54.027 }, 00:09:54.027 { 00:09:54.027 "name": "BaseBdev4", 00:09:54.027 "uuid": "506f56d3-10ee-11ef-ba60-3508ead7bdda", 
00:09:54.027 "is_configured": true, 00:09:54.027 "data_offset": 2048, 00:09:54.027 "data_size": 63488 00:09:54.027 } 00:09:54.027 ] 00:09:54.027 }' 00:09:54.027 06:02:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:54.027 06:02:02 -- common/autotest_common.sh@10 -- # set +x 00:09:54.286 06:02:02 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:54.286 06:02:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:54.286 06:02:02 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.286 06:02:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:54.286 06:02:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:54.286 06:02:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.286 06:02:02 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:54.545 [2024-05-13 06:02:02.667857] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.545 06:02:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:54.545 06:02:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:54.545 06:02:02 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.545 06:02:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:54.545 06:02:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:54.545 06:02:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:54.545 06:02:02 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:54.803 [2024-05-13 06:02:03.020572] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:54.803 06:02:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:54.803 06:02:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:54.803 06:02:03 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.803 06:02:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:55.061 06:02:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:55.061 06:02:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:55.061 06:02:03 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:09:55.061 [2024-05-13 06:02:03.369249] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:55.061 [2024-05-13 06:02:03.369269] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c542a00 name Existed_Raid, state offline 00:09:55.320 06:02:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:55.320 06:02:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:55.320 06:02:03 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:55.320 06:02:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:55.320 06:02:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:55.320 06:02:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:55.320 06:02:03 -- bdev/bdev_raid.sh@287 -- # killprocess 52434 00:09:55.320 06:02:03 -- common/autotest_common.sh@926 -- # '[' -z 52434 ']' 00:09:55.320 06:02:03 -- common/autotest_common.sh@930 -- # kill -0 52434 00:09:55.320 
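The @273-@279 loop traced here is the core of the teardown check: base bdevs are deleted one at a time, the raid bdev must stay discoverable after each deletion, and only once the last base bdev is gone may the lookup come back empty. A condensed restatement of what the trace executes, using the socket and names from this run:

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    num_base_bdevs=4
    for (( i = 1; i < num_base_bdevs; i++ )); do
        # the raid must still answer to its name before the next base bdev is pulled
        name=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
        [ "$name" = Existed_Raid ] || exit 1
        "$rpc" -s "$sock" bdev_malloc_delete "BaseBdev$((i + 1))"
    done
    # concat has no redundancy, so by now the raid has gone offline and been
    # cleaned up; select(.) filters jq's null so the output is truly empty
    "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'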
06:02:03 -- common/autotest_common.sh@931 -- # uname 00:09:55.320 06:02:03 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:09:55.320 06:02:03 -- common/autotest_common.sh@934 -- # ps -c -o command 52434 00:09:55.320 06:02:03 -- common/autotest_common.sh@934 -- # tail -1 00:09:55.320 06:02:03 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:09:55.320 06:02:03 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:09:55.320 killing process with pid 52434 00:09:55.320 06:02:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52434' 00:09:55.320 06:02:03 -- common/autotest_common.sh@945 -- # kill 52434 00:09:55.320 [2024-05-13 06:02:03.573659] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.320 [2024-05-13 06:02:03.573692] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.320 06:02:03 -- common/autotest_common.sh@950 -- # wait 52434 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:55.579 00:09:55.579 real 0m9.071s 00:09:55.579 user 0m15.756s 00:09:55.579 sys 0m1.677s 00:09:55.579 06:02:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.579 06:02:03 -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 ************************************ 00:09:55.579 END TEST raid_state_function_test_sb 00:09:55.579 ************************************ 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:09:55.579 06:02:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:55.579 06:02:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.579 06:02:03 -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 ************************************ 00:09:55.579 START TEST raid_superblock_test 00:09:55.579 ************************************ 00:09:55.579 06:02:03 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@357 -- # raid_pid=52707 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@358 -- # waitforlisten 52707 /var/tmp/spdk-raid.sock 00:09:55.579 06:02:03 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:55.579 06:02:03 -- common/autotest_common.sh@819 -- # '[' -z 52707 ']' 
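The killprocess helper that tore down pid 52434 a few lines above, before the raid_superblock_test banner, runs on the FreeBSD host and therefore resolves the process name with ps rather than /proc before signalling. A condensed sketch of that guard, with the pid hard-coded for illustration only (the real helper takes it as an argument, and its sudo branch is elided here):

    pid=52434                                   # illustrative; captured at daemon launch in the test
    if kill -0 "$pid" 2>/dev/null; then         # is the process still alive?
        # FreeBSD ps: -c prints the bare command name; tail -1 drops the header row
        process_name=$(ps -c -o command "$pid" | tail -1)
        if [ "$process_name" != sudo ]; then    # never signal a sudo wrapper directly
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid" || true                     # wait only succeeds for a child of this shell
    fi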
00:09:55.579 06:02:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:55.579 06:02:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:55.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:55.579 06:02:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:55.579 06:02:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:55.579 06:02:03 -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 [2024-05-13 06:02:03.779439] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:55.579 [2024-05-13 06:02:03.779779] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:09:56.146 EAL: TSC is not safe to use in SMP mode 00:09:56.146 EAL: TSC is not invariant 00:09:56.146 [2024-05-13 06:02:04.198167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.146 [2024-05-13 06:02:04.282705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.146 [2024-05-13 06:02:04.283131] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.146 [2024-05-13 06:02:04.283144] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.406 06:02:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:56.406 06:02:04 -- common/autotest_common.sh@852 -- # return 0 00:09:56.406 06:02:04 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:09:56.406 06:02:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:56.406 06:02:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:09:56.406 06:02:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:09:56.406 06:02:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:56.406 06:02:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:56.406 06:02:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:56.406 06:02:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:56.406 06:02:04 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:56.665 malloc1 00:09:56.665 06:02:04 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:56.665 [2024-05-13 06:02:04.962212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:56.665 [2024-05-13 06:02:04.962256] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.665 [2024-05-13 06:02:04.962761] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c103780 00:09:56.665 [2024-05-13 06:02:04.962789] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.665 [2024-05-13 06:02:04.963441] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.665 [2024-05-13 06:02:04.963478] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:56.665 pt1 00:09:56.665 06:02:04 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:56.665 06:02:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:56.665 06:02:04 -- bdev/bdev_raid.sh@362 -- # local 
bdev_malloc=malloc2 00:09:56.665 06:02:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:09:56.665 06:02:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:56.665 06:02:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:56.665 06:02:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:56.665 06:02:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:56.665 06:02:04 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:56.925 malloc2 00:09:56.925 06:02:05 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:57.184 [2024-05-13 06:02:05.314253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:57.184 [2024-05-13 06:02:05.314313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.184 [2024-05-13 06:02:05.314336] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c103c80 00:09:57.184 [2024-05-13 06:02:05.314342] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.184 [2024-05-13 06:02:05.314750] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.184 [2024-05-13 06:02:05.314781] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:57.184 pt2 00:09:57.184 06:02:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:57.184 06:02:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:57.184 06:02:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:09:57.184 06:02:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:09:57.184 06:02:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:57.184 06:02:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.184 06:02:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.184 06:02:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.184 06:02:05 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:57.184 malloc3 00:09:57.443 06:02:05 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:57.443 [2024-05-13 06:02:05.658293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:57.443 [2024-05-13 06:02:05.658331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.443 [2024-05-13 06:02:05.658352] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c104180 00:09:57.443 [2024-05-13 06:02:05.658358] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.443 [2024-05-13 06:02:05.658789] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.443 [2024-05-13 06:02:05.658818] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:57.443 pt3 00:09:57.443 06:02:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:57.443 06:02:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:57.443 06:02:05 -- bdev/bdev_raid.sh@362 -- # local 
bdev_malloc=malloc4 00:09:57.443 06:02:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:09:57.443 06:02:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:57.443 06:02:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:57.443 06:02:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:09:57.443 06:02:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:57.443 06:02:05 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:09:57.701 malloc4 00:09:57.701 06:02:05 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:57.701 [2024-05-13 06:02:06.006334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:57.701 [2024-05-13 06:02:06.006397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.701 [2024-05-13 06:02:06.006431] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c104680 00:09:57.701 [2024-05-13 06:02:06.006438] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.701 [2024-05-13 06:02:06.006860] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.701 [2024-05-13 06:02:06.006890] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:57.701 pt4 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:09:57.959 [2024-05-13 06:02:06.154358] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:57.959 [2024-05-13 06:02:06.154729] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:57.959 [2024-05-13 06:02:06.154752] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:57.959 [2024-05-13 06:02:06.154760] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:57.959 [2024-05-13 06:02:06.154822] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c104900 00:09:57.959 [2024-05-13 06:02:06.154828] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:57.959 [2024-05-13 06:02:06.154853] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c166e20 00:09:57.959 [2024-05-13 06:02:06.154904] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c104900 00:09:57.959 [2024-05-13 06:02:06.154910] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c104900 00:09:57.959 [2024-05-13 06:02:06.154926] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.959 06:02:06 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.217 06:02:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:58.217 "name": "raid_bdev1", 00:09:58.217 "uuid": "5389507a-10ee-11ef-ba60-3508ead7bdda", 00:09:58.217 "strip_size_kb": 64, 00:09:58.217 "state": "online", 00:09:58.217 "raid_level": "concat", 00:09:58.217 "superblock": true, 00:09:58.217 "num_base_bdevs": 4, 00:09:58.217 "num_base_bdevs_discovered": 4, 00:09:58.217 "num_base_bdevs_operational": 4, 00:09:58.217 "base_bdevs_list": [ 00:09:58.217 { 00:09:58.217 "name": "pt1", 00:09:58.217 "uuid": "97991143-0d6e-1754-9005-87882f00ddf3", 00:09:58.217 "is_configured": true, 00:09:58.217 "data_offset": 2048, 00:09:58.217 "data_size": 63488 00:09:58.217 }, 00:09:58.217 { 00:09:58.217 "name": "pt2", 00:09:58.217 "uuid": "2b9601b0-5b52-0356-a80f-5a51e00370ef", 00:09:58.217 "is_configured": true, 00:09:58.217 "data_offset": 2048, 00:09:58.217 "data_size": 63488 00:09:58.217 }, 00:09:58.217 { 00:09:58.217 "name": "pt3", 00:09:58.217 "uuid": "653246b3-8314-b35c-97ba-8781a588a613", 00:09:58.217 "is_configured": true, 00:09:58.217 "data_offset": 2048, 00:09:58.217 "data_size": 63488 00:09:58.217 }, 00:09:58.217 { 00:09:58.217 "name": "pt4", 00:09:58.217 "uuid": "d04754b1-8964-f256-a86b-a5ba0141b004", 00:09:58.217 "is_configured": true, 00:09:58.217 "data_offset": 2048, 00:09:58.217 "data_size": 63488 00:09:58.217 } 00:09:58.217 ] 00:09:58.217 }' 00:09:58.217 06:02:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:58.217 06:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:58.476 06:02:06 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:58.476 06:02:06 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:09:58.476 [2024-05-13 06:02:06.742449] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.476 06:02:06 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5389507a-10ee-11ef-ba60-3508ead7bdda 00:09:58.476 06:02:06 -- bdev/bdev_raid.sh@380 -- # '[' -z 5389507a-10ee-11ef-ba60-3508ead7bdda ']' 00:09:58.476 06:02:06 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:58.734 [2024-05-13 06:02:06.918440] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.734 [2024-05-13 06:02:06.918456] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.734 [2024-05-13 06:02:06.918467] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.734 [2024-05-13 06:02:06.918495] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.734 [2024-05-13 06:02:06.918499] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c104900 name raid_bdev1, state offline 00:09:58.734 
06:02:06 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.734 06:02:06 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:09:58.994 06:02:07 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:09:58.994 06:02:07 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:09:58.994 06:02:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:58.994 06:02:07 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:58.994 06:02:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:58.994 06:02:07 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:59.253 06:02:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:59.253 06:02:07 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:59.512 06:02:07 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:09:59.512 06:02:07 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:09:59.512 06:02:07 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:59.512 06:02:07 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:59.772 06:02:07 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:09:59.772 06:02:07 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:59.772 06:02:07 -- common/autotest_common.sh@640 -- # local es=0 00:09:59.772 06:02:07 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:59.772 06:02:07 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:59.772 06:02:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:59.772 06:02:07 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:59.772 06:02:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:59.772 06:02:07 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:59.772 06:02:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:59.772 06:02:07 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:59.772 06:02:07 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:59.772 06:02:07 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:09:59.772 [2024-05-13 06:02:08.086587] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:59.772 [2024-05-13 06:02:08.087057] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:59.772 [2024-05-13 06:02:08.087078] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 
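
A minimal sketch (not captured log output) of the duplicate-create guard this NOT wrapper is driving; the -17 "File exists" response it expects appears just below. Assumes the same rpc.py/socket pair; a zero exit status here would fail the test.

    # Not from the log: illustrative only. The malloc bdevs still carry the
    # raid superblock written earlier, so bdev_raid_create must refuse them.
    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    if $rpc bdev_raid_create -z 64 -r concat \
           -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "duplicate raid creation unexpectedly succeeded" >&2
        exit 1
    fi
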
00:09:59.772 [2024-05-13 06:02:08.087085] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:59.772 [2024-05-13 06:02:08.087097] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:09:59.772 [2024-05-13 06:02:08.087131] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:09:59.772 [2024-05-13 06:02:08.087140] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:09:59.772 [2024-05-13 06:02:08.087148] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:09:59.772 [2024-05-13 06:02:08.087155] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.772 [2024-05-13 06:02:08.087158] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c104680 name raid_bdev1, state configuring 00:10:00.031 request: 00:10:00.031 { 00:10:00.031 "name": "raid_bdev1", 00:10:00.031 "raid_level": "concat", 00:10:00.031 "base_bdevs": [ 00:10:00.031 "malloc1", 00:10:00.031 "malloc2", 00:10:00.031 "malloc3", 00:10:00.031 "malloc4" 00:10:00.031 ], 00:10:00.031 "superblock": false, 00:10:00.031 "strip_size_kb": 64, 00:10:00.031 "method": "bdev_raid_create", 00:10:00.031 "req_id": 1 00:10:00.031 } 00:10:00.031 Got JSON-RPC error response 00:10:00.031 response: 00:10:00.031 { 00:10:00.031 "code": -17, 00:10:00.031 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:00.031 } 00:10:00.031 06:02:08 -- common/autotest_common.sh@643 -- # es=1 00:10:00.031 06:02:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:00.031 06:02:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:00.031 06:02:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:00.031 06:02:08 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.031 06:02:08 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:10:00.031 06:02:08 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:10:00.031 06:02:08 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:10:00.031 06:02:08 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:00.291 [2024-05-13 06:02:08.446626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:00.291 [2024-05-13 06:02:08.446665] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.291 [2024-05-13 06:02:08.446689] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c104180 00:10:00.291 [2024-05-13 06:02:08.446695] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.291 [2024-05-13 06:02:08.447187] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.291 [2024-05-13 06:02:08.447221] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:00.291 [2024-05-13 06:02:08.447239] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:10:00.291 [2024-05-13 06:02:08.447248] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:00.291 pt1 00:10:00.291 06:02:08 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 
4 00:10:00.291 06:02:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:00.291 06:02:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:00.291 06:02:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:00.291 06:02:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:00.291 06:02:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:00.291 06:02:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:00.291 06:02:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:00.291 06:02:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:00.291 06:02:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:00.291 06:02:08 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.291 06:02:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.550 06:02:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:00.550 "name": "raid_bdev1", 00:10:00.550 "uuid": "5389507a-10ee-11ef-ba60-3508ead7bdda", 00:10:00.550 "strip_size_kb": 64, 00:10:00.550 "state": "configuring", 00:10:00.550 "raid_level": "concat", 00:10:00.550 "superblock": true, 00:10:00.550 "num_base_bdevs": 4, 00:10:00.550 "num_base_bdevs_discovered": 1, 00:10:00.550 "num_base_bdevs_operational": 4, 00:10:00.550 "base_bdevs_list": [ 00:10:00.550 { 00:10:00.550 "name": "pt1", 00:10:00.550 "uuid": "97991143-0d6e-1754-9005-87882f00ddf3", 00:10:00.551 "is_configured": true, 00:10:00.551 "data_offset": 2048, 00:10:00.551 "data_size": 63488 00:10:00.551 }, 00:10:00.551 { 00:10:00.551 "name": null, 00:10:00.551 "uuid": "2b9601b0-5b52-0356-a80f-5a51e00370ef", 00:10:00.551 "is_configured": false, 00:10:00.551 "data_offset": 2048, 00:10:00.551 "data_size": 63488 00:10:00.551 }, 00:10:00.551 { 00:10:00.551 "name": null, 00:10:00.551 "uuid": "653246b3-8314-b35c-97ba-8781a588a613", 00:10:00.551 "is_configured": false, 00:10:00.551 "data_offset": 2048, 00:10:00.551 "data_size": 63488 00:10:00.551 }, 00:10:00.551 { 00:10:00.551 "name": null, 00:10:00.551 "uuid": "d04754b1-8964-f256-a86b-a5ba0141b004", 00:10:00.551 "is_configured": false, 00:10:00.551 "data_offset": 2048, 00:10:00.551 "data_size": 63488 00:10:00.551 } 00:10:00.551 ] 00:10:00.551 }' 00:10:00.551 06:02:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:00.551 06:02:08 -- common/autotest_common.sh@10 -- # set +x 00:10:00.811 06:02:08 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:10:00.811 06:02:08 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:00.811 [2024-05-13 06:02:09.054697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:00.811 [2024-05-13 06:02:09.054738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.811 [2024-05-13 06:02:09.054779] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c103780 00:10:00.811 [2024-05-13 06:02:09.054786] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.811 [2024-05-13 06:02:09.054866] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.811 [2024-05-13 06:02:09.054873] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:00.811 [2024-05-13 06:02:09.054888] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev pt2 00:10:00.811 [2024-05-13 06:02:09.054910] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:00.811 pt2 00:10:00.811 06:02:09 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:01.070 [2024-05-13 06:02:09.226715] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:01.070 06:02:09 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:01.070 06:02:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:01.070 06:02:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:01.070 06:02:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:01.070 06:02:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:01.070 06:02:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:01.070 06:02:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:01.070 06:02:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:01.070 06:02:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:01.070 06:02:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:01.070 06:02:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.070 06:02:09 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.330 06:02:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:01.330 "name": "raid_bdev1", 00:10:01.330 "uuid": "5389507a-10ee-11ef-ba60-3508ead7bdda", 00:10:01.330 "strip_size_kb": 64, 00:10:01.330 "state": "configuring", 00:10:01.330 "raid_level": "concat", 00:10:01.330 "superblock": true, 00:10:01.330 "num_base_bdevs": 4, 00:10:01.330 "num_base_bdevs_discovered": 1, 00:10:01.330 "num_base_bdevs_operational": 4, 00:10:01.330 "base_bdevs_list": [ 00:10:01.330 { 00:10:01.330 "name": "pt1", 00:10:01.330 "uuid": "97991143-0d6e-1754-9005-87882f00ddf3", 00:10:01.330 "is_configured": true, 00:10:01.330 "data_offset": 2048, 00:10:01.330 "data_size": 63488 00:10:01.330 }, 00:10:01.330 { 00:10:01.330 "name": null, 00:10:01.330 "uuid": "2b9601b0-5b52-0356-a80f-5a51e00370ef", 00:10:01.330 "is_configured": false, 00:10:01.330 "data_offset": 2048, 00:10:01.330 "data_size": 63488 00:10:01.330 }, 00:10:01.330 { 00:10:01.330 "name": null, 00:10:01.330 "uuid": "653246b3-8314-b35c-97ba-8781a588a613", 00:10:01.330 "is_configured": false, 00:10:01.330 "data_offset": 2048, 00:10:01.330 "data_size": 63488 00:10:01.330 }, 00:10:01.330 { 00:10:01.330 "name": null, 00:10:01.330 "uuid": "d04754b1-8964-f256-a86b-a5ba0141b004", 00:10:01.330 "is_configured": false, 00:10:01.330 "data_offset": 2048, 00:10:01.330 "data_size": 63488 00:10:01.330 } 00:10:01.330 ] 00:10:01.330 }' 00:10:01.330 06:02:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:01.330 06:02:09 -- common/autotest_common.sh@10 -- # set +x 00:10:01.590 06:02:09 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:10:01.590 06:02:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:01.590 06:02:09 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:01.590 [2024-05-13 06:02:09.834785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:01.590 [2024-05-13 06:02:09.834822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:10:01.590 [2024-05-13 06:02:09.834843] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c103780 00:10:01.590 [2024-05-13 06:02:09.834849] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.590 [2024-05-13 06:02:09.834912] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.590 [2024-05-13 06:02:09.834919] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:01.590 [2024-05-13 06:02:09.834932] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:01.590 [2024-05-13 06:02:09.834938] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:01.590 pt2 00:10:01.590 06:02:09 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:01.590 06:02:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:01.590 06:02:09 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:01.850 [2024-05-13 06:02:10.006804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:01.850 [2024-05-13 06:02:10.006831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.850 [2024-05-13 06:02:10.006851] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c104b80 00:10:01.850 [2024-05-13 06:02:10.006857] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.850 [2024-05-13 06:02:10.006906] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.850 [2024-05-13 06:02:10.006913] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:01.850 [2024-05-13 06:02:10.006924] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:10:01.850 [2024-05-13 06:02:10.006929] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:01.850 pt3 00:10:01.850 06:02:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:01.850 06:02:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:01.850 06:02:10 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:01.850 [2024-05-13 06:02:10.154819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:01.850 [2024-05-13 06:02:10.154849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.850 [2024-05-13 06:02:10.154863] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82c104900 00:10:01.850 [2024-05-13 06:02:10.154868] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.850 [2024-05-13 06:02:10.154931] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.851 [2024-05-13 06:02:10.154938] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:01.851 [2024-05-13 06:02:10.154950] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:10:01.851 [2024-05-13 06:02:10.154957] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:01.851 [2024-05-13 06:02:10.154978] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82c103c80 
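
A minimal sketch (not captured log output) of the rebuild loop the trace above walks through, assuming the same rpc.py/socket pair. Each passthru bdev is re-registered over its malloc backing device; the examine path then reports "raid superblock found" and the array re-assembles on its own.

    # Not from the log: illustrative only. -u pins a deterministic UUID,
    # presumably so the re-created pt bdev can be matched against the base
    # bdev entry recorded in the raid superblock.
    rpc="/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 2 3 4; do
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
             -u "00000000-0000-0000-0000-00000000000$i"
    done
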
00:10:01.851 [2024-05-13 06:02:10.154982] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:01.851 [2024-05-13 06:02:10.155006] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82c166e20 00:10:01.851 [2024-05-13 06:02:10.155041] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82c103c80 00:10:01.851 [2024-05-13 06:02:10.155044] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82c103c80 00:10:01.851 [2024-05-13 06:02:10.155059] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.851 pt4 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:02.110 "name": "raid_bdev1", 00:10:02.110 "uuid": "5389507a-10ee-11ef-ba60-3508ead7bdda", 00:10:02.110 "strip_size_kb": 64, 00:10:02.110 "state": "online", 00:10:02.110 "raid_level": "concat", 00:10:02.110 "superblock": true, 00:10:02.110 "num_base_bdevs": 4, 00:10:02.110 "num_base_bdevs_discovered": 4, 00:10:02.110 "num_base_bdevs_operational": 4, 00:10:02.110 "base_bdevs_list": [ 00:10:02.110 { 00:10:02.110 "name": "pt1", 00:10:02.110 "uuid": "97991143-0d6e-1754-9005-87882f00ddf3", 00:10:02.110 "is_configured": true, 00:10:02.110 "data_offset": 2048, 00:10:02.110 "data_size": 63488 00:10:02.110 }, 00:10:02.110 { 00:10:02.110 "name": "pt2", 00:10:02.110 "uuid": "2b9601b0-5b52-0356-a80f-5a51e00370ef", 00:10:02.110 "is_configured": true, 00:10:02.110 "data_offset": 2048, 00:10:02.110 "data_size": 63488 00:10:02.110 }, 00:10:02.110 { 00:10:02.110 "name": "pt3", 00:10:02.110 "uuid": "653246b3-8314-b35c-97ba-8781a588a613", 00:10:02.110 "is_configured": true, 00:10:02.110 "data_offset": 2048, 00:10:02.110 "data_size": 63488 00:10:02.110 }, 00:10:02.110 { 00:10:02.110 "name": "pt4", 00:10:02.110 "uuid": "d04754b1-8964-f256-a86b-a5ba0141b004", 00:10:02.110 "is_configured": true, 00:10:02.110 "data_offset": 2048, 00:10:02.110 "data_size": 63488 00:10:02.110 } 00:10:02.110 ] 00:10:02.110 }' 00:10:02.110 06:02:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:02.110 06:02:10 -- common/autotest_common.sh@10 -- # set +x 00:10:02.370 06:02:10 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:02.370 06:02:10 -- bdev/bdev_raid.sh@430 
-- # jq -r '.[] | .uuid' 00:10:02.629 [2024-05-13 06:02:10.758914] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.629 06:02:10 -- bdev/bdev_raid.sh@430 -- # '[' 5389507a-10ee-11ef-ba60-3508ead7bdda '!=' 5389507a-10ee-11ef-ba60-3508ead7bdda ']' 00:10:02.629 06:02:10 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:10:02.629 06:02:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:02.629 06:02:10 -- bdev/bdev_raid.sh@197 -- # return 1 00:10:02.629 06:02:10 -- bdev/bdev_raid.sh@511 -- # killprocess 52707 00:10:02.629 06:02:10 -- common/autotest_common.sh@926 -- # '[' -z 52707 ']' 00:10:02.629 06:02:10 -- common/autotest_common.sh@930 -- # kill -0 52707 00:10:02.629 06:02:10 -- common/autotest_common.sh@931 -- # uname 00:10:02.629 06:02:10 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:10:02.629 06:02:10 -- common/autotest_common.sh@934 -- # ps -c -o command 52707 00:10:02.629 06:02:10 -- common/autotest_common.sh@934 -- # tail -1 00:10:02.629 06:02:10 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:10:02.629 06:02:10 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:10:02.629 killing process with pid 52707 00:10:02.629 06:02:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52707' 00:10:02.629 06:02:10 -- common/autotest_common.sh@945 -- # kill 52707 00:10:02.629 [2024-05-13 06:02:10.789005] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.629 [2024-05-13 06:02:10.789021] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.629 [2024-05-13 06:02:10.789044] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.629 [2024-05-13 06:02:10.789047] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82c103c80 name raid_bdev1, state offline 00:10:02.629 06:02:10 -- common/autotest_common.sh@950 -- # wait 52707 00:10:02.629 [2024-05-13 06:02:10.807615] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.629 06:02:10 -- bdev/bdev_raid.sh@513 -- # return 0 00:10:02.629 00:10:02.629 real 0m7.178s 00:10:02.629 user 0m12.352s 00:10:02.629 sys 0m1.246s 00:10:02.629 06:02:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.629 06:02:10 -- common/autotest_common.sh@10 -- # set +x 00:10:02.629 ************************************ 00:10:02.629 END TEST raid_superblock_test 00:10:02.629 ************************************ 00:10:02.888 06:02:10 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:10:02.888 06:02:10 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:02.888 06:02:10 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:02.888 06:02:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:02.888 06:02:10 -- common/autotest_common.sh@10 -- # set +x 00:10:02.888 ************************************ 00:10:02.888 START TEST raid_state_function_test 00:10:02.888 ************************************ 00:10:02.888 06:02:11 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:10:02.888 06:02:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:10:02.888 06:02:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:10:02.888 06:02:11 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:10:02.888 06:02:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:10:02.888 06:02:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 
)) 00:10:02.888 06:02:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@226 -- # raid_pid=52892 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52892' 00:10:02.889 Process raid pid: 52892 00:10:02.889 06:02:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52892 /var/tmp/spdk-raid.sock 00:10:02.889 06:02:11 -- common/autotest_common.sh@819 -- # '[' -z 52892 ']' 00:10:02.889 06:02:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:02.889 06:02:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:02.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:02.889 06:02:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:02.889 06:02:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:02.889 06:02:11 -- common/autotest_common.sh@10 -- # set +x 00:10:02.889 [2024-05-13 06:02:11.026638] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:02.889 [2024-05-13 06:02:11.026983] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:03.148 EAL: TSC is not safe to use in SMP mode 00:10:03.148 EAL: TSC is not invariant 00:10:03.148 [2024-05-13 06:02:11.443990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.406 [2024-05-13 06:02:11.531146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.406 [2024-05-13 06:02:11.531648] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.406 [2024-05-13 06:02:11.531656] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.666 06:02:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:03.666 06:02:11 -- common/autotest_common.sh@852 -- # return 0 00:10:03.666 06:02:11 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:03.925 [2024-05-13 06:02:12.042728] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.925 [2024-05-13 06:02:12.042772] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.925 [2024-05-13 06:02:12.042776] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.925 [2024-05-13 06:02:12.042782] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.925 [2024-05-13 06:02:12.042784] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.925 [2024-05-13 06:02:12.042790] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.925 [2024-05-13 06:02:12.042792] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:03.925 [2024-05-13 06:02:12.042814] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:03.925 06:02:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:03.925 06:02:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:03.925 06:02:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:03.925 06:02:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:03.925 06:02:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:03.925 06:02:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:03.925 06:02:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:03.925 06:02:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:03.925 06:02:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:03.925 06:02:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:03.925 06:02:12 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:03.925 06:02:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.925 06:02:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:03.926 "name": "Existed_Raid", 00:10:03.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.926 "strip_size_kb": 0, 00:10:03.926 "state": "configuring", 00:10:03.926 "raid_level": "raid1", 00:10:03.926 "superblock": false, 00:10:03.926 "num_base_bdevs": 4, 00:10:03.926 "num_base_bdevs_discovered": 0, 
00:10:03.926 "num_base_bdevs_operational": 4, 00:10:03.926 "base_bdevs_list": [ 00:10:03.926 { 00:10:03.926 "name": "BaseBdev1", 00:10:03.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.926 "is_configured": false, 00:10:03.926 "data_offset": 0, 00:10:03.926 "data_size": 0 00:10:03.926 }, 00:10:03.926 { 00:10:03.926 "name": "BaseBdev2", 00:10:03.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.926 "is_configured": false, 00:10:03.926 "data_offset": 0, 00:10:03.926 "data_size": 0 00:10:03.926 }, 00:10:03.926 { 00:10:03.926 "name": "BaseBdev3", 00:10:03.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.926 "is_configured": false, 00:10:03.926 "data_offset": 0, 00:10:03.926 "data_size": 0 00:10:03.926 }, 00:10:03.926 { 00:10:03.926 "name": "BaseBdev4", 00:10:03.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.926 "is_configured": false, 00:10:03.926 "data_offset": 0, 00:10:03.926 "data_size": 0 00:10:03.926 } 00:10:03.926 ] 00:10:03.926 }' 00:10:03.926 06:02:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:03.926 06:02:12 -- common/autotest_common.sh@10 -- # set +x 00:10:04.202 06:02:12 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:04.462 [2024-05-13 06:02:12.650780] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.462 [2024-05-13 06:02:12.650797] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bc0a500 name Existed_Raid, state configuring 00:10:04.462 06:02:12 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:04.722 [2024-05-13 06:02:12.818802] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.722 [2024-05-13 06:02:12.818832] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.722 [2024-05-13 06:02:12.818835] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.722 [2024-05-13 06:02:12.818840] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.722 [2024-05-13 06:02:12.818843] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:04.722 [2024-05-13 06:02:12.818848] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:04.722 [2024-05-13 06:02:12.818868] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:04.722 [2024-05-13 06:02:12.818874] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:04.722 06:02:12 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.722 [2024-05-13 06:02:12.991577] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.722 BaseBdev1 00:10:04.722 06:02:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:10:04.722 06:02:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:04.722 06:02:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:04.722 06:02:13 -- common/autotest_common.sh@889 -- # local i 00:10:04.722 06:02:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:04.722 06:02:13 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:04.722 06:02:13 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:04.981 06:02:13 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:05.240 [ 00:10:05.240 { 00:10:05.240 "name": "BaseBdev1", 00:10:05.240 "aliases": [ 00:10:05.240 "579c7a52-10ee-11ef-ba60-3508ead7bdda" 00:10:05.240 ], 00:10:05.240 "product_name": "Malloc disk", 00:10:05.240 "block_size": 512, 00:10:05.240 "num_blocks": 65536, 00:10:05.240 "uuid": "579c7a52-10ee-11ef-ba60-3508ead7bdda", 00:10:05.240 "assigned_rate_limits": { 00:10:05.240 "rw_ios_per_sec": 0, 00:10:05.240 "rw_mbytes_per_sec": 0, 00:10:05.240 "r_mbytes_per_sec": 0, 00:10:05.240 "w_mbytes_per_sec": 0 00:10:05.240 }, 00:10:05.240 "claimed": true, 00:10:05.240 "claim_type": "exclusive_write", 00:10:05.240 "zoned": false, 00:10:05.240 "supported_io_types": { 00:10:05.240 "read": true, 00:10:05.240 "write": true, 00:10:05.240 "unmap": true, 00:10:05.240 "write_zeroes": true, 00:10:05.240 "flush": true, 00:10:05.240 "reset": true, 00:10:05.240 "compare": false, 00:10:05.240 "compare_and_write": false, 00:10:05.240 "abort": true, 00:10:05.240 "nvme_admin": false, 00:10:05.240 "nvme_io": false 00:10:05.240 }, 00:10:05.240 "memory_domains": [ 00:10:05.240 { 00:10:05.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.240 "dma_device_type": 2 00:10:05.240 } 00:10:05.240 ], 00:10:05.240 "driver_specific": {} 00:10:05.240 } 00:10:05.240 ] 00:10:05.240 06:02:13 -- common/autotest_common.sh@895 -- # return 0 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:05.240 "name": "Existed_Raid", 00:10:05.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.240 "strip_size_kb": 0, 00:10:05.240 "state": "configuring", 00:10:05.240 "raid_level": "raid1", 00:10:05.240 "superblock": false, 00:10:05.240 "num_base_bdevs": 4, 00:10:05.240 "num_base_bdevs_discovered": 1, 00:10:05.240 "num_base_bdevs_operational": 4, 00:10:05.240 "base_bdevs_list": [ 00:10:05.240 { 00:10:05.240 "name": "BaseBdev1", 00:10:05.240 "uuid": "579c7a52-10ee-11ef-ba60-3508ead7bdda", 00:10:05.240 "is_configured": true, 00:10:05.240 "data_offset": 0, 00:10:05.240 "data_size": 65536 00:10:05.240 }, 00:10:05.240 { 00:10:05.240 "name": "BaseBdev2", 00:10:05.240 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:05.240 "is_configured": false, 00:10:05.240 "data_offset": 0, 00:10:05.240 "data_size": 0 00:10:05.240 }, 00:10:05.240 { 00:10:05.240 "name": "BaseBdev3", 00:10:05.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.240 "is_configured": false, 00:10:05.240 "data_offset": 0, 00:10:05.240 "data_size": 0 00:10:05.240 }, 00:10:05.240 { 00:10:05.240 "name": "BaseBdev4", 00:10:05.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.240 "is_configured": false, 00:10:05.240 "data_offset": 0, 00:10:05.240 "data_size": 0 00:10:05.240 } 00:10:05.240 ] 00:10:05.240 }' 00:10:05.240 06:02:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:05.240 06:02:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.501 06:02:13 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:05.763 [2024-05-13 06:02:13.946920] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.763 [2024-05-13 06:02:13.946940] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bc0a500 name Existed_Raid, state configuring 00:10:05.763 06:02:13 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:10:05.763 06:02:13 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:06.022 [2024-05-13 06:02:14.122948] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.022 [2024-05-13 06:02:14.123574] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.022 [2024-05-13 06:02:14.123618] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.022 [2024-05-13 06:02:14.123622] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.022 [2024-05-13 06:02:14.123628] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.022 [2024-05-13 06:02:14.123631] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:06.022 [2024-05-13 06:02:14.123636] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:06.022 "name": "Existed_Raid", 00:10:06.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.022 "strip_size_kb": 0, 00:10:06.022 "state": "configuring", 00:10:06.022 "raid_level": "raid1", 00:10:06.022 "superblock": false, 00:10:06.022 "num_base_bdevs": 4, 00:10:06.022 "num_base_bdevs_discovered": 1, 00:10:06.022 "num_base_bdevs_operational": 4, 00:10:06.022 "base_bdevs_list": [ 00:10:06.022 { 00:10:06.022 "name": "BaseBdev1", 00:10:06.022 "uuid": "579c7a52-10ee-11ef-ba60-3508ead7bdda", 00:10:06.022 "is_configured": true, 00:10:06.022 "data_offset": 0, 00:10:06.022 "data_size": 65536 00:10:06.022 }, 00:10:06.022 { 00:10:06.022 "name": "BaseBdev2", 00:10:06.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.022 "is_configured": false, 00:10:06.022 "data_offset": 0, 00:10:06.022 "data_size": 0 00:10:06.022 }, 00:10:06.022 { 00:10:06.022 "name": "BaseBdev3", 00:10:06.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.022 "is_configured": false, 00:10:06.022 "data_offset": 0, 00:10:06.022 "data_size": 0 00:10:06.022 }, 00:10:06.022 { 00:10:06.022 "name": "BaseBdev4", 00:10:06.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.022 "is_configured": false, 00:10:06.022 "data_offset": 0, 00:10:06.022 "data_size": 0 00:10:06.022 } 00:10:06.022 ] 00:10:06.022 }' 00:10:06.022 06:02:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:06.022 06:02:14 -- common/autotest_common.sh@10 -- # set +x 00:10:06.281 06:02:14 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.541 [2024-05-13 06:02:14.703107] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.541 BaseBdev2 00:10:06.541 06:02:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:10:06.541 06:02:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:10:06.541 06:02:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:06.541 06:02:14 -- common/autotest_common.sh@889 -- # local i 00:10:06.541 06:02:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:06.541 06:02:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:06.541 06:02:14 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:06.800 06:02:14 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.800 [ 00:10:06.800 { 00:10:06.800 "name": "BaseBdev2", 00:10:06.800 "aliases": [ 00:10:06.800 "58a1bc94-10ee-11ef-ba60-3508ead7bdda" 00:10:06.800 ], 00:10:06.800 "product_name": "Malloc disk", 00:10:06.800 "block_size": 512, 00:10:06.800 "num_blocks": 65536, 00:10:06.800 "uuid": "58a1bc94-10ee-11ef-ba60-3508ead7bdda", 00:10:06.800 "assigned_rate_limits": { 00:10:06.800 "rw_ios_per_sec": 0, 00:10:06.800 "rw_mbytes_per_sec": 0, 00:10:06.800 "r_mbytes_per_sec": 0, 00:10:06.800 "w_mbytes_per_sec": 0 00:10:06.800 }, 00:10:06.800 "claimed": true, 00:10:06.800 "claim_type": "exclusive_write", 00:10:06.800 "zoned": false, 00:10:06.800 "supported_io_types": { 00:10:06.800 "read": true, 00:10:06.800 "write": true, 00:10:06.800 "unmap": true, 00:10:06.800 "write_zeroes": true, 00:10:06.800 "flush": true, 00:10:06.800 "reset": true, 00:10:06.800 "compare": false, 00:10:06.800 
"compare_and_write": false, 00:10:06.800 "abort": true, 00:10:06.800 "nvme_admin": false, 00:10:06.800 "nvme_io": false 00:10:06.800 }, 00:10:06.800 "memory_domains": [ 00:10:06.800 { 00:10:06.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.800 "dma_device_type": 2 00:10:06.800 } 00:10:06.800 ], 00:10:06.800 "driver_specific": {} 00:10:06.800 } 00:10:06.800 ] 00:10:06.800 06:02:15 -- common/autotest_common.sh@895 -- # return 0 00:10:06.800 06:02:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:06.800 06:02:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:06.800 06:02:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:06.800 06:02:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:06.800 06:02:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:06.800 06:02:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:06.800 06:02:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:06.800 06:02:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:06.800 06:02:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:06.800 06:02:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:06.800 06:02:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:06.800 06:02:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:06.801 06:02:15 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.801 06:02:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.060 06:02:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:07.060 "name": "Existed_Raid", 00:10:07.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.060 "strip_size_kb": 0, 00:10:07.060 "state": "configuring", 00:10:07.060 "raid_level": "raid1", 00:10:07.060 "superblock": false, 00:10:07.060 "num_base_bdevs": 4, 00:10:07.060 "num_base_bdevs_discovered": 2, 00:10:07.060 "num_base_bdevs_operational": 4, 00:10:07.060 "base_bdevs_list": [ 00:10:07.060 { 00:10:07.060 "name": "BaseBdev1", 00:10:07.060 "uuid": "579c7a52-10ee-11ef-ba60-3508ead7bdda", 00:10:07.060 "is_configured": true, 00:10:07.060 "data_offset": 0, 00:10:07.060 "data_size": 65536 00:10:07.060 }, 00:10:07.060 { 00:10:07.060 "name": "BaseBdev2", 00:10:07.060 "uuid": "58a1bc94-10ee-11ef-ba60-3508ead7bdda", 00:10:07.060 "is_configured": true, 00:10:07.060 "data_offset": 0, 00:10:07.060 "data_size": 65536 00:10:07.060 }, 00:10:07.060 { 00:10:07.060 "name": "BaseBdev3", 00:10:07.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.060 "is_configured": false, 00:10:07.060 "data_offset": 0, 00:10:07.060 "data_size": 0 00:10:07.060 }, 00:10:07.060 { 00:10:07.060 "name": "BaseBdev4", 00:10:07.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.060 "is_configured": false, 00:10:07.060 "data_offset": 0, 00:10:07.060 "data_size": 0 00:10:07.060 } 00:10:07.060 ] 00:10:07.060 }' 00:10:07.060 06:02:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:07.060 06:02:15 -- common/autotest_common.sh@10 -- # set +x 00:10:07.320 06:02:15 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.579 [2024-05-13 06:02:15.659189] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.579 BaseBdev3 00:10:07.579 06:02:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 
00:10:07.579 06:02:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:10:07.579 06:02:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:07.579 06:02:15 -- common/autotest_common.sh@889 -- # local i 00:10:07.579 06:02:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:07.579 06:02:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:07.579 06:02:15 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:07.579 06:02:15 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.838 [ 00:10:07.838 { 00:10:07.838 "name": "BaseBdev3", 00:10:07.838 "aliases": [ 00:10:07.838 "5933a084-10ee-11ef-ba60-3508ead7bdda" 00:10:07.838 ], 00:10:07.838 "product_name": "Malloc disk", 00:10:07.838 "block_size": 512, 00:10:07.838 "num_blocks": 65536, 00:10:07.838 "uuid": "5933a084-10ee-11ef-ba60-3508ead7bdda", 00:10:07.838 "assigned_rate_limits": { 00:10:07.838 "rw_ios_per_sec": 0, 00:10:07.838 "rw_mbytes_per_sec": 0, 00:10:07.838 "r_mbytes_per_sec": 0, 00:10:07.838 "w_mbytes_per_sec": 0 00:10:07.838 }, 00:10:07.838 "claimed": true, 00:10:07.838 "claim_type": "exclusive_write", 00:10:07.838 "zoned": false, 00:10:07.838 "supported_io_types": { 00:10:07.838 "read": true, 00:10:07.838 "write": true, 00:10:07.838 "unmap": true, 00:10:07.838 "write_zeroes": true, 00:10:07.838 "flush": true, 00:10:07.838 "reset": true, 00:10:07.838 "compare": false, 00:10:07.838 "compare_and_write": false, 00:10:07.838 "abort": true, 00:10:07.838 "nvme_admin": false, 00:10:07.838 "nvme_io": false 00:10:07.838 }, 00:10:07.838 "memory_domains": [ 00:10:07.838 { 00:10:07.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.838 "dma_device_type": 2 00:10:07.838 } 00:10:07.838 ], 00:10:07.838 "driver_specific": {} 00:10:07.838 } 00:10:07.838 ] 00:10:07.838 06:02:16 -- common/autotest_common.sh@895 -- # return 0 00:10:07.838 06:02:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:07.838 06:02:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:07.838 06:02:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:07.838 06:02:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:07.838 06:02:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:07.838 06:02:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:07.838 06:02:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:07.838 06:02:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:07.838 06:02:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:07.838 06:02:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:07.838 06:02:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:07.838 06:02:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:07.838 06:02:16 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.839 06:02:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.098 06:02:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:08.098 "name": "Existed_Raid", 00:10:08.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.098 "strip_size_kb": 0, 00:10:08.098 "state": "configuring", 00:10:08.098 "raid_level": "raid1", 00:10:08.098 "superblock": false, 00:10:08.098 
"num_base_bdevs": 4, 00:10:08.098 "num_base_bdevs_discovered": 3, 00:10:08.098 "num_base_bdevs_operational": 4, 00:10:08.098 "base_bdevs_list": [ 00:10:08.098 { 00:10:08.098 "name": "BaseBdev1", 00:10:08.098 "uuid": "579c7a52-10ee-11ef-ba60-3508ead7bdda", 00:10:08.098 "is_configured": true, 00:10:08.098 "data_offset": 0, 00:10:08.098 "data_size": 65536 00:10:08.098 }, 00:10:08.098 { 00:10:08.098 "name": "BaseBdev2", 00:10:08.098 "uuid": "58a1bc94-10ee-11ef-ba60-3508ead7bdda", 00:10:08.098 "is_configured": true, 00:10:08.098 "data_offset": 0, 00:10:08.098 "data_size": 65536 00:10:08.098 }, 00:10:08.098 { 00:10:08.098 "name": "BaseBdev3", 00:10:08.098 "uuid": "5933a084-10ee-11ef-ba60-3508ead7bdda", 00:10:08.098 "is_configured": true, 00:10:08.098 "data_offset": 0, 00:10:08.098 "data_size": 65536 00:10:08.098 }, 00:10:08.098 { 00:10:08.098 "name": "BaseBdev4", 00:10:08.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.098 "is_configured": false, 00:10:08.098 "data_offset": 0, 00:10:08.098 "data_size": 0 00:10:08.098 } 00:10:08.098 ] 00:10:08.098 }' 00:10:08.098 06:02:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:08.098 06:02:16 -- common/autotest_common.sh@10 -- # set +x 00:10:08.358 06:02:16 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:10:08.358 [2024-05-13 06:02:16.595328] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:08.358 [2024-05-13 06:02:16.595349] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82bc0aa00 00:10:08.358 [2024-05-13 06:02:16.595352] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:08.358 [2024-05-13 06:02:16.595390] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82bc6dec0 00:10:08.358 [2024-05-13 06:02:16.595466] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82bc0aa00 00:10:08.358 [2024-05-13 06:02:16.595469] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82bc0aa00 00:10:08.358 [2024-05-13 06:02:16.595491] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.358 BaseBdev4 00:10:08.358 06:02:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:10:08.358 06:02:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:10:08.358 06:02:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:08.358 06:02:16 -- common/autotest_common.sh@889 -- # local i 00:10:08.358 06:02:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:08.358 06:02:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:08.358 06:02:16 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:08.617 06:02:16 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:08.617 [ 00:10:08.617 { 00:10:08.617 "name": "BaseBdev4", 00:10:08.617 "aliases": [ 00:10:08.617 "59c278dd-10ee-11ef-ba60-3508ead7bdda" 00:10:08.617 ], 00:10:08.617 "product_name": "Malloc disk", 00:10:08.617 "block_size": 512, 00:10:08.617 "num_blocks": 65536, 00:10:08.617 "uuid": "59c278dd-10ee-11ef-ba60-3508ead7bdda", 00:10:08.617 "assigned_rate_limits": { 00:10:08.617 "rw_ios_per_sec": 0, 00:10:08.617 "rw_mbytes_per_sec": 0, 00:10:08.617 "r_mbytes_per_sec": 0, 
00:10:08.617 "w_mbytes_per_sec": 0 00:10:08.617 }, 00:10:08.617 "claimed": true, 00:10:08.617 "claim_type": "exclusive_write", 00:10:08.617 "zoned": false, 00:10:08.617 "supported_io_types": { 00:10:08.617 "read": true, 00:10:08.617 "write": true, 00:10:08.617 "unmap": true, 00:10:08.617 "write_zeroes": true, 00:10:08.617 "flush": true, 00:10:08.617 "reset": true, 00:10:08.617 "compare": false, 00:10:08.617 "compare_and_write": false, 00:10:08.617 "abort": true, 00:10:08.617 "nvme_admin": false, 00:10:08.617 "nvme_io": false 00:10:08.617 }, 00:10:08.617 "memory_domains": [ 00:10:08.617 { 00:10:08.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.617 "dma_device_type": 2 00:10:08.617 } 00:10:08.617 ], 00:10:08.617 "driver_specific": {} 00:10:08.617 } 00:10:08.617 ] 00:10:08.617 06:02:16 -- common/autotest_common.sh@895 -- # return 0 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.617 06:02:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.877 06:02:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:08.877 "name": "Existed_Raid", 00:10:08.877 "uuid": "59c27c1a-10ee-11ef-ba60-3508ead7bdda", 00:10:08.877 "strip_size_kb": 0, 00:10:08.877 "state": "online", 00:10:08.877 "raid_level": "raid1", 00:10:08.877 "superblock": false, 00:10:08.877 "num_base_bdevs": 4, 00:10:08.877 "num_base_bdevs_discovered": 4, 00:10:08.877 "num_base_bdevs_operational": 4, 00:10:08.877 "base_bdevs_list": [ 00:10:08.877 { 00:10:08.877 "name": "BaseBdev1", 00:10:08.877 "uuid": "579c7a52-10ee-11ef-ba60-3508ead7bdda", 00:10:08.877 "is_configured": true, 00:10:08.877 "data_offset": 0, 00:10:08.877 "data_size": 65536 00:10:08.877 }, 00:10:08.877 { 00:10:08.877 "name": "BaseBdev2", 00:10:08.877 "uuid": "58a1bc94-10ee-11ef-ba60-3508ead7bdda", 00:10:08.877 "is_configured": true, 00:10:08.877 "data_offset": 0, 00:10:08.877 "data_size": 65536 00:10:08.877 }, 00:10:08.877 { 00:10:08.877 "name": "BaseBdev3", 00:10:08.877 "uuid": "5933a084-10ee-11ef-ba60-3508ead7bdda", 00:10:08.877 "is_configured": true, 00:10:08.877 "data_offset": 0, 00:10:08.877 "data_size": 65536 00:10:08.877 }, 00:10:08.877 { 00:10:08.877 "name": "BaseBdev4", 00:10:08.877 "uuid": "59c278dd-10ee-11ef-ba60-3508ead7bdda", 00:10:08.877 "is_configured": true, 00:10:08.877 "data_offset": 0, 00:10:08.877 "data_size": 65536 00:10:08.877 } 00:10:08.877 ] 00:10:08.877 }' 00:10:08.877 06:02:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:08.877 06:02:17 -- common/autotest_common.sh@10 -- # set 
+x 00:10:09.136 06:02:17 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:09.396 [2024-05-13 06:02:17.519442] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@196 -- # return 0 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.396 06:02:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.655 06:02:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:09.655 "name": "Existed_Raid", 00:10:09.655 "uuid": "59c27c1a-10ee-11ef-ba60-3508ead7bdda", 00:10:09.655 "strip_size_kb": 0, 00:10:09.655 "state": "online", 00:10:09.655 "raid_level": "raid1", 00:10:09.655 "superblock": false, 00:10:09.655 "num_base_bdevs": 4, 00:10:09.655 "num_base_bdevs_discovered": 3, 00:10:09.655 "num_base_bdevs_operational": 3, 00:10:09.655 "base_bdevs_list": [ 00:10:09.655 { 00:10:09.655 "name": null, 00:10:09.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.655 "is_configured": false, 00:10:09.655 "data_offset": 0, 00:10:09.655 "data_size": 65536 00:10:09.655 }, 00:10:09.655 { 00:10:09.655 "name": "BaseBdev2", 00:10:09.655 "uuid": "58a1bc94-10ee-11ef-ba60-3508ead7bdda", 00:10:09.655 "is_configured": true, 00:10:09.655 "data_offset": 0, 00:10:09.655 "data_size": 65536 00:10:09.655 }, 00:10:09.655 { 00:10:09.655 "name": "BaseBdev3", 00:10:09.655 "uuid": "5933a084-10ee-11ef-ba60-3508ead7bdda", 00:10:09.655 "is_configured": true, 00:10:09.655 "data_offset": 0, 00:10:09.655 "data_size": 65536 00:10:09.655 }, 00:10:09.655 { 00:10:09.655 "name": "BaseBdev4", 00:10:09.655 "uuid": "59c278dd-10ee-11ef-ba60-3508ead7bdda", 00:10:09.655 "is_configured": true, 00:10:09.655 "data_offset": 0, 00:10:09.655 "data_size": 65536 00:10:09.655 } 00:10:09.655 ] 00:10:09.655 }' 00:10:09.655 06:02:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:09.655 06:02:17 -- common/autotest_common.sh@10 -- # set +x 00:10:09.914 06:02:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:10:09.914 06:02:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:09.914 06:02:17 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:09.914 06:02:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:09.914 
06:02:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:09.914 06:02:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.914 06:02:18 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:10.174 [2024-05-13 06:02:18.308221] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:10.174 06:02:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:10.174 06:02:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:10.174 06:02:18 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:10.174 06:02:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:10.174 06:02:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:10.174 06:02:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:10.174 06:02:18 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:10.433 [2024-05-13 06:02:18.636910] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:10.433 06:02:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:10.433 06:02:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:10.433 06:02:18 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:10.433 06:02:18 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:10.692 06:02:18 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:10.692 06:02:18 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:10.692 06:02:18 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:10:10.692 [2024-05-13 06:02:18.989620] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:10.692 [2024-05-13 06:02:18.989636] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.692 [2024-05-13 06:02:18.989644] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.692 [2024-05-13 06:02:18.994316] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.692 [2024-05-13 06:02:18.994334] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82bc0aa00 name Existed_Raid, state offline 00:10:10.692 06:02:19 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:10.692 06:02:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:10.692 06:02:19 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:10.692 06:02:19 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:10:10.951 06:02:19 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:10:10.951 06:02:19 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:10:10.951 06:02:19 -- bdev/bdev_raid.sh@287 -- # killprocess 52892 00:10:10.951 06:02:19 -- common/autotest_common.sh@926 -- # '[' -z 52892 ']' 00:10:10.951 06:02:19 -- common/autotest_common.sh@930 -- # kill -0 52892 00:10:10.951 06:02:19 -- common/autotest_common.sh@931 -- # uname 00:10:10.951 06:02:19 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:10:10.951 06:02:19 -- common/autotest_common.sh@934 -- # ps -c -o command 52892 00:10:10.951 06:02:19 -- 
common/autotest_common.sh@934 -- # tail -1 00:10:10.951 06:02:19 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:10:10.951 06:02:19 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:10:10.951 killing process with pid 52892 00:10:10.952 06:02:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52892' 00:10:10.952 06:02:19 -- common/autotest_common.sh@945 -- # kill 52892 00:10:10.952 [2024-05-13 06:02:19.168531] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.952 [2024-05-13 06:02:19.168563] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.952 06:02:19 -- common/autotest_common.sh@950 -- # wait 52892 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:10:11.211 00:10:11.211 real 0m8.300s 00:10:11.211 user 0m14.336s 00:10:11.211 sys 0m1.610s 00:10:11.211 06:02:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.211 06:02:19 -- common/autotest_common.sh@10 -- # set +x 00:10:11.211 ************************************ 00:10:11.211 END TEST raid_state_function_test 00:10:11.211 ************************************ 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:11.211 06:02:19 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:11.211 06:02:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:11.211 06:02:19 -- common/autotest_common.sh@10 -- # set +x 00:10:11.211 ************************************ 00:10:11.211 START TEST raid_state_function_test_sb 00:10:11.211 ************************************ 00:10:11.211 06:02:19 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@212 -- # 
'[' raid1 '!=' raid1 ']' 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@226 -- # raid_pid=53162 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 53162' 00:10:11.211 Process raid pid: 53162 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@225 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:11.211 06:02:19 -- bdev/bdev_raid.sh@228 -- # waitforlisten 53162 /var/tmp/spdk-raid.sock 00:10:11.211 06:02:19 -- common/autotest_common.sh@819 -- # '[' -z 53162 ']' 00:10:11.211 06:02:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:11.211 06:02:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:11.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:11.211 06:02:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:11.211 06:02:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:11.211 06:02:19 -- common/autotest_common.sh@10 -- # set +x 00:10:11.211 [2024-05-13 06:02:19.384898] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:11.211 [2024-05-13 06:02:19.385119] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:11.780 EAL: TSC is not safe to use in SMP mode 00:10:11.781 EAL: TSC is not invariant 00:10:11.781 [2024-05-13 06:02:19.801314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.781 [2024-05-13 06:02:19.888276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.781 [2024-05-13 06:02:19.888687] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.781 [2024-05-13 06:02:19.888697] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.040 06:02:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:12.040 06:02:20 -- common/autotest_common.sh@852 -- # return 0 00:10:12.040 06:02:20 -- bdev/bdev_raid.sh@232 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:12.299 [2024-05-13 06:02:20.419788] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.299 [2024-05-13 06:02:20.419835] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.299 [2024-05-13 06:02:20.419839] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.299 [2024-05-13 06:02:20.419845] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.299 [2024-05-13 06:02:20.419848] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:12.299 [2024-05-13 06:02:20.419853] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.299 [2024-05-13 06:02:20.419872] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:12.299 [2024-05-13 06:02:20.419878] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 
doesn't exist now 00:10:12.299 06:02:20 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:12.299 06:02:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:12.299 06:02:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:12.299 06:02:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:12.299 06:02:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:12.299 06:02:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:12.299 06:02:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:12.299 06:02:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:12.299 06:02:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:12.299 06:02:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:12.300 06:02:20 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.300 06:02:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.300 06:02:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:12.300 "name": "Existed_Raid", 00:10:12.300 "uuid": "5c0a0bca-10ee-11ef-ba60-3508ead7bdda", 00:10:12.300 "strip_size_kb": 0, 00:10:12.300 "state": "configuring", 00:10:12.300 "raid_level": "raid1", 00:10:12.300 "superblock": true, 00:10:12.300 "num_base_bdevs": 4, 00:10:12.300 "num_base_bdevs_discovered": 0, 00:10:12.300 "num_base_bdevs_operational": 4, 00:10:12.300 "base_bdevs_list": [ 00:10:12.300 { 00:10:12.300 "name": "BaseBdev1", 00:10:12.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.300 "is_configured": false, 00:10:12.300 "data_offset": 0, 00:10:12.300 "data_size": 0 00:10:12.300 }, 00:10:12.300 { 00:10:12.300 "name": "BaseBdev2", 00:10:12.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.300 "is_configured": false, 00:10:12.300 "data_offset": 0, 00:10:12.300 "data_size": 0 00:10:12.300 }, 00:10:12.300 { 00:10:12.300 "name": "BaseBdev3", 00:10:12.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.300 "is_configured": false, 00:10:12.300 "data_offset": 0, 00:10:12.300 "data_size": 0 00:10:12.300 }, 00:10:12.300 { 00:10:12.300 "name": "BaseBdev4", 00:10:12.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.300 "is_configured": false, 00:10:12.300 "data_offset": 0, 00:10:12.300 "data_size": 0 00:10:12.300 } 00:10:12.300 ] 00:10:12.300 }' 00:10:12.300 06:02:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:12.300 06:02:20 -- common/autotest_common.sh@10 -- # set +x 00:10:12.559 06:02:20 -- bdev/bdev_raid.sh@234 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:12.818 [2024-05-13 06:02:21.031853] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:12.818 [2024-05-13 06:02:21.031872] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b417500 name Existed_Raid, state configuring 00:10:12.818 06:02:21 -- bdev/bdev_raid.sh@238 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:13.078 [2024-05-13 06:02:21.203886] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.078 [2024-05-13 06:02:21.203921] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.078 [2024-05-13 06:02:21.203925] 
bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.078 [2024-05-13 06:02:21.203930] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.078 [2024-05-13 06:02:21.203932] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.078 [2024-05-13 06:02:21.203938] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.078 [2024-05-13 06:02:21.203956] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:13.078 [2024-05-13 06:02:21.203962] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:13.078 06:02:21 -- bdev/bdev_raid.sh@239 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.078 [2024-05-13 06:02:21.376650] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.078 BaseBdev1 00:10:13.078 06:02:21 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:10:13.078 06:02:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:13.078 06:02:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:13.078 06:02:21 -- common/autotest_common.sh@889 -- # local i 00:10:13.078 06:02:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:13.078 06:02:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:13.078 06:02:21 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:13.337 06:02:21 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.597 [ 00:10:13.597 { 00:10:13.597 "name": "BaseBdev1", 00:10:13.597 "aliases": [ 00:10:13.597 "5c9bf094-10ee-11ef-ba60-3508ead7bdda" 00:10:13.597 ], 00:10:13.597 "product_name": "Malloc disk", 00:10:13.597 "block_size": 512, 00:10:13.597 "num_blocks": 65536, 00:10:13.597 "uuid": "5c9bf094-10ee-11ef-ba60-3508ead7bdda", 00:10:13.597 "assigned_rate_limits": { 00:10:13.597 "rw_ios_per_sec": 0, 00:10:13.597 "rw_mbytes_per_sec": 0, 00:10:13.597 "r_mbytes_per_sec": 0, 00:10:13.597 "w_mbytes_per_sec": 0 00:10:13.597 }, 00:10:13.597 "claimed": true, 00:10:13.597 "claim_type": "exclusive_write", 00:10:13.597 "zoned": false, 00:10:13.597 "supported_io_types": { 00:10:13.597 "read": true, 00:10:13.597 "write": true, 00:10:13.597 "unmap": true, 00:10:13.597 "write_zeroes": true, 00:10:13.597 "flush": true, 00:10:13.597 "reset": true, 00:10:13.597 "compare": false, 00:10:13.597 "compare_and_write": false, 00:10:13.597 "abort": true, 00:10:13.597 "nvme_admin": false, 00:10:13.597 "nvme_io": false 00:10:13.597 }, 00:10:13.597 "memory_domains": [ 00:10:13.597 { 00:10:13.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.597 "dma_device_type": 2 00:10:13.597 } 00:10:13.597 ], 00:10:13.597 "driver_specific": {} 00:10:13.597 } 00:10:13.597 ] 00:10:13.597 06:02:21 -- common/autotest_common.sh@895 -- # return 0 00:10:13.597 06:02:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:13.597 06:02:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:13.597 06:02:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:13.597 06:02:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:13.597 06:02:21 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:13.597 06:02:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:13.597 06:02:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:13.597 06:02:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:13.597 06:02:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:13.597 06:02:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:13.597 06:02:21 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:13.597 06:02:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.597 06:02:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:13.597 "name": "Existed_Raid", 00:10:13.597 "uuid": "5c81b0ba-10ee-11ef-ba60-3508ead7bdda", 00:10:13.597 "strip_size_kb": 0, 00:10:13.597 "state": "configuring", 00:10:13.597 "raid_level": "raid1", 00:10:13.597 "superblock": true, 00:10:13.597 "num_base_bdevs": 4, 00:10:13.597 "num_base_bdevs_discovered": 1, 00:10:13.597 "num_base_bdevs_operational": 4, 00:10:13.597 "base_bdevs_list": [ 00:10:13.597 { 00:10:13.597 "name": "BaseBdev1", 00:10:13.597 "uuid": "5c9bf094-10ee-11ef-ba60-3508ead7bdda", 00:10:13.597 "is_configured": true, 00:10:13.597 "data_offset": 2048, 00:10:13.597 "data_size": 63488 00:10:13.597 }, 00:10:13.597 { 00:10:13.597 "name": "BaseBdev2", 00:10:13.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.597 "is_configured": false, 00:10:13.597 "data_offset": 0, 00:10:13.597 "data_size": 0 00:10:13.597 }, 00:10:13.597 { 00:10:13.597 "name": "BaseBdev3", 00:10:13.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.597 "is_configured": false, 00:10:13.597 "data_offset": 0, 00:10:13.597 "data_size": 0 00:10:13.597 }, 00:10:13.597 { 00:10:13.597 "name": "BaseBdev4", 00:10:13.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.597 "is_configured": false, 00:10:13.597 "data_offset": 0, 00:10:13.597 "data_size": 0 00:10:13.597 } 00:10:13.597 ] 00:10:13.597 }' 00:10:13.597 06:02:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:13.597 06:02:21 -- common/autotest_common.sh@10 -- # set +x 00:10:13.857 06:02:22 -- bdev/bdev_raid.sh@242 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:14.116 [2024-05-13 06:02:22.300045] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.116 [2024-05-13 06:02:22.300065] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b417500 name Existed_Raid, state configuring 00:10:14.116 06:02:22 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:10:14.116 06:02:22 -- bdev/bdev_raid.sh@246 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:14.375 06:02:22 -- bdev/bdev_raid.sh@247 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:14.375 BaseBdev1 00:10:14.375 06:02:22 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:10:14.375 06:02:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:14.375 06:02:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:14.375 06:02:22 -- common/autotest_common.sh@889 -- # local i 00:10:14.375 06:02:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:14.375 06:02:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:14.375 06:02:22 -- 
common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:14.634 06:02:22 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:14.894 [ 00:10:14.894 { 00:10:14.894 "name": "BaseBdev1", 00:10:14.894 "aliases": [ 00:10:14.894 "5d59df49-10ee-11ef-ba60-3508ead7bdda" 00:10:14.894 ], 00:10:14.894 "product_name": "Malloc disk", 00:10:14.894 "block_size": 512, 00:10:14.894 "num_blocks": 65536, 00:10:14.894 "uuid": "5d59df49-10ee-11ef-ba60-3508ead7bdda", 00:10:14.894 "assigned_rate_limits": { 00:10:14.894 "rw_ios_per_sec": 0, 00:10:14.895 "rw_mbytes_per_sec": 0, 00:10:14.895 "r_mbytes_per_sec": 0, 00:10:14.895 "w_mbytes_per_sec": 0 00:10:14.895 }, 00:10:14.895 "claimed": false, 00:10:14.895 "zoned": false, 00:10:14.895 "supported_io_types": { 00:10:14.895 "read": true, 00:10:14.895 "write": true, 00:10:14.895 "unmap": true, 00:10:14.895 "write_zeroes": true, 00:10:14.895 "flush": true, 00:10:14.895 "reset": true, 00:10:14.895 "compare": false, 00:10:14.895 "compare_and_write": false, 00:10:14.895 "abort": true, 00:10:14.895 "nvme_admin": false, 00:10:14.895 "nvme_io": false 00:10:14.895 }, 00:10:14.895 "memory_domains": [ 00:10:14.895 { 00:10:14.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.895 "dma_device_type": 2 00:10:14.895 } 00:10:14.895 ], 00:10:14.895 "driver_specific": {} 00:10:14.895 } 00:10:14.895 ] 00:10:14.895 06:02:22 -- common/autotest_common.sh@895 -- # return 0 00:10:14.895 06:02:22 -- bdev/bdev_raid.sh@253 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:10:14.895 [2024-05-13 06:02:23.132737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.895 [2024-05-13 06:02:23.133140] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.895 [2024-05-13 06:02:23.133187] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.895 [2024-05-13 06:02:23.133191] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.895 [2024-05-13 06:02:23.133197] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.895 [2024-05-13 06:02:23.133200] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:14.895 [2024-05-13 06:02:23.133205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:14.895 06:02:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:10:14.895 06:02:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:14.895 06:02:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:14.895 06:02:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:14.895 06:02:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:14.895 06:02:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:14.895 06:02:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:14.895 06:02:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:14.895 06:02:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:14.895 06:02:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:14.895 06:02:23 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:14.895 06:02:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:14.895 06:02:23 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.895 06:02:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.154 06:02:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:15.154 "name": "Existed_Raid", 00:10:15.154 "uuid": "5da8026f-10ee-11ef-ba60-3508ead7bdda", 00:10:15.154 "strip_size_kb": 0, 00:10:15.155 "state": "configuring", 00:10:15.155 "raid_level": "raid1", 00:10:15.155 "superblock": true, 00:10:15.155 "num_base_bdevs": 4, 00:10:15.155 "num_base_bdevs_discovered": 1, 00:10:15.155 "num_base_bdevs_operational": 4, 00:10:15.155 "base_bdevs_list": [ 00:10:15.155 { 00:10:15.155 "name": "BaseBdev1", 00:10:15.155 "uuid": "5d59df49-10ee-11ef-ba60-3508ead7bdda", 00:10:15.155 "is_configured": true, 00:10:15.155 "data_offset": 2048, 00:10:15.155 "data_size": 63488 00:10:15.155 }, 00:10:15.155 { 00:10:15.155 "name": "BaseBdev2", 00:10:15.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.155 "is_configured": false, 00:10:15.155 "data_offset": 0, 00:10:15.155 "data_size": 0 00:10:15.155 }, 00:10:15.155 { 00:10:15.155 "name": "BaseBdev3", 00:10:15.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.155 "is_configured": false, 00:10:15.155 "data_offset": 0, 00:10:15.155 "data_size": 0 00:10:15.155 }, 00:10:15.155 { 00:10:15.155 "name": "BaseBdev4", 00:10:15.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.155 "is_configured": false, 00:10:15.155 "data_offset": 0, 00:10:15.155 "data_size": 0 00:10:15.155 } 00:10:15.155 ] 00:10:15.155 }' 00:10:15.155 06:02:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:15.155 06:02:23 -- common/autotest_common.sh@10 -- # set +x 00:10:15.414 06:02:23 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:15.414 [2024-05-13 06:02:23.728908] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.673 BaseBdev2 00:10:15.673 06:02:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:10:15.673 06:02:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:10:15.673 06:02:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:15.673 06:02:23 -- common/autotest_common.sh@889 -- # local i 00:10:15.673 06:02:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:15.673 06:02:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:15.673 06:02:23 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:15.673 06:02:23 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:15.932 [ 00:10:15.932 { 00:10:15.932 "name": "BaseBdev2", 00:10:15.932 "aliases": [ 00:10:15.932 "5e02f724-10ee-11ef-ba60-3508ead7bdda" 00:10:15.932 ], 00:10:15.932 "product_name": "Malloc disk", 00:10:15.932 "block_size": 512, 00:10:15.932 "num_blocks": 65536, 00:10:15.932 "uuid": "5e02f724-10ee-11ef-ba60-3508ead7bdda", 00:10:15.932 "assigned_rate_limits": { 00:10:15.932 "rw_ios_per_sec": 0, 00:10:15.932 "rw_mbytes_per_sec": 0, 00:10:15.932 "r_mbytes_per_sec": 0, 00:10:15.932 "w_mbytes_per_sec": 0 00:10:15.932 }, 00:10:15.932 "claimed": true, 
00:10:15.932 "claim_type": "exclusive_write", 00:10:15.932 "zoned": false, 00:10:15.932 "supported_io_types": { 00:10:15.932 "read": true, 00:10:15.932 "write": true, 00:10:15.932 "unmap": true, 00:10:15.932 "write_zeroes": true, 00:10:15.932 "flush": true, 00:10:15.932 "reset": true, 00:10:15.933 "compare": false, 00:10:15.933 "compare_and_write": false, 00:10:15.933 "abort": true, 00:10:15.933 "nvme_admin": false, 00:10:15.933 "nvme_io": false 00:10:15.933 }, 00:10:15.933 "memory_domains": [ 00:10:15.933 { 00:10:15.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.933 "dma_device_type": 2 00:10:15.933 } 00:10:15.933 ], 00:10:15.933 "driver_specific": {} 00:10:15.933 } 00:10:15.933 ] 00:10:15.933 06:02:24 -- common/autotest_common.sh@895 -- # return 0 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.933 06:02:24 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.191 06:02:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:16.191 "name": "Existed_Raid", 00:10:16.191 "uuid": "5da8026f-10ee-11ef-ba60-3508ead7bdda", 00:10:16.191 "strip_size_kb": 0, 00:10:16.191 "state": "configuring", 00:10:16.191 "raid_level": "raid1", 00:10:16.191 "superblock": true, 00:10:16.191 "num_base_bdevs": 4, 00:10:16.191 "num_base_bdevs_discovered": 2, 00:10:16.191 "num_base_bdevs_operational": 4, 00:10:16.191 "base_bdevs_list": [ 00:10:16.191 { 00:10:16.191 "name": "BaseBdev1", 00:10:16.191 "uuid": "5d59df49-10ee-11ef-ba60-3508ead7bdda", 00:10:16.191 "is_configured": true, 00:10:16.191 "data_offset": 2048, 00:10:16.191 "data_size": 63488 00:10:16.191 }, 00:10:16.191 { 00:10:16.191 "name": "BaseBdev2", 00:10:16.191 "uuid": "5e02f724-10ee-11ef-ba60-3508ead7bdda", 00:10:16.191 "is_configured": true, 00:10:16.191 "data_offset": 2048, 00:10:16.191 "data_size": 63488 00:10:16.191 }, 00:10:16.191 { 00:10:16.191 "name": "BaseBdev3", 00:10:16.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.191 "is_configured": false, 00:10:16.191 "data_offset": 0, 00:10:16.191 "data_size": 0 00:10:16.191 }, 00:10:16.191 { 00:10:16.191 "name": "BaseBdev4", 00:10:16.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.191 "is_configured": false, 00:10:16.191 "data_offset": 0, 00:10:16.191 "data_size": 0 00:10:16.191 } 00:10:16.191 ] 00:10:16.191 }' 00:10:16.191 06:02:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:16.191 06:02:24 -- common/autotest_common.sh@10 -- # set +x 00:10:16.450 06:02:24 -- bdev/bdev_raid.sh@256 -- # 
/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.450 [2024-05-13 06:02:24.689024] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.450 BaseBdev3 00:10:16.450 06:02:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:10:16.450 06:02:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:10:16.450 06:02:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:16.450 06:02:24 -- common/autotest_common.sh@889 -- # local i 00:10:16.450 06:02:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:16.450 06:02:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:16.450 06:02:24 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:16.708 06:02:24 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.968 [ 00:10:16.968 { 00:10:16.968 "name": "BaseBdev3", 00:10:16.968 "aliases": [ 00:10:16.968 "5e9578c8-10ee-11ef-ba60-3508ead7bdda" 00:10:16.968 ], 00:10:16.968 "product_name": "Malloc disk", 00:10:16.968 "block_size": 512, 00:10:16.968 "num_blocks": 65536, 00:10:16.968 "uuid": "5e9578c8-10ee-11ef-ba60-3508ead7bdda", 00:10:16.968 "assigned_rate_limits": { 00:10:16.968 "rw_ios_per_sec": 0, 00:10:16.968 "rw_mbytes_per_sec": 0, 00:10:16.968 "r_mbytes_per_sec": 0, 00:10:16.968 "w_mbytes_per_sec": 0 00:10:16.968 }, 00:10:16.968 "claimed": true, 00:10:16.968 "claim_type": "exclusive_write", 00:10:16.968 "zoned": false, 00:10:16.968 "supported_io_types": { 00:10:16.968 "read": true, 00:10:16.968 "write": true, 00:10:16.968 "unmap": true, 00:10:16.968 "write_zeroes": true, 00:10:16.968 "flush": true, 00:10:16.968 "reset": true, 00:10:16.968 "compare": false, 00:10:16.968 "compare_and_write": false, 00:10:16.968 "abort": true, 00:10:16.968 "nvme_admin": false, 00:10:16.968 "nvme_io": false 00:10:16.968 }, 00:10:16.968 "memory_domains": [ 00:10:16.968 { 00:10:16.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.968 "dma_device_type": 2 00:10:16.968 } 00:10:16.968 ], 00:10:16.968 "driver_specific": {} 00:10:16.968 } 00:10:16.968 ] 00:10:16.968 06:02:25 -- common/autotest_common.sh@895 -- # return 0 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:16.968 "name": "Existed_Raid", 00:10:16.968 "uuid": "5da8026f-10ee-11ef-ba60-3508ead7bdda", 00:10:16.968 "strip_size_kb": 0, 00:10:16.968 "state": "configuring", 00:10:16.968 "raid_level": "raid1", 00:10:16.968 "superblock": true, 00:10:16.968 "num_base_bdevs": 4, 00:10:16.968 "num_base_bdevs_discovered": 3, 00:10:16.968 "num_base_bdevs_operational": 4, 00:10:16.968 "base_bdevs_list": [ 00:10:16.968 { 00:10:16.968 "name": "BaseBdev1", 00:10:16.968 "uuid": "5d59df49-10ee-11ef-ba60-3508ead7bdda", 00:10:16.968 "is_configured": true, 00:10:16.968 "data_offset": 2048, 00:10:16.968 "data_size": 63488 00:10:16.968 }, 00:10:16.968 { 00:10:16.968 "name": "BaseBdev2", 00:10:16.968 "uuid": "5e02f724-10ee-11ef-ba60-3508ead7bdda", 00:10:16.968 "is_configured": true, 00:10:16.968 "data_offset": 2048, 00:10:16.968 "data_size": 63488 00:10:16.968 }, 00:10:16.968 { 00:10:16.968 "name": "BaseBdev3", 00:10:16.968 "uuid": "5e9578c8-10ee-11ef-ba60-3508ead7bdda", 00:10:16.968 "is_configured": true, 00:10:16.968 "data_offset": 2048, 00:10:16.968 "data_size": 63488 00:10:16.968 }, 00:10:16.968 { 00:10:16.968 "name": "BaseBdev4", 00:10:16.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.968 "is_configured": false, 00:10:16.968 "data_offset": 0, 00:10:16.968 "data_size": 0 00:10:16.968 } 00:10:16.968 ] 00:10:16.968 }' 00:10:16.968 06:02:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:16.968 06:02:25 -- common/autotest_common.sh@10 -- # set +x 00:10:17.228 06:02:25 -- bdev/bdev_raid.sh@256 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:10:17.487 [2024-05-13 06:02:25.641195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:17.487 [2024-05-13 06:02:25.641249] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82b417a00 00:10:17.487 [2024-05-13 06:02:25.641253] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.487 [2024-05-13 06:02:25.641268] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82b47aec0 00:10:17.487 [2024-05-13 06:02:25.641303] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82b417a00 00:10:17.487 [2024-05-13 06:02:25.641306] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x82b417a00 00:10:17.487 [2024-05-13 06:02:25.641320] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.487 BaseBdev4 00:10:17.487 06:02:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:10:17.487 06:02:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:10:17.487 06:02:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:17.487 06:02:25 -- common/autotest_common.sh@889 -- # local i 00:10:17.487 06:02:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:17.487 06:02:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:17.487 06:02:25 -- common/autotest_common.sh@892 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:17.746 06:02:25 -- common/autotest_common.sh@894 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:17.746 [ 00:10:17.746 { 00:10:17.746 "name": "BaseBdev4", 00:10:17.746 "aliases": [ 00:10:17.746 
"5f26c2de-10ee-11ef-ba60-3508ead7bdda" 00:10:17.746 ], 00:10:17.746 "product_name": "Malloc disk", 00:10:17.746 "block_size": 512, 00:10:17.746 "num_blocks": 65536, 00:10:17.746 "uuid": "5f26c2de-10ee-11ef-ba60-3508ead7bdda", 00:10:17.746 "assigned_rate_limits": { 00:10:17.746 "rw_ios_per_sec": 0, 00:10:17.746 "rw_mbytes_per_sec": 0, 00:10:17.746 "r_mbytes_per_sec": 0, 00:10:17.746 "w_mbytes_per_sec": 0 00:10:17.746 }, 00:10:17.746 "claimed": true, 00:10:17.746 "claim_type": "exclusive_write", 00:10:17.746 "zoned": false, 00:10:17.746 "supported_io_types": { 00:10:17.746 "read": true, 00:10:17.746 "write": true, 00:10:17.746 "unmap": true, 00:10:17.746 "write_zeroes": true, 00:10:17.746 "flush": true, 00:10:17.746 "reset": true, 00:10:17.746 "compare": false, 00:10:17.746 "compare_and_write": false, 00:10:17.746 "abort": true, 00:10:17.746 "nvme_admin": false, 00:10:17.746 "nvme_io": false 00:10:17.746 }, 00:10:17.746 "memory_domains": [ 00:10:17.746 { 00:10:17.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.746 "dma_device_type": 2 00:10:17.746 } 00:10:17.746 ], 00:10:17.746 "driver_specific": {} 00:10:17.746 } 00:10:17.746 ] 00:10:17.746 06:02:25 -- common/autotest_common.sh@895 -- # return 0 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:17.746 06:02:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.012 06:02:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:18.012 "name": "Existed_Raid", 00:10:18.012 "uuid": "5da8026f-10ee-11ef-ba60-3508ead7bdda", 00:10:18.012 "strip_size_kb": 0, 00:10:18.012 "state": "online", 00:10:18.012 "raid_level": "raid1", 00:10:18.012 "superblock": true, 00:10:18.012 "num_base_bdevs": 4, 00:10:18.012 "num_base_bdevs_discovered": 4, 00:10:18.012 "num_base_bdevs_operational": 4, 00:10:18.012 "base_bdevs_list": [ 00:10:18.012 { 00:10:18.012 "name": "BaseBdev1", 00:10:18.012 "uuid": "5d59df49-10ee-11ef-ba60-3508ead7bdda", 00:10:18.012 "is_configured": true, 00:10:18.012 "data_offset": 2048, 00:10:18.012 "data_size": 63488 00:10:18.012 }, 00:10:18.012 { 00:10:18.012 "name": "BaseBdev2", 00:10:18.012 "uuid": "5e02f724-10ee-11ef-ba60-3508ead7bdda", 00:10:18.012 "is_configured": true, 00:10:18.012 "data_offset": 2048, 00:10:18.012 "data_size": 63488 00:10:18.012 }, 00:10:18.012 { 00:10:18.012 "name": "BaseBdev3", 00:10:18.012 "uuid": "5e9578c8-10ee-11ef-ba60-3508ead7bdda", 00:10:18.012 "is_configured": true, 00:10:18.012 "data_offset": 2048, 00:10:18.012 "data_size": 63488 00:10:18.012 }, 
00:10:18.012 { 00:10:18.012 "name": "BaseBdev4", 00:10:18.012 "uuid": "5f26c2de-10ee-11ef-ba60-3508ead7bdda", 00:10:18.012 "is_configured": true, 00:10:18.012 "data_offset": 2048, 00:10:18.012 "data_size": 63488 00:10:18.012 } 00:10:18.012 ] 00:10:18.012 }' 00:10:18.012 06:02:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:18.012 06:02:26 -- common/autotest_common.sh@10 -- # set +x 00:10:18.275 06:02:26 -- bdev/bdev_raid.sh@262 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:18.275 [2024-05-13 06:02:26.589279] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@196 -- # return 0 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:18.534 "name": "Existed_Raid", 00:10:18.534 "uuid": "5da8026f-10ee-11ef-ba60-3508ead7bdda", 00:10:18.534 "strip_size_kb": 0, 00:10:18.534 "state": "online", 00:10:18.534 "raid_level": "raid1", 00:10:18.534 "superblock": true, 00:10:18.534 "num_base_bdevs": 4, 00:10:18.534 "num_base_bdevs_discovered": 3, 00:10:18.534 "num_base_bdevs_operational": 3, 00:10:18.534 "base_bdevs_list": [ 00:10:18.534 { 00:10:18.534 "name": null, 00:10:18.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.534 "is_configured": false, 00:10:18.534 "data_offset": 2048, 00:10:18.534 "data_size": 63488 00:10:18.534 }, 00:10:18.534 { 00:10:18.534 "name": "BaseBdev2", 00:10:18.534 "uuid": "5e02f724-10ee-11ef-ba60-3508ead7bdda", 00:10:18.534 "is_configured": true, 00:10:18.534 "data_offset": 2048, 00:10:18.534 "data_size": 63488 00:10:18.534 }, 00:10:18.534 { 00:10:18.534 "name": "BaseBdev3", 00:10:18.534 "uuid": "5e9578c8-10ee-11ef-ba60-3508ead7bdda", 00:10:18.534 "is_configured": true, 00:10:18.534 "data_offset": 2048, 00:10:18.534 "data_size": 63488 00:10:18.534 }, 00:10:18.534 { 00:10:18.534 "name": "BaseBdev4", 00:10:18.534 "uuid": "5f26c2de-10ee-11ef-ba60-3508ead7bdda", 00:10:18.534 "is_configured": true, 00:10:18.534 "data_offset": 2048, 00:10:18.534 "data_size": 63488 00:10:18.534 } 00:10:18.534 ] 00:10:18.534 }' 00:10:18.534 06:02:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:18.534 06:02:26 -- 
common/autotest_common.sh@10 -- # set +x 00:10:18.793 06:02:27 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:10:18.793 06:02:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:18.793 06:02:27 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.793 06:02:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:19.052 06:02:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:19.052 06:02:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:19.052 06:02:27 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:19.311 [2024-05-13 06:02:27.382056] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:19.311 06:02:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:19.311 06:02:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:19.311 06:02:27 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.311 06:02:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:19.311 06:02:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:19.311 06:02:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:19.311 06:02:27 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:19.571 [2024-05-13 06:02:27.730750] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:19.571 06:02:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:19.571 06:02:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:19.571 06:02:27 -- bdev/bdev_raid.sh@274 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.571 06:02:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:19.830 06:02:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:19.830 06:02:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:19.830 06:02:27 -- bdev/bdev_raid.sh@279 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:10:19.830 [2024-05-13 06:02:28.083433] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:19.830 [2024-05-13 06:02:28.083450] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.830 [2024-05-13 06:02:28.083458] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.830 [2024-05-13 06:02:28.088157] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.830 [2024-05-13 06:02:28.088175] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82b417a00 name Existed_Raid, state offline 00:10:19.830 06:02:28 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:19.830 06:02:28 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:19.830 06:02:28 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:10:19.830 06:02:28 -- bdev/bdev_raid.sh@281 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.089 06:02:28 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:10:20.089 06:02:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:10:20.089 06:02:28 -- bdev/bdev_raid.sh@287 -- # killprocess 53162 
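The xtrace above is the degraded-array check for the superblock variant: with BaseBdev1 already removed, each remaining mirror leg is deleted with bdev_malloc_delete and bdev_raid_get_bdevs is re-queried through jq to confirm Existed_Raid still resolves, until the final removal drives the array from online to offline. A minimal sketch of that loop, assuming the rpc.py path and socket shown in the trace (the rpc/sock variable names are illustrative, not from the harness):

    rpc=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for leg in BaseBdev2 BaseBdev3 BaseBdev4; do
        # Drop one mirror leg; raid1 has redundancy, so the array stays online
        # while at least one leg remains.
        "$rpc" -s "$sock" bdev_malloc_delete "$leg"
        # Re-query the raid bdev; the name should still resolve.
        "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[0]["name"]'
    done

Note the trace switches to the jq filter '.[0]["name"] | select(.)' once the last leg is gone, so an absent raid bdev yields an empty string (raid_bdev=) rather than the literal null.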
00:10:20.089 06:02:28 -- common/autotest_common.sh@926 -- # '[' -z 53162 ']' 00:10:20.089 06:02:28 -- common/autotest_common.sh@930 -- # kill -0 53162 00:10:20.089 06:02:28 -- common/autotest_common.sh@931 -- # uname 00:10:20.089 06:02:28 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:10:20.089 06:02:28 -- common/autotest_common.sh@934 -- # tail -1 00:10:20.089 06:02:28 -- common/autotest_common.sh@934 -- # ps -c -o command 53162 00:10:20.089 06:02:28 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:10:20.089 06:02:28 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:10:20.089 killing process with pid 53162 00:10:20.089 06:02:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53162' 00:10:20.089 06:02:28 -- common/autotest_common.sh@945 -- # kill 53162 00:10:20.089 [2024-05-13 06:02:28.294577] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.089 [2024-05-13 06:02:28.294609] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.089 06:02:28 -- common/autotest_common.sh@950 -- # wait 53162 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:10:20.349 00:10:20.349 real 0m9.066s 00:10:20.349 user 0m15.760s 00:10:20.349 sys 0m1.653s 00:10:20.349 06:02:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.349 06:02:28 -- common/autotest_common.sh@10 -- # set +x 00:10:20.349 ************************************ 00:10:20.349 END TEST raid_state_function_test_sb 00:10:20.349 ************************************ 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:20.349 06:02:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:20.349 06:02:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:20.349 06:02:28 -- common/autotest_common.sh@10 -- # set +x 00:10:20.349 ************************************ 00:10:20.349 START TEST raid_superblock_test 00:10:20.349 ************************************ 00:10:20.349 06:02:28 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@357 -- # raid_pid=53435 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@358 -- # waitforlisten 53435 /var/tmp/spdk-raid.sock 00:10:20.349 06:02:28 -- bdev/bdev_raid.sh@356 -- # /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 
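At this point raid_state_function_test_sb has torn down and raid_superblock_test boots its own SPDK app: bdev_svc is started against a private RPC socket with bdev_raid debug logging, and waitforlisten blocks until pid 53435 is serving RPCs on that socket. A hedged sketch of that startup using only the paths and helper visible in the trace; backgrounding with & and capturing $! into raid_pid is a reconstruction of how the traced raid_pid=53435 assignment comes about:

    # Start the bdev service app on a dedicated RPC socket, raid debug logs on.
    /usr/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # Harness helper: poll until the pid is up and listening on the socket.
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

The -L bdev_raid flag is what produces the *DEBUG* bdev_raid.c lines seen throughout this log.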
00:10:20.349 06:02:28 -- common/autotest_common.sh@819 -- # '[' -z 53435 ']' 00:10:20.349 06:02:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:20.349 06:02:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:20.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:20.349 06:02:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:20.349 06:02:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:20.349 06:02:28 -- common/autotest_common.sh@10 -- # set +x 00:10:20.349 [2024-05-13 06:02:28.505204] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:20.349 [2024-05-13 06:02:28.505556] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:20.608 EAL: TSC is not safe to use in SMP mode 00:10:20.608 EAL: TSC is not invariant 00:10:20.608 [2024-05-13 06:02:28.921009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.868 [2024-05-13 06:02:29.007678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.868 [2024-05-13 06:02:29.008094] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.868 [2024-05-13 06:02:29.008106] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.127 06:02:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:21.127 06:02:29 -- common/autotest_common.sh@852 -- # return 0 00:10:21.127 06:02:29 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:10:21.127 06:02:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:21.127 06:02:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:10:21.127 06:02:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:10:21.127 06:02:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:21.127 06:02:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:21.127 06:02:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:21.127 06:02:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:21.127 06:02:29 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:21.386 malloc1 00:10:21.386 06:02:29 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:21.645 [2024-05-13 06:02:29.711192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:21.645 [2024-05-13 06:02:29.711234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.645 [2024-05-13 06:02:29.711760] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abf9780 00:10:21.645 [2024-05-13 06:02:29.711785] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.645 [2024-05-13 06:02:29.712423] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.645 [2024-05-13 06:02:29.712462] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.645 pt1 00:10:21.645 06:02:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:21.645 06:02:29 -- bdev/bdev_raid.sh@361 -- # (( i <= 
num_base_bdevs )) 00:10:21.645 06:02:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:10:21.645 06:02:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:10:21.645 06:02:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:21.645 06:02:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:21.645 06:02:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:21.645 06:02:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:21.645 06:02:29 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:21.645 malloc2 00:10:21.645 06:02:29 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:21.905 [2024-05-13 06:02:30.035233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:21.905 [2024-05-13 06:02:30.035276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.905 [2024-05-13 06:02:30.035314] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abf9c80 00:10:21.905 [2024-05-13 06:02:30.035320] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.905 [2024-05-13 06:02:30.035726] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.905 [2024-05-13 06:02:30.035757] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:21.905 pt2 00:10:21.905 06:02:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:21.905 06:02:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:21.905 06:02:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:10:21.905 06:02:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:10:21.905 06:02:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:21.905 06:02:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:21.905 06:02:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:21.905 06:02:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:21.905 06:02:30 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:10:21.905 malloc3 00:10:21.905 06:02:30 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:22.164 [2024-05-13 06:02:30.355277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:22.164 [2024-05-13 06:02:30.355321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.164 [2024-05-13 06:02:30.355359] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abfa180 00:10:22.164 [2024-05-13 06:02:30.355365] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.164 [2024-05-13 06:02:30.355746] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.164 [2024-05-13 06:02:30.355778] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:22.164 pt3 00:10:22.164 06:02:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:22.164 06:02:30 -- bdev/bdev_raid.sh@361 -- # (( i <= 
num_base_bdevs )) 00:10:22.164 06:02:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:10:22.164 06:02:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:10:22.164 06:02:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:22.164 06:02:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.164 06:02:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.164 06:02:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.164 06:02:30 -- bdev/bdev_raid.sh@370 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:10:22.423 malloc4 00:10:22.423 06:02:30 -- bdev/bdev_raid.sh@371 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:22.423 [2024-05-13 06:02:30.691321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:22.423 [2024-05-13 06:02:30.691380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.423 [2024-05-13 06:02:30.691401] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abfa680 00:10:22.423 [2024-05-13 06:02:30.691407] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.423 [2024-05-13 06:02:30.691794] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.423 [2024-05-13 06:02:30.691824] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:22.423 pt4 00:10:22.423 06:02:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:22.423 06:02:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:22.423 06:02:30 -- bdev/bdev_raid.sh@375 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:10:22.682 [2024-05-13 06:02:30.851349] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:22.682 [2024-05-13 06:02:30.851711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.682 [2024-05-13 06:02:30.851736] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.683 [2024-05-13 06:02:30.851744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:22.683 [2024-05-13 06:02:30.851792] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82abfa900 00:10:22.683 [2024-05-13 06:02:30.851798] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:22.683 [2024-05-13 06:02:30.851823] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac5ce20 00:10:22.683 [2024-05-13 06:02:30.851872] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82abfa900 00:10:22.683 [2024-05-13 06:02:30.851878] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82abfa900 00:10:22.683 [2024-05-13 06:02:30.851912] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.683 06:02:30 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:22.683 06:02:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:22.683 06:02:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:22.683 06:02:30 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:10:22.683 06:02:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:22.683 06:02:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:22.683 06:02:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:22.683 06:02:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:22.683 06:02:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:22.683 06:02:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:22.683 06:02:30 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:22.683 06:02:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.942 06:02:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:22.942 "name": "raid_bdev1", 00:10:22.942 "uuid": "6241c6a2-10ee-11ef-ba60-3508ead7bdda", 00:10:22.942 "strip_size_kb": 0, 00:10:22.942 "state": "online", 00:10:22.942 "raid_level": "raid1", 00:10:22.942 "superblock": true, 00:10:22.942 "num_base_bdevs": 4, 00:10:22.942 "num_base_bdevs_discovered": 4, 00:10:22.942 "num_base_bdevs_operational": 4, 00:10:22.942 "base_bdevs_list": [ 00:10:22.942 { 00:10:22.942 "name": "pt1", 00:10:22.942 "uuid": "e2af3821-8954-2a5c-b35a-05b81461bf43", 00:10:22.942 "is_configured": true, 00:10:22.942 "data_offset": 2048, 00:10:22.942 "data_size": 63488 00:10:22.942 }, 00:10:22.942 { 00:10:22.942 "name": "pt2", 00:10:22.942 "uuid": "017b7fa5-ebfe-9853-9b24-0b1818d28092", 00:10:22.942 "is_configured": true, 00:10:22.942 "data_offset": 2048, 00:10:22.942 "data_size": 63488 00:10:22.942 }, 00:10:22.942 { 00:10:22.942 "name": "pt3", 00:10:22.942 "uuid": "4801281b-a1db-825f-9361-47aee3df8b79", 00:10:22.942 "is_configured": true, 00:10:22.942 "data_offset": 2048, 00:10:22.942 "data_size": 63488 00:10:22.942 }, 00:10:22.942 { 00:10:22.942 "name": "pt4", 00:10:22.942 "uuid": "992f2c27-2430-8a5f-8d85-d8384c216431", 00:10:22.942 "is_configured": true, 00:10:22.942 "data_offset": 2048, 00:10:22.942 "data_size": 63488 00:10:22.942 } 00:10:22.942 ] 00:10:22.942 }' 00:10:22.942 06:02:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:22.942 06:02:31 -- common/autotest_common.sh@10 -- # set +x 00:10:23.201 06:02:31 -- bdev/bdev_raid.sh@379 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:23.201 06:02:31 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:10:23.201 [2024-05-13 06:02:31.439441] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.201 06:02:31 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=6241c6a2-10ee-11ef-ba60-3508ead7bdda 00:10:23.201 06:02:31 -- bdev/bdev_raid.sh@380 -- # '[' -z 6241c6a2-10ee-11ef-ba60-3508ead7bdda ']' 00:10:23.201 06:02:31 -- bdev/bdev_raid.sh@385 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:23.461 [2024-05-13 06:02:31.615438] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.461 [2024-05-13 06:02:31.615453] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.461 [2024-05-13 06:02:31.615464] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.461 [2024-05-13 06:02:31.615477] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.461 [2024-05-13 06:02:31.615480] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x82abfa900 name raid_bdev1, state offline 00:10:23.461 06:02:31 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:10:23.461 06:02:31 -- bdev/bdev_raid.sh@386 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.720 06:02:31 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:10:23.720 06:02:31 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:10:23.720 06:02:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.720 06:02:31 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:23.720 06:02:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.720 06:02:31 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:23.980 06:02:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.980 06:02:32 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:24.240 06:02:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:24.240 06:02:32 -- bdev/bdev_raid.sh@393 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:10:24.240 06:02:32 -- bdev/bdev_raid.sh@395 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:24.240 06:02:32 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:24.499 06:02:32 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:10:24.499 06:02:32 -- bdev/bdev_raid.sh@401 -- # NOT /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:10:24.499 06:02:32 -- common/autotest_common.sh@640 -- # local es=0 00:10:24.499 06:02:32 -- common/autotest_common.sh@642 -- # valid_exec_arg /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:10:24.499 06:02:32 -- common/autotest_common.sh@628 -- # local arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.499 06:02:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:24.499 06:02:32 -- common/autotest_common.sh@632 -- # type -t /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.499 06:02:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:24.499 06:02:32 -- common/autotest_common.sh@634 -- # type -P /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.499 06:02:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:24.499 06:02:32 -- common/autotest_common.sh@634 -- # arg=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.499 06:02:32 -- common/autotest_common.sh@634 -- # [[ -x /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:24.499 06:02:32 -- common/autotest_common.sh@643 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:10:24.759 [2024-05-13 06:02:32.827611] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:24.759 [2024-05-13 06:02:32.828061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:24.759 [2024-05-13 06:02:32.828081] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:24.759 [2024-05-13 06:02:32.828088] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:24.759 [2024-05-13 06:02:32.828098] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:10:24.759 [2024-05-13 06:02:32.828129] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:10:24.760 [2024-05-13 06:02:32.828145] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:10:24.760 [2024-05-13 06:02:32.828153] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:10:24.760 [2024-05-13 06:02:32.828160] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:24.760 [2024-05-13 06:02:32.828163] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abfa680 name raid_bdev1, state configuring 00:10:24.760 request: 00:10:24.760 { 00:10:24.760 "name": "raid_bdev1", 00:10:24.760 "raid_level": "raid1", 00:10:24.760 "base_bdevs": [ 00:10:24.760 "malloc1", 00:10:24.760 "malloc2", 00:10:24.760 "malloc3", 00:10:24.760 "malloc4" 00:10:24.760 ], 00:10:24.760 "superblock": false, 00:10:24.760 "method": "bdev_raid_create", 00:10:24.760 "req_id": 1 00:10:24.760 } 00:10:24.760 Got JSON-RPC error response 00:10:24.760 response: 00:10:24.760 { 00:10:24.760 "code": -17, 00:10:24.760 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:24.760 } 00:10:24.760 06:02:32 -- common/autotest_common.sh@643 -- # es=1 00:10:24.760 06:02:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:24.760 06:02:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:24.760 06:02:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:24.760 06:02:32 -- bdev/bdev_raid.sh@403 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.760 06:02:32 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:10:24.760 06:02:32 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:10:24.760 06:02:32 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:10:24.760 06:02:32 -- bdev/bdev_raid.sh@409 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:25.019 [2024-05-13 06:02:33.135663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:25.019 [2024-05-13 06:02:33.135718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.019 [2024-05-13 06:02:33.135743] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abfa180 00:10:25.019 [2024-05-13 06:02:33.135749] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.019 [2024-05-13 06:02:33.136210] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.019 [2024-05-13 06:02:33.136244] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:25.019 [2024-05-13 06:02:33.136261] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:10:25.019 [2024-05-13 06:02:33.136270] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:25.019 pt1 00:10:25.019 06:02:33 -- bdev/bdev_raid.sh@412 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:25.019 06:02:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:25.019 06:02:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:25.019 06:02:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:25.019 06:02:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:25.019 06:02:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:25.019 06:02:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:25.020 06:02:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:25.020 06:02:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:25.020 06:02:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:25.020 06:02:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.020 06:02:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.020 06:02:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:25.020 "name": "raid_bdev1", 00:10:25.020 "uuid": "6241c6a2-10ee-11ef-ba60-3508ead7bdda", 00:10:25.020 "strip_size_kb": 0, 00:10:25.020 "state": "configuring", 00:10:25.020 "raid_level": "raid1", 00:10:25.020 "superblock": true, 00:10:25.020 "num_base_bdevs": 4, 00:10:25.020 "num_base_bdevs_discovered": 1, 00:10:25.020 "num_base_bdevs_operational": 4, 00:10:25.020 "base_bdevs_list": [ 00:10:25.020 { 00:10:25.020 "name": "pt1", 00:10:25.020 "uuid": "e2af3821-8954-2a5c-b35a-05b81461bf43", 00:10:25.020 "is_configured": true, 00:10:25.020 "data_offset": 2048, 00:10:25.020 "data_size": 63488 00:10:25.020 }, 00:10:25.020 { 00:10:25.020 "name": null, 00:10:25.020 "uuid": "017b7fa5-ebfe-9853-9b24-0b1818d28092", 00:10:25.020 "is_configured": false, 00:10:25.020 "data_offset": 2048, 00:10:25.020 "data_size": 63488 00:10:25.020 }, 00:10:25.020 { 00:10:25.020 "name": null, 00:10:25.020 "uuid": "4801281b-a1db-825f-9361-47aee3df8b79", 00:10:25.020 "is_configured": false, 00:10:25.020 "data_offset": 2048, 00:10:25.020 "data_size": 63488 00:10:25.020 }, 00:10:25.020 { 00:10:25.020 "name": null, 00:10:25.020 "uuid": "992f2c27-2430-8a5f-8d85-d8384c216431", 00:10:25.020 "is_configured": false, 00:10:25.020 "data_offset": 2048, 00:10:25.020 "data_size": 63488 00:10:25.020 } 00:10:25.020 ] 00:10:25.020 }' 00:10:25.020 06:02:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:25.020 06:02:33 -- common/autotest_common.sh@10 -- # set +x 00:10:25.590 06:02:33 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:10:25.590 06:02:33 -- bdev/bdev_raid.sh@416 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.590 [2024-05-13 06:02:33.763741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.590 [2024-05-13 06:02:33.763779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.590 [2024-05-13 06:02:33.763818] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abf9780 00:10:25.590 [2024-05-13 06:02:33.763826] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.590 [2024-05-13 06:02:33.763895] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.590 [2024-05-13 06:02:33.763903] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.591 [2024-05-13 06:02:33.763916] 
bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:25.591 [2024-05-13 06:02:33.763921] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.591 pt2 00:10:25.591 06:02:33 -- bdev/bdev_raid.sh@417 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:25.851 [2024-05-13 06:02:33.939764] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:25.851 06:02:33 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:25.851 06:02:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:25.851 06:02:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:25.851 06:02:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:25.851 06:02:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:25.851 06:02:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:25.851 06:02:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:25.851 06:02:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:25.851 06:02:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:25.851 06:02:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:25.851 06:02:33 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.851 06:02:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.851 06:02:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:25.851 "name": "raid_bdev1", 00:10:25.851 "uuid": "6241c6a2-10ee-11ef-ba60-3508ead7bdda", 00:10:25.851 "strip_size_kb": 0, 00:10:25.851 "state": "configuring", 00:10:25.851 "raid_level": "raid1", 00:10:25.851 "superblock": true, 00:10:25.851 "num_base_bdevs": 4, 00:10:25.851 "num_base_bdevs_discovered": 1, 00:10:25.851 "num_base_bdevs_operational": 4, 00:10:25.851 "base_bdevs_list": [ 00:10:25.851 { 00:10:25.851 "name": "pt1", 00:10:25.851 "uuid": "e2af3821-8954-2a5c-b35a-05b81461bf43", 00:10:25.851 "is_configured": true, 00:10:25.851 "data_offset": 2048, 00:10:25.851 "data_size": 63488 00:10:25.851 }, 00:10:25.851 { 00:10:25.851 "name": null, 00:10:25.851 "uuid": "017b7fa5-ebfe-9853-9b24-0b1818d28092", 00:10:25.851 "is_configured": false, 00:10:25.851 "data_offset": 2048, 00:10:25.851 "data_size": 63488 00:10:25.851 }, 00:10:25.851 { 00:10:25.851 "name": null, 00:10:25.851 "uuid": "4801281b-a1db-825f-9361-47aee3df8b79", 00:10:25.851 "is_configured": false, 00:10:25.851 "data_offset": 2048, 00:10:25.851 "data_size": 63488 00:10:25.851 }, 00:10:25.851 { 00:10:25.851 "name": null, 00:10:25.851 "uuid": "992f2c27-2430-8a5f-8d85-d8384c216431", 00:10:25.851 "is_configured": false, 00:10:25.851 "data_offset": 2048, 00:10:25.851 "data_size": 63488 00:10:25.851 } 00:10:25.851 ] 00:10:25.851 }' 00:10:25.851 06:02:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:25.851 06:02:34 -- common/autotest_common.sh@10 -- # set +x 00:10:26.111 06:02:34 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:10:26.111 06:02:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:26.111 06:02:34 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:26.370 [2024-05-13 06:02:34.543842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:26.370 [2024-05-13 06:02:34.543895] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.370 [2024-05-13 06:02:34.543916] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abf9780 00:10:26.370 [2024-05-13 06:02:34.543922] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.370 [2024-05-13 06:02:34.543988] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.370 [2024-05-13 06:02:34.543995] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:26.370 [2024-05-13 06:02:34.544021] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:26.370 [2024-05-13 06:02:34.544026] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:26.370 pt2 00:10:26.370 06:02:34 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:26.370 06:02:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:26.370 06:02:34 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:26.629 [2024-05-13 06:02:34.715863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:26.629 [2024-05-13 06:02:34.715897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.629 [2024-05-13 06:02:34.715928] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abfab80 00:10:26.629 [2024-05-13 06:02:34.715934] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.629 [2024-05-13 06:02:34.715985] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.629 [2024-05-13 06:02:34.715992] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:26.629 [2024-05-13 06:02:34.716004] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:10:26.629 [2024-05-13 06:02:34.716009] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:26.629 pt3 00:10:26.629 06:02:34 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:26.629 06:02:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:26.629 06:02:34 -- bdev/bdev_raid.sh@423 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:26.629 [2024-05-13 06:02:34.879884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:26.629 [2024-05-13 06:02:34.879918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.630 [2024-05-13 06:02:34.879947] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abfa900 00:10:26.630 [2024-05-13 06:02:34.879953] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.630 [2024-05-13 06:02:34.879998] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.630 [2024-05-13 06:02:34.880005] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:26.630 [2024-05-13 06:02:34.880017] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:10:26.630 [2024-05-13 06:02:34.880023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:26.630 [2024-05-13 06:02:34.880041] 
bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82abf9c80 00:10:26.630 [2024-05-13 06:02:34.880045] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:26.630 [2024-05-13 06:02:34.880075] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac5ce20 00:10:26.630 [2024-05-13 06:02:34.880110] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82abf9c80 00:10:26.630 [2024-05-13 06:02:34.880114] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82abf9c80 00:10:26.630 [2024-05-13 06:02:34.880129] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.630 pt4 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:26.630 06:02:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.889 06:02:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:26.889 "name": "raid_bdev1", 00:10:26.889 "uuid": "6241c6a2-10ee-11ef-ba60-3508ead7bdda", 00:10:26.889 "strip_size_kb": 0, 00:10:26.889 "state": "online", 00:10:26.889 "raid_level": "raid1", 00:10:26.889 "superblock": true, 00:10:26.889 "num_base_bdevs": 4, 00:10:26.889 "num_base_bdevs_discovered": 4, 00:10:26.889 "num_base_bdevs_operational": 4, 00:10:26.889 "base_bdevs_list": [ 00:10:26.889 { 00:10:26.889 "name": "pt1", 00:10:26.889 "uuid": "e2af3821-8954-2a5c-b35a-05b81461bf43", 00:10:26.889 "is_configured": true, 00:10:26.889 "data_offset": 2048, 00:10:26.889 "data_size": 63488 00:10:26.889 }, 00:10:26.889 { 00:10:26.889 "name": "pt2", 00:10:26.889 "uuid": "017b7fa5-ebfe-9853-9b24-0b1818d28092", 00:10:26.889 "is_configured": true, 00:10:26.889 "data_offset": 2048, 00:10:26.889 "data_size": 63488 00:10:26.889 }, 00:10:26.889 { 00:10:26.889 "name": "pt3", 00:10:26.889 "uuid": "4801281b-a1db-825f-9361-47aee3df8b79", 00:10:26.889 "is_configured": true, 00:10:26.889 "data_offset": 2048, 00:10:26.889 "data_size": 63488 00:10:26.889 }, 00:10:26.889 { 00:10:26.889 "name": "pt4", 00:10:26.889 "uuid": "992f2c27-2430-8a5f-8d85-d8384c216431", 00:10:26.889 "is_configured": true, 00:10:26.889 "data_offset": 2048, 00:10:26.889 "data_size": 63488 00:10:26.889 } 00:10:26.889 ] 00:10:26.889 }' 00:10:26.889 06:02:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:26.889 06:02:35 -- common/autotest_common.sh@10 -- # set +x 00:10:27.148 06:02:35 -- bdev/bdev_raid.sh@430 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:10:27.148 06:02:35 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:10:27.407 [2024-05-13 06:02:35.483982] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@430 -- # '[' 6241c6a2-10ee-11ef-ba60-3508ead7bdda '!=' 6241c6a2-10ee-11ef-ba60-3508ead7bdda ']' 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@196 -- # return 0 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@436 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:27.407 [2024-05-13 06:02:35.660001] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.407 06:02:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.665 06:02:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:27.665 "name": "raid_bdev1", 00:10:27.665 "uuid": "6241c6a2-10ee-11ef-ba60-3508ead7bdda", 00:10:27.665 "strip_size_kb": 0, 00:10:27.665 "state": "online", 00:10:27.665 "raid_level": "raid1", 00:10:27.665 "superblock": true, 00:10:27.665 "num_base_bdevs": 4, 00:10:27.665 "num_base_bdevs_discovered": 3, 00:10:27.665 "num_base_bdevs_operational": 3, 00:10:27.665 "base_bdevs_list": [ 00:10:27.665 { 00:10:27.665 "name": null, 00:10:27.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.665 "is_configured": false, 00:10:27.665 "data_offset": 2048, 00:10:27.665 "data_size": 63488 00:10:27.665 }, 00:10:27.665 { 00:10:27.665 "name": "pt2", 00:10:27.665 "uuid": "017b7fa5-ebfe-9853-9b24-0b1818d28092", 00:10:27.665 "is_configured": true, 00:10:27.665 "data_offset": 2048, 00:10:27.665 "data_size": 63488 00:10:27.665 }, 00:10:27.665 { 00:10:27.665 "name": "pt3", 00:10:27.665 "uuid": "4801281b-a1db-825f-9361-47aee3df8b79", 00:10:27.665 "is_configured": true, 00:10:27.665 "data_offset": 2048, 00:10:27.666 "data_size": 63488 00:10:27.666 }, 00:10:27.666 { 00:10:27.666 "name": "pt4", 00:10:27.666 "uuid": "992f2c27-2430-8a5f-8d85-d8384c216431", 00:10:27.666 "is_configured": true, 00:10:27.666 "data_offset": 2048, 00:10:27.666 "data_size": 63488 00:10:27.666 } 00:10:27.666 ] 00:10:27.666 }' 00:10:27.666 06:02:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:27.666 06:02:35 -- common/autotest_common.sh@10 -- # set +x 00:10:27.924 06:02:36 -- bdev/bdev_raid.sh@442 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:28.183 [2024-05-13 
06:02:36.256066] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:28.183 [2024-05-13 06:02:36.256082] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.183 [2024-05-13 06:02:36.256090] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.183 [2024-05-13 06:02:36.256101] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.183 [2024-05-13 06:02:36.256104] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abf9c80 name raid_bdev1, state offline 00:10:28.183 06:02:36 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:10:28.183 06:02:36 -- bdev/bdev_raid.sh@443 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.183 06:02:36 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:10:28.183 06:02:36 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:10:28.183 06:02:36 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:10:28.183 06:02:36 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:10:28.183 06:02:36 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:28.441 06:02:36 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:10:28.441 06:02:36 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:10:28.441 06:02:36 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:28.701 06:02:36 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:10:28.701 06:02:36 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:10:28.701 06:02:36 -- bdev/bdev_raid.sh@450 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:10:28.701 06:02:36 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:10:28.701 06:02:36 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:10:28.701 06:02:36 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:10:28.701 06:02:36 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:10:28.701 06:02:36 -- bdev/bdev_raid.sh@455 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:28.960 [2024-05-13 06:02:37.124190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:28.960 [2024-05-13 06:02:37.124235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.960 [2024-05-13 06:02:37.124272] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abfa900 00:10:28.960 [2024-05-13 06:02:37.124279] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.960 [2024-05-13 06:02:37.124780] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.960 [2024-05-13 06:02:37.124819] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:28.960 [2024-05-13 06:02:37.124838] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:28.960 [2024-05-13 06:02:37.124860] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:28.960 pt2 00:10:28.960 06:02:37 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:28.960 06:02:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:28.960 
06:02:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:28.960 06:02:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:28.960 06:02:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:28.960 06:02:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:28.960 06:02:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:28.960 06:02:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:28.960 06:02:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:28.960 06:02:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:28.960 06:02:37 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.960 06:02:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.219 06:02:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:29.219 "name": "raid_bdev1", 00:10:29.219 "uuid": "6241c6a2-10ee-11ef-ba60-3508ead7bdda", 00:10:29.219 "strip_size_kb": 0, 00:10:29.219 "state": "configuring", 00:10:29.219 "raid_level": "raid1", 00:10:29.219 "superblock": true, 00:10:29.219 "num_base_bdevs": 4, 00:10:29.219 "num_base_bdevs_discovered": 1, 00:10:29.219 "num_base_bdevs_operational": 3, 00:10:29.219 "base_bdevs_list": [ 00:10:29.219 { 00:10:29.219 "name": null, 00:10:29.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.219 "is_configured": false, 00:10:29.219 "data_offset": 2048, 00:10:29.219 "data_size": 63488 00:10:29.219 }, 00:10:29.219 { 00:10:29.219 "name": "pt2", 00:10:29.219 "uuid": "017b7fa5-ebfe-9853-9b24-0b1818d28092", 00:10:29.219 "is_configured": true, 00:10:29.219 "data_offset": 2048, 00:10:29.219 "data_size": 63488 00:10:29.219 }, 00:10:29.219 { 00:10:29.219 "name": null, 00:10:29.219 "uuid": "4801281b-a1db-825f-9361-47aee3df8b79", 00:10:29.219 "is_configured": false, 00:10:29.219 "data_offset": 2048, 00:10:29.219 "data_size": 63488 00:10:29.219 }, 00:10:29.219 { 00:10:29.219 "name": null, 00:10:29.219 "uuid": "992f2c27-2430-8a5f-8d85-d8384c216431", 00:10:29.219 "is_configured": false, 00:10:29.219 "data_offset": 2048, 00:10:29.219 "data_size": 63488 00:10:29.219 } 00:10:29.219 ] 00:10:29.219 }' 00:10:29.219 06:02:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:29.219 06:02:37 -- common/autotest_common.sh@10 -- # set +x 00:10:29.478 06:02:37 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:10:29.478 06:02:37 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:10:29.478 06:02:37 -- bdev/bdev_raid.sh@455 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:29.478 [2024-05-13 06:02:37.736269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:29.478 [2024-05-13 06:02:37.736327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.478 [2024-05-13 06:02:37.736351] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abfa680 00:10:29.478 [2024-05-13 06:02:37.736357] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.478 [2024-05-13 06:02:37.736424] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.478 [2024-05-13 06:02:37.736432] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:29.478 [2024-05-13 06:02:37.736445] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:10:29.478 [2024-05-13 06:02:37.736451] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:29.478 pt3 00:10:29.478 06:02:37 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:29.478 06:02:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:29.478 06:02:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:29.478 06:02:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:29.478 06:02:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:29.479 06:02:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:29.479 06:02:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:29.479 06:02:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:29.479 06:02:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:29.479 06:02:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:29.479 06:02:37 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.479 06:02:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.738 06:02:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:29.738 "name": "raid_bdev1", 00:10:29.738 "uuid": "6241c6a2-10ee-11ef-ba60-3508ead7bdda", 00:10:29.738 "strip_size_kb": 0, 00:10:29.738 "state": "configuring", 00:10:29.738 "raid_level": "raid1", 00:10:29.738 "superblock": true, 00:10:29.738 "num_base_bdevs": 4, 00:10:29.738 "num_base_bdevs_discovered": 2, 00:10:29.738 "num_base_bdevs_operational": 3, 00:10:29.738 "base_bdevs_list": [ 00:10:29.738 { 00:10:29.738 "name": null, 00:10:29.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.738 "is_configured": false, 00:10:29.738 "data_offset": 2048, 00:10:29.738 "data_size": 63488 00:10:29.738 }, 00:10:29.738 { 00:10:29.738 "name": "pt2", 00:10:29.738 "uuid": "017b7fa5-ebfe-9853-9b24-0b1818d28092", 00:10:29.738 "is_configured": true, 00:10:29.738 "data_offset": 2048, 00:10:29.738 "data_size": 63488 00:10:29.738 }, 00:10:29.738 { 00:10:29.738 "name": "pt3", 00:10:29.738 "uuid": "4801281b-a1db-825f-9361-47aee3df8b79", 00:10:29.738 "is_configured": true, 00:10:29.738 "data_offset": 2048, 00:10:29.738 "data_size": 63488 00:10:29.738 }, 00:10:29.738 { 00:10:29.738 "name": null, 00:10:29.738 "uuid": "992f2c27-2430-8a5f-8d85-d8384c216431", 00:10:29.738 "is_configured": false, 00:10:29.738 "data_offset": 2048, 00:10:29.738 "data_size": 63488 00:10:29.738 } 00:10:29.738 ] 00:10:29.738 }' 00:10:29.738 06:02:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:29.738 06:02:37 -- common/autotest_common.sh@10 -- # set +x 00:10:29.998 06:02:38 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:10:29.998 06:02:38 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:10:29.998 06:02:38 -- bdev/bdev_raid.sh@462 -- # i=3 00:10:29.998 06:02:38 -- bdev/bdev_raid.sh@463 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:30.258 [2024-05-13 06:02:38.332360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:30.258 [2024-05-13 06:02:38.332395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.258 [2024-05-13 06:02:38.332414] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abf9c80 00:10:30.258 [2024-05-13 06:02:38.332420] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.258 [2024-05-13 06:02:38.332501] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.258 [2024-05-13 06:02:38.332509] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:30.258 [2024-05-13 06:02:38.332522] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:10:30.258 [2024-05-13 06:02:38.332528] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:30.258 [2024-05-13 06:02:38.332548] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82abf9780 00:10:30.258 [2024-05-13 06:02:38.332551] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:30.258 [2024-05-13 06:02:38.332566] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac5ce20 00:10:30.258 [2024-05-13 06:02:38.332612] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82abf9780 00:10:30.258 [2024-05-13 06:02:38.332615] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82abf9780 00:10:30.258 [2024-05-13 06:02:38.332631] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.258 pt4 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:30.258 "name": "raid_bdev1", 00:10:30.258 "uuid": "6241c6a2-10ee-11ef-ba60-3508ead7bdda", 00:10:30.258 "strip_size_kb": 0, 00:10:30.258 "state": "online", 00:10:30.258 "raid_level": "raid1", 00:10:30.258 "superblock": true, 00:10:30.258 "num_base_bdevs": 4, 00:10:30.258 "num_base_bdevs_discovered": 3, 00:10:30.258 "num_base_bdevs_operational": 3, 00:10:30.258 "base_bdevs_list": [ 00:10:30.258 { 00:10:30.258 "name": null, 00:10:30.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.258 "is_configured": false, 00:10:30.258 "data_offset": 2048, 00:10:30.258 "data_size": 63488 00:10:30.258 }, 00:10:30.258 { 00:10:30.258 "name": "pt2", 00:10:30.258 "uuid": "017b7fa5-ebfe-9853-9b24-0b1818d28092", 00:10:30.258 "is_configured": true, 00:10:30.258 "data_offset": 2048, 00:10:30.258 "data_size": 63488 00:10:30.258 }, 00:10:30.258 { 00:10:30.258 "name": "pt3", 00:10:30.258 "uuid": "4801281b-a1db-825f-9361-47aee3df8b79", 00:10:30.258 "is_configured": true, 00:10:30.258 "data_offset": 2048, 00:10:30.258 "data_size": 63488 00:10:30.258 }, 00:10:30.258 { 00:10:30.258 "name": "pt4", 00:10:30.258 "uuid": 
"992f2c27-2430-8a5f-8d85-d8384c216431", 00:10:30.258 "is_configured": true, 00:10:30.258 "data_offset": 2048, 00:10:30.258 "data_size": 63488 00:10:30.258 } 00:10:30.258 ] 00:10:30.258 }' 00:10:30.258 06:02:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:30.258 06:02:38 -- common/autotest_common.sh@10 -- # set +x 00:10:30.517 06:02:38 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:10:30.517 06:02:38 -- bdev/bdev_raid.sh@470 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:30.776 [2024-05-13 06:02:38.948455] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.776 [2024-05-13 06:02:38.948474] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.776 [2024-05-13 06:02:38.948487] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.776 [2024-05-13 06:02:38.948499] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.776 [2024-05-13 06:02:38.948502] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abf9780 name raid_bdev1, state offline 00:10:30.776 06:02:38 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:10:30.776 06:02:38 -- bdev/bdev_raid.sh@471 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@478 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:31.035 [2024-05-13 06:02:39.276514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:31.035 [2024-05-13 06:02:39.276568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.035 [2024-05-13 06:02:39.276590] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abfab80 00:10:31.035 [2024-05-13 06:02:39.276596] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.035 [2024-05-13 06:02:39.277101] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.035 [2024-05-13 06:02:39.277132] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:31.035 [2024-05-13 06:02:39.277150] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:10:31.035 [2024-05-13 06:02:39.277159] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:31.035 pt1 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.035 06:02:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.295 06:02:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:31.295 "name": "raid_bdev1", 00:10:31.295 "uuid": "6241c6a2-10ee-11ef-ba60-3508ead7bdda", 00:10:31.295 "strip_size_kb": 0, 00:10:31.295 "state": "configuring", 00:10:31.295 "raid_level": "raid1", 00:10:31.295 "superblock": true, 00:10:31.295 "num_base_bdevs": 4, 00:10:31.295 "num_base_bdevs_discovered": 1, 00:10:31.295 "num_base_bdevs_operational": 4, 00:10:31.295 "base_bdevs_list": [ 00:10:31.295 { 00:10:31.295 "name": "pt1", 00:10:31.295 "uuid": "e2af3821-8954-2a5c-b35a-05b81461bf43", 00:10:31.295 "is_configured": true, 00:10:31.295 "data_offset": 2048, 00:10:31.295 "data_size": 63488 00:10:31.295 }, 00:10:31.295 { 00:10:31.295 "name": null, 00:10:31.295 "uuid": "017b7fa5-ebfe-9853-9b24-0b1818d28092", 00:10:31.295 "is_configured": false, 00:10:31.295 "data_offset": 2048, 00:10:31.295 "data_size": 63488 00:10:31.295 }, 00:10:31.295 { 00:10:31.295 "name": null, 00:10:31.295 "uuid": "4801281b-a1db-825f-9361-47aee3df8b79", 00:10:31.295 "is_configured": false, 00:10:31.295 "data_offset": 2048, 00:10:31.295 "data_size": 63488 00:10:31.295 }, 00:10:31.295 { 00:10:31.295 "name": null, 00:10:31.295 "uuid": "992f2c27-2430-8a5f-8d85-d8384c216431", 00:10:31.295 "is_configured": false, 00:10:31.295 "data_offset": 2048, 00:10:31.295 "data_size": 63488 00:10:31.295 } 00:10:31.295 ] 00:10:31.295 }' 00:10:31.295 06:02:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:31.295 06:02:39 -- common/autotest_common.sh@10 -- # set +x 00:10:31.555 06:02:39 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:10:31.555 06:02:39 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:10:31.555 06:02:39 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:31.814 06:02:39 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:10:31.814 06:02:39 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:10:31.814 06:02:39 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:31.814 06:02:40 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:10:31.814 06:02:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:10:31.814 06:02:40 -- bdev/bdev_raid.sh@485 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:10:32.080 06:02:40 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:10:32.080 06:02:40 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:10:32.080 06:02:40 -- bdev/bdev_raid.sh@489 -- # i=3 00:10:32.080 06:02:40 -- bdev/bdev_raid.sh@490 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:32.080 [2024-05-13 06:02:40.384657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:32.080 [2024-05-13 06:02:40.384697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.080 [2024-05-13 06:02:40.384719] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abf9c80 00:10:32.080 [2024-05-13 06:02:40.384725] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.080 [2024-05-13 
06:02:40.384814] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.080 [2024-05-13 06:02:40.384822] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:32.080 [2024-05-13 06:02:40.384835] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:10:32.080 [2024-05-13 06:02:40.384839] bdev_raid.c:3239:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:32.080 [2024-05-13 06:02:40.384841] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.080 [2024-05-13 06:02:40.384845] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abfa180 name raid_bdev1, state configuring 00:10:32.080 [2024-05-13 06:02:40.384855] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:32.080 pt4 00:10:32.348 06:02:40 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:32.348 06:02:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:32.348 06:02:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:32.348 06:02:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:32.348 06:02:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:32.348 06:02:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:32.348 06:02:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:32.348 06:02:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:32.348 06:02:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:32.348 06:02:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:32.348 06:02:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.348 06:02:40 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.348 06:02:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:32.348 "name": "raid_bdev1", 00:10:32.348 "uuid": "6241c6a2-10ee-11ef-ba60-3508ead7bdda", 00:10:32.348 "strip_size_kb": 0, 00:10:32.348 "state": "configuring", 00:10:32.348 "raid_level": "raid1", 00:10:32.348 "superblock": true, 00:10:32.349 "num_base_bdevs": 4, 00:10:32.349 "num_base_bdevs_discovered": 1, 00:10:32.349 "num_base_bdevs_operational": 3, 00:10:32.349 "base_bdevs_list": [ 00:10:32.349 { 00:10:32.349 "name": null, 00:10:32.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.349 "is_configured": false, 00:10:32.349 "data_offset": 2048, 00:10:32.349 "data_size": 63488 00:10:32.349 }, 00:10:32.349 { 00:10:32.349 "name": null, 00:10:32.349 "uuid": "017b7fa5-ebfe-9853-9b24-0b1818d28092", 00:10:32.349 "is_configured": false, 00:10:32.349 "data_offset": 2048, 00:10:32.349 "data_size": 63488 00:10:32.349 }, 00:10:32.349 { 00:10:32.349 "name": null, 00:10:32.349 "uuid": "4801281b-a1db-825f-9361-47aee3df8b79", 00:10:32.349 "is_configured": false, 00:10:32.349 "data_offset": 2048, 00:10:32.349 "data_size": 63488 00:10:32.349 }, 00:10:32.349 { 00:10:32.349 "name": "pt4", 00:10:32.349 "uuid": "992f2c27-2430-8a5f-8d85-d8384c216431", 00:10:32.349 "is_configured": true, 00:10:32.349 "data_offset": 2048, 00:10:32.349 "data_size": 63488 00:10:32.349 } 00:10:32.349 ] 00:10:32.349 }' 00:10:32.349 06:02:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:32.349 06:02:40 -- common/autotest_common.sh@10 -- # set +x 00:10:32.608 06:02:40 -- bdev/bdev_raid.sh@497 -- # (( i 
= 1 )) 00:10:32.608 06:02:40 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:10:32.608 06:02:40 -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:32.867 [2024-05-13 06:02:40.996732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:32.867 [2024-05-13 06:02:40.996788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.867 [2024-05-13 06:02:40.996809] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abfa680 00:10:32.867 [2024-05-13 06:02:40.996815] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.867 [2024-05-13 06:02:40.996877] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.867 [2024-05-13 06:02:40.996890] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:32.867 [2024-05-13 06:02:40.996904] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:32.867 [2024-05-13 06:02:40.996910] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:32.867 pt2 00:10:32.867 06:02:41 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:10:32.867 06:02:41 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:10:32.867 06:02:41 -- bdev/bdev_raid.sh@498 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:32.867 [2024-05-13 06:02:41.172751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:32.867 [2024-05-13 06:02:41.172782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.867 [2024-05-13 06:02:41.172797] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x82abfa900 00:10:32.867 [2024-05-13 06:02:41.172802] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.867 [2024-05-13 06:02:41.172870] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.867 [2024-05-13 06:02:41.172879] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:32.867 [2024-05-13 06:02:41.172891] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:10:32.867 [2024-05-13 06:02:41.172896] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:32.867 [2024-05-13 06:02:41.172913] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x82abfa180 00:10:32.867 [2024-05-13 06:02:41.172916] bdev_raid.c:1586:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:32.867 [2024-05-13 06:02:41.172930] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x82ac5ce20 00:10:32.867 [2024-05-13 06:02:41.172958] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x82abfa180 00:10:32.867 [2024-05-13 06:02:41.172961] bdev_raid.c:1616:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x82abfa180 00:10:32.867 [2024-05-13 06:02:41.172974] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.867 pt3 00:10:33.126 06:02:41 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:10:33.126 06:02:41 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:10:33.127 06:02:41 -- 
bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:33.127 06:02:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:33.127 06:02:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:33.127 06:02:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:33.127 06:02:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:33.127 06:02:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:33.127 06:02:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:33.127 06:02:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:33.127 06:02:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:33.127 06:02:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:33.127 06:02:41 -- bdev/bdev_raid.sh@127 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.127 06:02:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.127 06:02:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:33.127 "name": "raid_bdev1", 00:10:33.127 "uuid": "6241c6a2-10ee-11ef-ba60-3508ead7bdda", 00:10:33.127 "strip_size_kb": 0, 00:10:33.127 "state": "online", 00:10:33.127 "raid_level": "raid1", 00:10:33.127 "superblock": true, 00:10:33.127 "num_base_bdevs": 4, 00:10:33.127 "num_base_bdevs_discovered": 3, 00:10:33.127 "num_base_bdevs_operational": 3, 00:10:33.127 "base_bdevs_list": [ 00:10:33.127 { 00:10:33.127 "name": null, 00:10:33.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.127 "is_configured": false, 00:10:33.127 "data_offset": 2048, 00:10:33.127 "data_size": 63488 00:10:33.127 }, 00:10:33.127 { 00:10:33.127 "name": "pt2", 00:10:33.127 "uuid": "017b7fa5-ebfe-9853-9b24-0b1818d28092", 00:10:33.127 "is_configured": true, 00:10:33.127 "data_offset": 2048, 00:10:33.127 "data_size": 63488 00:10:33.127 }, 00:10:33.127 { 00:10:33.127 "name": "pt3", 00:10:33.127 "uuid": "4801281b-a1db-825f-9361-47aee3df8b79", 00:10:33.127 "is_configured": true, 00:10:33.127 "data_offset": 2048, 00:10:33.127 "data_size": 63488 00:10:33.127 }, 00:10:33.127 { 00:10:33.127 "name": "pt4", 00:10:33.127 "uuid": "992f2c27-2430-8a5f-8d85-d8384c216431", 00:10:33.127 "is_configured": true, 00:10:33.127 "data_offset": 2048, 00:10:33.127 "data_size": 63488 00:10:33.127 } 00:10:33.127 ] 00:10:33.127 }' 00:10:33.127 06:02:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:33.127 06:02:41 -- common/autotest_common.sh@10 -- # set +x 00:10:33.386 06:02:41 -- bdev/bdev_raid.sh@506 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:33.386 06:02:41 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:10:33.644 [2024-05-13 06:02:41.780841] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.644 06:02:41 -- bdev/bdev_raid.sh@506 -- # '[' 6241c6a2-10ee-11ef-ba60-3508ead7bdda '!=' 6241c6a2-10ee-11ef-ba60-3508ead7bdda ']' 00:10:33.644 06:02:41 -- bdev/bdev_raid.sh@511 -- # killprocess 53435 00:10:33.644 06:02:41 -- common/autotest_common.sh@926 -- # '[' -z 53435 ']' 00:10:33.644 06:02:41 -- common/autotest_common.sh@930 -- # kill -0 53435 00:10:33.644 06:02:41 -- common/autotest_common.sh@931 -- # uname 00:10:33.644 06:02:41 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:10:33.644 06:02:41 -- common/autotest_common.sh@934 -- # ps -c -o command 53435 00:10:33.644 06:02:41 -- common/autotest_common.sh@934 -- # tail -1 00:10:33.644 
06:02:41 -- common/autotest_common.sh@934 -- # process_name=bdev_svc 00:10:33.644 06:02:41 -- common/autotest_common.sh@936 -- # '[' bdev_svc = sudo ']' 00:10:33.644 killing process with pid 53435 00:10:33.644 06:02:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53435' 00:10:33.644 06:02:41 -- common/autotest_common.sh@945 -- # kill 53435 00:10:33.644 [2024-05-13 06:02:41.809870] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.644 [2024-05-13 06:02:41.809899] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.644 [2024-05-13 06:02:41.809912] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.644 [2024-05-13 06:02:41.809916] bdev_raid.c: 352:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x82abfa180 name raid_bdev1, state offline 00:10:33.644 06:02:41 -- common/autotest_common.sh@950 -- # wait 53435 00:10:33.644 [2024-05-13 06:02:41.828405] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.903 06:02:41 -- bdev/bdev_raid.sh@513 -- # return 0 00:10:33.903 00:10:33.903 real 0m13.474s 00:10:33.903 user 0m24.061s 00:10:33.903 sys 0m2.180s 00:10:33.903 06:02:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.903 06:02:41 -- common/autotest_common.sh@10 -- # set +x 00:10:33.903 ************************************ 00:10:33.903 END TEST raid_superblock_test 00:10:33.903 ************************************ 00:10:33.903 06:02:42 -- bdev/bdev_raid.sh@733 -- # '[' '' = true ']' 00:10:33.903 06:02:42 -- bdev/bdev_raid.sh@742 -- # '[' n == y ']' 00:10:33.903 06:02:42 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:10:33.903 00:10:33.903 real 3m30.922s 00:10:33.903 user 5m58.289s 00:10:33.903 sys 0m41.654s 00:10:33.903 06:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.903 06:02:42 -- common/autotest_common.sh@10 -- # set +x 00:10:33.903 ************************************ 00:10:33.903 END TEST bdev_raid 00:10:33.903 ************************************ 00:10:33.903 06:02:42 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:10:33.903 06:02:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:33.903 06:02:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:33.903 06:02:42 -- common/autotest_common.sh@10 -- # set +x 00:10:33.903 ************************************ 00:10:33.903 START TEST bdevperf_config 00:10:33.903 ************************************ 00:10:33.903 06:02:42 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:10:34.162 * Looking for test storage... 
00:10:34.162 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:10:34.162 06:02:42 -- bdevperf/test_config.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:10:34.162 06:02:42 -- bdevperf/common.sh@5 -- # bdevperf=/usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:10:34.162 06:02:42 -- bdevperf/test_config.sh@12 -- # jsonconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:10:34.162 06:02:42 -- bdevperf/test_config.sh@13 -- # testconf=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:34.162 06:02:42 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:34.162 06:02:42 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:10:34.162 06:02:42 -- bdevperf/common.sh@8 -- # local job_section=global 00:10:34.162 06:02:42 -- bdevperf/common.sh@9 -- # local rw=read 00:10:34.162 06:02:42 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:10:34.162 06:02:42 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:10:34.162 06:02:42 -- bdevperf/common.sh@13 -- # cat 00:10:34.162 00:10:34.162 06:02:42 -- bdevperf/common.sh@18 -- # job='[global]' 00:10:34.162 06:02:42 -- bdevperf/common.sh@19 -- # echo 00:10:34.162 06:02:42 -- bdevperf/common.sh@20 -- # cat 00:10:34.162 06:02:42 -- bdevperf/test_config.sh@18 -- # create_job job0 00:10:34.162 06:02:42 -- bdevperf/common.sh@8 -- # local job_section=job0 00:10:34.162 06:02:42 -- bdevperf/common.sh@9 -- # local rw= 00:10:34.162 06:02:42 -- bdevperf/common.sh@10 -- # local filename= 00:10:34.162 06:02:42 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:10:34.162 06:02:42 -- bdevperf/common.sh@18 -- # job='[job0]' 00:10:34.162 00:10:34.162 06:02:42 -- bdevperf/common.sh@19 -- # echo 00:10:34.162 06:02:42 -- bdevperf/common.sh@20 -- # cat 00:10:34.162 06:02:42 -- bdevperf/test_config.sh@19 -- # create_job job1 00:10:34.162 06:02:42 -- bdevperf/common.sh@8 -- # local job_section=job1 00:10:34.162 06:02:42 -- bdevperf/common.sh@9 -- # local rw= 00:10:34.162 06:02:42 -- bdevperf/common.sh@10 -- # local filename= 00:10:34.162 06:02:42 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:10:34.162 06:02:42 -- bdevperf/common.sh@18 -- # job='[job1]' 00:10:34.162 00:10:34.162 06:02:42 -- bdevperf/common.sh@19 -- # echo 00:10:34.162 06:02:42 -- bdevperf/common.sh@20 -- # cat 00:10:34.162 06:02:42 -- bdevperf/test_config.sh@20 -- # create_job job2 00:10:34.162 06:02:42 -- bdevperf/common.sh@8 -- # local job_section=job2 00:10:34.162 06:02:42 -- bdevperf/common.sh@9 -- # local rw= 00:10:34.162 06:02:42 -- bdevperf/common.sh@10 -- # local filename= 00:10:34.162 06:02:42 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:10:34.162 06:02:42 -- bdevperf/common.sh@18 -- # job='[job2]' 00:10:34.162 00:10:34.162 06:02:42 -- bdevperf/common.sh@19 -- # echo 00:10:34.162 06:02:42 -- bdevperf/common.sh@20 -- # cat 00:10:34.162 06:02:42 -- bdevperf/test_config.sh@21 -- # create_job job3 00:10:34.162 06:02:42 -- bdevperf/common.sh@8 -- # local job_section=job3 00:10:34.162 06:02:42 -- bdevperf/common.sh@9 -- # local rw= 00:10:34.162 06:02:42 -- bdevperf/common.sh@10 -- # local filename= 00:10:34.162 06:02:42 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:10:34.162 06:02:42 -- bdevperf/common.sh@18 -- # job='[job3]' 00:10:34.162 00:10:34.162 06:02:42 -- bdevperf/common.sh@19 -- # echo 00:10:34.162 06:02:42 -- bdevperf/common.sh@20 -- # cat 00:10:34.162 06:02:42 -- 
bdevperf/test_config.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:36.696 06:02:44 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-05-13 06:02:42.297077] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:36.696 [2024-05-13 06:02:42.297457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:36.696 Using job config with 4 jobs 00:10:36.696 EAL: TSC is not safe to use in SMP mode 00:10:36.696 EAL: TSC is not invariant 00:10:36.696 [2024-05-13 06:02:42.714130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.696 [2024-05-13 06:02:42.798341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.696 cpumask for '\''job0'\'' is too big 00:10:36.696 cpumask for '\''job1'\'' is too big 00:10:36.696 cpumask for '\''job2'\'' is too big 00:10:36.696 cpumask for '\''job3'\'' is too big 00:10:36.696 Running I/O for 2 seconds... 00:10:36.696 00:10:36.696 Latency(us) 00:10:36.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:36.696 Malloc0 : 2.00 448815.53 438.30 0.00 0.00 570.22 149.95 1128.16 00:10:36.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:36.696 Malloc0 : 2.00 448801.84 438.28 0.00 0.00 570.16 138.34 963.93 00:10:36.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:36.696 Malloc0 : 2.00 448848.80 438.33 0.00 0.00 570.01 142.80 806.85 00:10:36.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:36.696 Malloc0 : 2.00 448829.96 438.31 0.00 0.00 569.93 132.99 660.47 00:10:36.696 =================================================================================================================== 00:10:36.696 Total : 1795296.14 1753.22 0.00 0.00 570.08 132.99 1128.16' 00:10:36.696 06:02:44 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-05-13 06:02:42.297077] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:36.696 [2024-05-13 06:02:42.297457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:36.696 Using job config with 4 jobs 00:10:36.696 EAL: TSC is not safe to use in SMP mode 00:10:36.696 EAL: TSC is not invariant 00:10:36.696 [2024-05-13 06:02:42.714130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.696 [2024-05-13 06:02:42.798341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.696 cpumask for '\''job0'\'' is too big 00:10:36.696 cpumask for '\''job1'\'' is too big 00:10:36.696 cpumask for '\''job2'\'' is too big 00:10:36.696 cpumask for '\''job3'\'' is too big 00:10:36.696 Running I/O for 2 seconds... 
00:10:36.696 00:10:36.696 Latency(us) 00:10:36.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:36.696 Malloc0 : 2.00 448815.53 438.30 0.00 0.00 570.22 149.95 1128.16 00:10:36.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:36.696 Malloc0 : 2.00 448801.84 438.28 0.00 0.00 570.16 138.34 963.93 00:10:36.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:36.696 Malloc0 : 2.00 448848.80 438.33 0.00 0.00 570.01 142.80 806.85 00:10:36.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:36.696 Malloc0 : 2.00 448829.96 438.31 0.00 0.00 569.93 132.99 660.47 00:10:36.696 =================================================================================================================== 00:10:36.696 Total : 1795296.14 1753.22 0.00 0.00 570.08 132.99 1128.16' 00:10:36.696 06:02:44 -- bdevperf/common.sh@32 -- # echo '[2024-05-13 06:02:42.297077] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:36.696 [2024-05-13 06:02:42.297457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:36.696 Using job config with 4 jobs 00:10:36.696 EAL: TSC is not safe to use in SMP mode 00:10:36.696 EAL: TSC is not invariant 00:10:36.696 [2024-05-13 06:02:42.714130] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.696 [2024-05-13 06:02:42.798341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.696 cpumask for '\''job0'\'' is too big 00:10:36.696 cpumask for '\''job1'\'' is too big 00:10:36.696 cpumask for '\''job2'\'' is too big 00:10:36.696 cpumask for '\''job3'\'' is too big 00:10:36.696 Running I/O for 2 seconds... 00:10:36.696 00:10:36.696 Latency(us) 00:10:36.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:36.696 Malloc0 : 2.00 448815.53 438.30 0.00 0.00 570.22 149.95 1128.16 00:10:36.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:36.696 Malloc0 : 2.00 448801.84 438.28 0.00 0.00 570.16 138.34 963.93 00:10:36.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:36.696 Malloc0 : 2.00 448848.80 438.33 0.00 0.00 570.01 142.80 806.85 00:10:36.696 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:36.696 Malloc0 : 2.00 448829.96 438.31 0.00 0.00 569.93 132.99 660.47 00:10:36.696 =================================================================================================================== 00:10:36.696 Total : 1795296.14 1753.22 0.00 0.00 570.08 132.99 1128.16' 00:10:36.696 06:02:44 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:10:36.696 06:02:44 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:10:36.696 06:02:44 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:10:36.696 06:02:44 -- bdevperf/test_config.sh@25 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:36.697 [2024-05-13 06:02:44.995102] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:36.697 [2024-05-13 06:02:44.995456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:37.265 EAL: TSC is not safe to use in SMP mode 00:10:37.265 EAL: TSC is not invariant 00:10:37.265 [2024-05-13 06:02:45.411675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.265 [2024-05-13 06:02:45.498845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.265 cpumask for 'job0' is too big 00:10:37.265 cpumask for 'job1' is too big 00:10:37.265 cpumask for 'job2' is too big 00:10:37.265 cpumask for 'job3' is too big 00:10:39.803 06:02:47 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:10:39.803 Running I/O for 2 seconds... 00:10:39.803 00:10:39.803 Latency(us) 00:10:39.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.803 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:39.803 Malloc0 : 2.00 448351.04 437.84 0.00 0.00 570.82 147.27 1135.30 00:10:39.803 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:39.803 Malloc0 : 2.00 448365.82 437.86 0.00 0.00 570.72 138.34 978.21 00:10:39.803 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:39.803 Malloc0 : 2.00 448353.05 437.84 0.00 0.00 570.64 145.48 813.99 00:10:39.803 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:10:39.803 Malloc0 : 2.00 448333.56 437.83 0.00 0.00 570.57 146.37 653.33 00:10:39.803 =================================================================================================================== 00:10:39.803 Total : 1793403.48 1751.37 0.00 0.00 570.69 138.34 1135.30' 00:10:39.803 06:02:47 -- bdevperf/test_config.sh@27 -- # cleanup 00:10:39.803 06:02:47 -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:39.803 06:02:47 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:10:39.803 06:02:47 -- bdevperf/common.sh@8 -- # local job_section=job0 00:10:39.803 06:02:47 -- bdevperf/common.sh@9 -- # local rw=write 00:10:39.803 06:02:47 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:10:39.803 06:02:47 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:10:39.803 06:02:47 -- bdevperf/common.sh@18 -- # job='[job0]' 00:10:39.803 00:10:39.803 06:02:47 -- bdevperf/common.sh@19 -- # echo 00:10:39.803 06:02:47 -- bdevperf/common.sh@20 -- # cat 00:10:39.803 06:02:47 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:10:39.803 06:02:47 -- bdevperf/common.sh@8 -- # local job_section=job1 00:10:39.803 06:02:47 -- bdevperf/common.sh@9 -- # local rw=write 00:10:39.803 06:02:47 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:10:39.803 06:02:47 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:10:39.803 06:02:47 -- bdevperf/common.sh@18 -- # job='[job1]' 00:10:39.803 00:10:39.803 06:02:47 -- bdevperf/common.sh@19 -- # echo 00:10:39.803 06:02:47 -- bdevperf/common.sh@20 -- # cat 00:10:39.803 06:02:47 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:10:39.803 06:02:47 -- bdevperf/common.sh@8 -- # local job_section=job2 00:10:39.803 06:02:47 -- bdevperf/common.sh@9 -- # local rw=write 00:10:39.803 06:02:47 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:10:39.803 06:02:47 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:10:39.803 06:02:47 -- bdevperf/common.sh@18 -- # 
job='[job2]' 00:10:39.803 00:10:39.803 06:02:47 -- bdevperf/common.sh@19 -- # echo 00:10:39.803 06:02:47 -- bdevperf/common.sh@20 -- # cat 00:10:39.803 06:02:47 -- bdevperf/test_config.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:42.340 06:02:50 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-05-13 06:02:47.705538] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:42.340 [2024-05-13 06:02:47.705905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:42.340 Using job config with 3 jobs 00:10:42.340 EAL: TSC is not safe to use in SMP mode 00:10:42.340 EAL: TSC is not invariant 00:10:42.340 [2024-05-13 06:02:48.123553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.340 [2024-05-13 06:02:48.208276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.340 cpumask for '\''job0'\'' is too big 00:10:42.340 cpumask for '\''job1'\'' is too big 00:10:42.340 cpumask for '\''job2'\'' is too big 00:10:42.340 Running I/O for 2 seconds... 00:10:42.340 00:10:42.340 Latency(us) 00:10:42.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.340 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:42.340 Malloc0 : 2.00 544288.57 531.53 0.00 0.00 470.17 174.94 821.13 00:10:42.340 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:42.340 Malloc0 : 2.00 544271.16 531.51 0.00 0.00 470.09 149.05 692.60 00:10:42.340 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:42.340 Malloc0 : 2.00 544332.45 531.57 0.00 0.00 469.97 52.88 549.80 00:10:42.340 =================================================================================================================== 00:10:42.341 Total : 1632892.18 1594.62 0.00 0.00 470.08 52.88 821.13' 00:10:42.341 06:02:50 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-05-13 06:02:47.705538] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:42.341 [2024-05-13 06:02:47.705905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:42.341 Using job config with 3 jobs 00:10:42.341 EAL: TSC is not safe to use in SMP mode 00:10:42.341 EAL: TSC is not invariant 00:10:42.341 [2024-05-13 06:02:48.123553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.341 [2024-05-13 06:02:48.208276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.341 cpumask for '\''job0'\'' is too big 00:10:42.341 cpumask for '\''job1'\'' is too big 00:10:42.341 cpumask for '\''job2'\'' is too big 00:10:42.341 Running I/O for 2 seconds... 
00:10:42.341 00:10:42.341 Latency(us) 00:10:42.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:42.341 Malloc0 : 2.00 544288.57 531.53 0.00 0.00 470.17 174.94 821.13 00:10:42.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:42.341 Malloc0 : 2.00 544271.16 531.51 0.00 0.00 470.09 149.05 692.60 00:10:42.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:42.341 Malloc0 : 2.00 544332.45 531.57 0.00 0.00 469.97 52.88 549.80 00:10:42.341 =================================================================================================================== 00:10:42.341 Total : 1632892.18 1594.62 0.00 0.00 470.08 52.88 821.13' 00:10:42.341 06:02:50 -- bdevperf/common.sh@32 -- # echo '[2024-05-13 06:02:47.705538] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:42.341 [2024-05-13 06:02:47.705905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:42.341 Using job config with 3 jobs 00:10:42.341 EAL: TSC is not safe to use in SMP mode 00:10:42.341 EAL: TSC is not invariant 00:10:42.341 [2024-05-13 06:02:48.123553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.341 [2024-05-13 06:02:48.208276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.341 cpumask for '\''job0'\'' is too big 00:10:42.341 cpumask for '\''job1'\'' is too big 00:10:42.341 cpumask for '\''job2'\'' is too big 00:10:42.341 Running I/O for 2 seconds... 00:10:42.341 00:10:42.341 Latency(us) 00:10:42.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:42.341 Malloc0 : 2.00 544288.57 531.53 0.00 0.00 470.17 174.94 821.13 00:10:42.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:42.341 Malloc0 : 2.00 544271.16 531.51 0.00 0.00 470.09 149.05 692.60 00:10:42.341 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:10:42.341 Malloc0 : 2.00 544332.45 531.57 0.00 0.00 469.97 52.88 549.80 00:10:42.341 =================================================================================================================== 00:10:42.341 Total : 1632892.18 1594.62 0.00 0.00 470.08 52.88 821.13' 00:10:42.341 06:02:50 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:10:42.341 06:02:50 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:10:42.341 06:02:50 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:10:42.341 06:02:50 -- bdevperf/test_config.sh@35 -- # cleanup 00:10:42.341 06:02:50 -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:42.341 06:02:50 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:10:42.341 06:02:50 -- bdevperf/common.sh@8 -- # local job_section=global 00:10:42.341 06:02:50 -- bdevperf/common.sh@9 -- # local rw=rw 00:10:42.341 06:02:50 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:10:42.341 06:02:50 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:10:42.341 06:02:50 -- bdevperf/common.sh@13 -- # cat 00:10:42.341 00:10:42.341 06:02:50 -- bdevperf/common.sh@18 -- # job='[global]' 00:10:42.341 06:02:50 -- bdevperf/common.sh@19 -- # echo 00:10:42.341 06:02:50 -- 
bdevperf/common.sh@20 -- # cat 00:10:42.341 06:02:50 -- bdevperf/test_config.sh@38 -- # create_job job0 00:10:42.341 06:02:50 -- bdevperf/common.sh@8 -- # local job_section=job0 00:10:42.341 06:02:50 -- bdevperf/common.sh@9 -- # local rw= 00:10:42.341 06:02:50 -- bdevperf/common.sh@10 -- # local filename= 00:10:42.341 06:02:50 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:10:42.341 06:02:50 -- bdevperf/common.sh@18 -- # job='[job0]' 00:10:42.341 00:10:42.341 06:02:50 -- bdevperf/common.sh@19 -- # echo 00:10:42.341 06:02:50 -- bdevperf/common.sh@20 -- # cat 00:10:42.341 06:02:50 -- bdevperf/test_config.sh@39 -- # create_job job1 00:10:42.341 06:02:50 -- bdevperf/common.sh@8 -- # local job_section=job1 00:10:42.341 06:02:50 -- bdevperf/common.sh@9 -- # local rw= 00:10:42.341 06:02:50 -- bdevperf/common.sh@10 -- # local filename= 00:10:42.341 06:02:50 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:10:42.341 06:02:50 -- bdevperf/common.sh@18 -- # job='[job1]' 00:10:42.341 00:10:42.341 06:02:50 -- bdevperf/common.sh@19 -- # echo 00:10:42.341 06:02:50 -- bdevperf/common.sh@20 -- # cat 00:10:42.341 06:02:50 -- bdevperf/test_config.sh@40 -- # create_job job2 00:10:42.341 06:02:50 -- bdevperf/common.sh@8 -- # local job_section=job2 00:10:42.341 06:02:50 -- bdevperf/common.sh@9 -- # local rw= 00:10:42.341 06:02:50 -- bdevperf/common.sh@10 -- # local filename= 00:10:42.341 06:02:50 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:10:42.341 06:02:50 -- bdevperf/common.sh@18 -- # job='[job2]' 00:10:42.341 00:10:42.341 06:02:50 -- bdevperf/common.sh@19 -- # echo 00:10:42.341 06:02:50 -- bdevperf/common.sh@20 -- # cat 00:10:42.341 06:02:50 -- bdevperf/test_config.sh@41 -- # create_job job3 00:10:42.341 06:02:50 -- bdevperf/common.sh@8 -- # local job_section=job3 00:10:42.341 06:02:50 -- bdevperf/common.sh@9 -- # local rw= 00:10:42.341 06:02:50 -- bdevperf/common.sh@10 -- # local filename= 00:10:42.341 06:02:50 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:10:42.341 06:02:50 -- bdevperf/common.sh@18 -- # job='[job3]' 00:10:42.341 00:10:42.341 06:02:50 -- bdevperf/common.sh@19 -- # echo 00:10:42.341 06:02:50 -- bdevperf/common.sh@20 -- # cat 00:10:42.341 06:02:50 -- bdevperf/test_config.sh@42 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:44.880 06:02:53 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-05-13 06:02:50.437578] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:44.880 [2024-05-13 06:02:50.437917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:44.880 Using job config with 4 jobs 00:10:44.880 EAL: TSC is not safe to use in SMP mode 00:10:44.880 EAL: TSC is not invariant 00:10:44.880 [2024-05-13 06:02:50.852381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.880 [2024-05-13 06:02:50.924727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.880 cpumask for '\''job0'\'' is too big 00:10:44.880 cpumask for '\''job1'\'' is too big 00:10:44.880 cpumask for '\''job2'\'' is too big 00:10:44.880 cpumask for '\''job3'\'' is too big 00:10:44.880 Running I/O for 2 seconds... 
00:10:44.880 00:10:44.880 Latency(us) 00:10:44.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.880 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.880 Malloc0 : 2.00 202638.05 197.89 0.00 0.00 1263.09 376.65 2527.65 00:10:44.880 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.880 Malloc1 : 2.00 202626.31 197.88 0.00 0.00 1263.08 371.29 2499.08 00:10:44.880 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.880 Malloc0 : 2.00 202618.93 197.87 0.00 0.00 1262.78 364.15 2113.51 00:10:44.880 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.880 Malloc1 : 2.00 202610.63 197.86 0.00 0.00 1262.67 346.30 2099.23 00:10:44.880 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.880 Malloc0 : 2.00 202600.11 197.85 0.00 0.00 1262.41 364.15 1706.52 00:10:44.880 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.880 Malloc1 : 2.00 202657.50 197.91 0.00 0.00 1261.93 344.52 1677.96 00:10:44.880 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.880 Malloc0 : 2.00 202647.69 197.90 0.00 0.00 1261.67 358.80 1478.03 00:10:44.880 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.880 Malloc1 : 2.00 202637.48 197.89 0.00 0.00 1261.54 282.04 1485.17 00:10:44.880 =================================================================================================================== 00:10:44.880 Total : 1621036.70 1583.04 0.00 0.00 1262.40 282.04 2527.65' 00:10:44.880 06:02:53 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-05-13 06:02:50.437578] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:44.880 [2024-05-13 06:02:50.437917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:44.880 Using job config with 4 jobs 00:10:44.880 EAL: TSC is not safe to use in SMP mode 00:10:44.880 EAL: TSC is not invariant 00:10:44.880 [2024-05-13 06:02:50.852381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.880 [2024-05-13 06:02:50.924727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.880 cpumask for '\''job0'\'' is too big 00:10:44.880 cpumask for '\''job1'\'' is too big 00:10:44.880 cpumask for '\''job2'\'' is too big 00:10:44.880 cpumask for '\''job3'\'' is too big 00:10:44.880 Running I/O for 2 seconds... 
00:10:44.880 00:10:44.880 Latency(us) 00:10:44.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.880 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.880 Malloc0 : 2.00 202638.05 197.89 0.00 0.00 1263.09 376.65 2527.65 00:10:44.880 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc1 : 2.00 202626.31 197.88 0.00 0.00 1263.08 371.29 2499.08 00:10:44.881 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc0 : 2.00 202618.93 197.87 0.00 0.00 1262.78 364.15 2113.51 00:10:44.881 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc1 : 2.00 202610.63 197.86 0.00 0.00 1262.67 346.30 2099.23 00:10:44.881 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc0 : 2.00 202600.11 197.85 0.00 0.00 1262.41 364.15 1706.52 00:10:44.881 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc1 : 2.00 202657.50 197.91 0.00 0.00 1261.93 344.52 1677.96 00:10:44.881 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc0 : 2.00 202647.69 197.90 0.00 0.00 1261.67 358.80 1478.03 00:10:44.881 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc1 : 2.00 202637.48 197.89 0.00 0.00 1261.54 282.04 1485.17 00:10:44.881 =================================================================================================================== 00:10:44.881 Total : 1621036.70 1583.04 0.00 0.00 1262.40 282.04 2527.65' 00:10:44.881 06:02:53 -- bdevperf/common.sh@32 -- # echo '[2024-05-13 06:02:50.437578] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:44.881 [2024-05-13 06:02:50.437917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:44.881 Using job config with 4 jobs 00:10:44.881 EAL: TSC is not safe to use in SMP mode 00:10:44.881 EAL: TSC is not invariant 00:10:44.881 [2024-05-13 06:02:50.852381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.881 [2024-05-13 06:02:50.924727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.881 cpumask for '\''job0'\'' is too big 00:10:44.881 cpumask for '\''job1'\'' is too big 00:10:44.881 cpumask for '\''job2'\'' is too big 00:10:44.881 cpumask for '\''job3'\'' is too big 00:10:44.881 Running I/O for 2 seconds... 
00:10:44.881 00:10:44.881 Latency(us) 00:10:44.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.881 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc0 : 2.00 202638.05 197.89 0.00 0.00 1263.09 376.65 2527.65 00:10:44.881 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc1 : 2.00 202626.31 197.88 0.00 0.00 1263.08 371.29 2499.08 00:10:44.881 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc0 : 2.00 202618.93 197.87 0.00 0.00 1262.78 364.15 2113.51 00:10:44.881 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc1 : 2.00 202610.63 197.86 0.00 0.00 1262.67 346.30 2099.23 00:10:44.881 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc0 : 2.00 202600.11 197.85 0.00 0.00 1262.41 364.15 1706.52 00:10:44.881 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc1 : 2.00 202657.50 197.91 0.00 0.00 1261.93 344.52 1677.96 00:10:44.881 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc0 : 2.00 202647.69 197.90 0.00 0.00 1261.67 358.80 1478.03 00:10:44.881 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:10:44.881 Malloc1 : 2.00 202637.48 197.89 0.00 0.00 1261.54 282.04 1485.17 00:10:44.881 =================================================================================================================== 00:10:44.881 Total : 1621036.70 1583.04 0.00 0.00 1262.40 282.04 2527.65' 00:10:44.881 06:02:53 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:10:44.881 06:02:53 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:10:44.881 06:02:53 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:10:44.881 06:02:53 -- bdevperf/test_config.sh@44 -- # cleanup 00:10:44.881 06:02:53 -- bdevperf/common.sh@36 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:10:44.881 06:02:53 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:44.881 00:10:44.881 real 0m11.042s 00:10:44.881 user 0m9.143s 00:10:44.881 sys 0m1.974s 00:10:44.881 06:02:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.881 06:02:53 -- common/autotest_common.sh@10 -- # set +x 00:10:44.881 ************************************ 00:10:44.881 END TEST bdevperf_config 00:10:44.881 ************************************ 00:10:44.881 06:02:53 -- spdk/autotest.sh@198 -- # uname -s 00:10:44.881 06:02:53 -- spdk/autotest.sh@198 -- # [[ FreeBSD == Linux ]] 00:10:44.881 06:02:53 -- spdk/autotest.sh@204 -- # uname -s 00:10:44.881 06:02:53 -- spdk/autotest.sh@204 -- # [[ FreeBSD == Linux ]] 00:10:44.881 06:02:53 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:10:44.881 06:02:53 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:44.881 06:02:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:44.881 06:02:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:44.881 06:02:53 -- common/autotest_common.sh@10 -- # set +x 00:10:44.881 ************************************ 00:10:44.881 START TEST blockdev_nvme 00:10:44.881 ************************************ 00:10:44.881 06:02:53 -- common/autotest_common.sh@1104 -- # 
/usr/home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:10:45.147 * Looking for test storage... 00:10:45.147 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/bdev 00:10:45.147 06:02:53 -- bdev/blockdev.sh@10 -- # source /usr/home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:45.147 06:02:53 -- bdev/nbd_common.sh@6 -- # set -e 00:10:45.147 06:02:53 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:45.147 06:02:53 -- bdev/blockdev.sh@13 -- # conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:45.147 06:02:53 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:45.147 06:02:53 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:45.147 06:02:53 -- bdev/blockdev.sh@18 -- # : 00:10:45.147 06:02:53 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:10:45.147 06:02:53 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:10:45.147 06:02:53 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:10:45.147 06:02:53 -- bdev/blockdev.sh@672 -- # uname -s 00:10:45.147 06:02:53 -- bdev/blockdev.sh@672 -- # '[' FreeBSD = Linux ']' 00:10:45.147 06:02:53 -- bdev/blockdev.sh@677 -- # PRE_RESERVED_MEM=2048 00:10:45.147 06:02:53 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:10:45.147 06:02:53 -- bdev/blockdev.sh@681 -- # crypto_device= 00:10:45.147 06:02:53 -- bdev/blockdev.sh@682 -- # dek= 00:10:45.147 06:02:53 -- bdev/blockdev.sh@683 -- # env_ctx= 00:10:45.147 06:02:53 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:10:45.147 06:02:53 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:10:45.147 06:02:53 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:10:45.147 06:02:53 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:10:45.147 06:02:53 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:10:45.147 06:02:53 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=53985 00:10:45.147 06:02:53 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:45.147 06:02:53 -- bdev/blockdev.sh@47 -- # waitforlisten 53985 00:10:45.147 06:02:53 -- common/autotest_common.sh@819 -- # '[' -z 53985 ']' 00:10:45.147 06:02:53 -- bdev/blockdev.sh@44 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:45.147 06:02:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.147 06:02:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:45.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.147 06:02:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.147 06:02:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:45.147 06:02:53 -- common/autotest_common.sh@10 -- # set +x 00:10:45.147 [2024-05-13 06:02:53.392167] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:45.147 [2024-05-13 06:02:53.392526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:45.727 EAL: TSC is not safe to use in SMP mode 00:10:45.727 EAL: TSC is not invariant 00:10:45.727 [2024-05-13 06:02:53.812863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.727 [2024-05-13 06:02:53.899232] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:45.727 [2024-05-13 06:02:53.899315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.987 06:02:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:45.987 06:02:54 -- common/autotest_common.sh@852 -- # return 0 00:10:45.987 06:02:54 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:10:45.987 06:02:54 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:10:45.987 06:02:54 -- bdev/blockdev.sh@79 -- # local json 00:10:45.987 06:02:54 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:10:45.987 06:02:54 -- bdev/blockdev.sh@80 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:46.246 06:02:54 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:10:46.246 06:02:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:46.246 06:02:54 -- common/autotest_common.sh@10 -- # set +x 00:10:46.246 [2024-05-13 06:02:54.351480] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:46.246 06:02:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:46.246 06:02:54 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:10:46.246 06:02:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:46.246 06:02:54 -- common/autotest_common.sh@10 -- # set +x 00:10:46.246 06:02:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:46.246 06:02:54 -- bdev/blockdev.sh@738 -- # cat 00:10:46.247 06:02:54 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:10:46.247 06:02:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:46.247 06:02:54 -- common/autotest_common.sh@10 -- # set +x 00:10:46.247 06:02:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:46.247 06:02:54 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:10:46.247 06:02:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:46.247 06:02:54 -- common/autotest_common.sh@10 -- # set +x 00:10:46.247 06:02:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:46.247 06:02:54 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:46.247 06:02:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:46.247 06:02:54 -- common/autotest_common.sh@10 -- # set +x 00:10:46.247 06:02:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:46.247 06:02:54 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:10:46.247 06:02:54 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:10:46.247 06:02:54 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:10:46.247 06:02:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:46.247 06:02:54 -- common/autotest_common.sh@10 -- # set +x 00:10:46.247 06:02:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:46.247 06:02:54 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:10:46.247 06:02:54 -- 
bdev/blockdev.sh@747 -- # jq -r .name 00:10:46.247 06:02:54 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "704ad493-10ee-11ef-ba60-3508ead7bdda"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "704ad493-10ee-11ef-ba60-3508ead7bdda",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:46.247 06:02:54 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:10:46.247 06:02:54 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:10:46.247 06:02:54 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:10:46.247 06:02:54 -- bdev/blockdev.sh@752 -- # killprocess 53985 00:10:46.247 06:02:54 -- common/autotest_common.sh@926 -- # '[' -z 53985 ']' 00:10:46.247 06:02:54 -- common/autotest_common.sh@930 -- # kill -0 53985 00:10:46.247 06:02:54 -- common/autotest_common.sh@931 -- # uname 00:10:46.247 06:02:54 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:10:46.247 06:02:54 -- common/autotest_common.sh@934 -- # ps -c -o command 53985 00:10:46.247 06:02:54 -- common/autotest_common.sh@934 -- # tail -1 00:10:46.247 06:02:54 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:10:46.247 06:02:54 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:10:46.247 killing process with pid 53985 00:10:46.247 06:02:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53985' 00:10:46.247 06:02:54 -- common/autotest_common.sh@945 -- # kill 53985 00:10:46.247 06:02:54 -- common/autotest_common.sh@950 -- # wait 53985 00:10:46.507 06:02:54 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:46.507 06:02:54 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:46.507 06:02:54 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:46.507 06:02:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:46.507 06:02:54 -- common/autotest_common.sh@10 -- # set +x 00:10:46.507 ************************************ 00:10:46.507 START TEST bdev_hello_world 00:10:46.507 ************************************ 00:10:46.507 06:02:54 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:46.507 [2024-05-13 06:02:54.755916] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 
23.11.0 initialization... 00:10:46.507 [2024-05-13 06:02:54.756271] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:47.076 EAL: TSC is not safe to use in SMP mode 00:10:47.076 EAL: TSC is not invariant 00:10:47.076 [2024-05-13 06:02:55.173098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.076 [2024-05-13 06:02:55.257052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.076 [2024-05-13 06:02:55.312530] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:47.076 [2024-05-13 06:02:55.381956] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:47.076 [2024-05-13 06:02:55.381992] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:47.076 [2024-05-13 06:02:55.382001] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:47.076 [2024-05-13 06:02:55.382561] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:47.076 [2024-05-13 06:02:55.382811] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:47.076 [2024-05-13 06:02:55.382835] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:47.076 [2024-05-13 06:02:55.383036] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:10:47.076 00:10:47.076 [2024-05-13 06:02:55.383074] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:47.336 00:10:47.336 real 0m0.775s 00:10:47.336 user 0m0.295s 00:10:47.336 sys 0m0.477s 00:10:47.336 06:02:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.336 06:02:55 -- common/autotest_common.sh@10 -- # set +x 00:10:47.336 ************************************ 00:10:47.336 END TEST bdev_hello_world 00:10:47.336 ************************************ 00:10:47.336 06:02:55 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:10:47.336 06:02:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:47.336 06:02:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:47.336 06:02:55 -- common/autotest_common.sh@10 -- # set +x 00:10:47.336 ************************************ 00:10:47.336 START TEST bdev_bounds 00:10:47.336 ************************************ 00:10:47.336 06:02:55 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:10:47.336 06:02:55 -- bdev/blockdev.sh@288 -- # bdevio_pid=54044 00:10:47.336 06:02:55 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:47.336 06:02:55 -- bdev/blockdev.sh@287 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:47.336 Process bdevio pid: 54044 00:10:47.336 06:02:55 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 54044' 00:10:47.336 06:02:55 -- bdev/blockdev.sh@291 -- # waitforlisten 54044 00:10:47.336 06:02:55 -- common/autotest_common.sh@819 -- # '[' -z 54044 ']' 00:10:47.336 06:02:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.336 06:02:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:47.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.336 06:02:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:47.336 06:02:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:47.336 06:02:55 -- common/autotest_common.sh@10 -- # set +x 00:10:47.336 [2024-05-13 06:02:55.591323] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:47.336 [2024-05-13 06:02:55.591617] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:47.905 EAL: TSC is not safe to use in SMP mode 00:10:47.905 EAL: TSC is not invariant 00:10:47.905 [2024-05-13 06:02:56.013677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:47.905 [2024-05-13 06:02:56.100482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.905 [2024-05-13 06:02:56.100304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.905 [2024-05-13 06:02:56.100483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.905 [2024-05-13 06:02:56.155231] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:48.475 06:02:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:48.475 06:02:56 -- common/autotest_common.sh@852 -- # return 0 00:10:48.475 06:02:56 -- bdev/blockdev.sh@292 -- # /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:48.475 I/O targets: 00:10:48.475 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:48.475 00:10:48.475 00:10:48.475 CUnit - A unit testing framework for C - Version 2.1-3 00:10:48.475 http://cunit.sourceforge.net/ 00:10:48.475 00:10:48.475 00:10:48.475 Suite: bdevio tests on: Nvme0n1 00:10:48.475 Test: blockdev write read block ...passed 00:10:48.475 Test: blockdev write zeroes read block ...passed 00:10:48.475 Test: blockdev write zeroes read no split ...passed 00:10:48.475 Test: blockdev write zeroes read split ...passed 00:10:48.475 Test: blockdev write zeroes read split partial ...passed 00:10:48.475 Test: blockdev reset ...[2024-05-13 06:02:56.582623] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:48.475 passed 00:10:48.475 Test: blockdev write read 8 blocks ...[2024-05-13 06:02:56.583608] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:48.475 passed 00:10:48.475 Test: blockdev write read size > 128k ...passed 00:10:48.475 Test: blockdev write read invalid size ...passed 00:10:48.475 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:48.475 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:48.475 Test: blockdev write read max offset ...passed 00:10:48.475 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:48.475 Test: blockdev writev readv 8 blocks ...passed 00:10:48.475 Test: blockdev writev readv 30 x 1block ...passed 00:10:48.475 Test: blockdev writev readv block ...passed 00:10:48.475 Test: blockdev writev readv size > 128k ...passed 00:10:48.475 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:48.475 Test: blockdev comparev and writev ...[2024-05-13 06:02:56.586887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x217947000 len:0x1000 00:10:48.475 [2024-05-13 06:02:56.586923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:48.475 passed 00:10:48.475 Test: blockdev nvme passthru rw ...passed 00:10:48.475 Test: blockdev nvme passthru vendor specific ...[2024-05-13 06:02:56.587255] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:48.475 passed 00:10:48.475 Test: blockdev nvme admin passthru ...[2024-05-13 06:02:56.587272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:48.475 passed 00:10:48.475 Test: blockdev copy ...passed 00:10:48.475 00:10:48.475 Run Summary: Type Total Ran Passed Failed Inactive 00:10:48.475 suites 1 1 n/a 0 0 00:10:48.475 tests 23 23 23 0 0 00:10:48.475 asserts 152 152 152 0 n/a 00:10:48.475 00:10:48.475 Elapsed time = 0.047 seconds 00:10:48.475 0 00:10:48.475 06:02:56 -- bdev/blockdev.sh@293 -- # killprocess 54044 00:10:48.475 06:02:56 -- common/autotest_common.sh@926 -- # '[' -z 54044 ']' 00:10:48.475 06:02:56 -- common/autotest_common.sh@930 -- # kill -0 54044 00:10:48.475 06:02:56 -- common/autotest_common.sh@931 -- # uname 00:10:48.475 06:02:56 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:10:48.475 06:02:56 -- common/autotest_common.sh@934 -- # tail -1 00:10:48.475 06:02:56 -- common/autotest_common.sh@934 -- # ps -c -o command 54044 00:10:48.475 06:02:56 -- common/autotest_common.sh@934 -- # process_name=bdevio 00:10:48.475 06:02:56 -- common/autotest_common.sh@936 -- # '[' bdevio = sudo ']' 00:10:48.475 killing process with pid 54044 00:10:48.475 06:02:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54044' 00:10:48.475 06:02:56 -- common/autotest_common.sh@945 -- # kill 54044 00:10:48.475 06:02:56 -- common/autotest_common.sh@950 -- # wait 54044 00:10:48.475 06:02:56 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:10:48.475 00:10:48.475 real 0m1.186s 00:10:48.475 user 0m2.228s 00:10:48.475 sys 0m0.562s 00:10:48.475 06:02:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.475 06:02:56 -- common/autotest_common.sh@10 -- # set +x 00:10:48.475 ************************************ 00:10:48.475 END TEST bdev_bounds 00:10:48.475 ************************************ 00:10:48.735 06:02:56 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
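The killprocess trace above takes the non-Linux branch: because uname reports FreeBSD, the process name is recovered with ps -c and tail -1 instead of /proc, and the helper refuses to signal anything named sudo so it never kills a privilege wrapper. A condensed sketch of that logic (the Linux /proc branch is an assumption; the FreeBSD branch mirrors the trace):

killprocess_sketch() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                          # still running?
    if [[ $(uname) == Linux ]]; then
        process_name=$(< "/proc/$pid/comm")             # assumed Linux path
    else
        # FreeBSD: -c prints bare command names; tail -1 skips the header row
        process_name=$(ps -c -o command "$pid" | tail -1)
    fi
    [[ $process_name == sudo ]] && return 1             # never kill the wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}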
00:10:48.735 06:02:56 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:48.735 06:02:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:48.735 06:02:56 -- common/autotest_common.sh@10 -- # set +x 00:10:48.735 ************************************ 00:10:48.735 START TEST bdev_nbd 00:10:48.735 ************************************ 00:10:48.735 06:02:56 -- common/autotest_common.sh@1104 -- # nbd_function_test /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:10:48.735 06:02:56 -- bdev/blockdev.sh@298 -- # uname -s 00:10:48.735 06:02:56 -- bdev/blockdev.sh@298 -- # [[ FreeBSD == Linux ]] 00:10:48.735 06:02:56 -- bdev/blockdev.sh@298 -- # return 0 00:10:48.735 00:10:48.735 real 0m0.006s 00:10:48.735 user 0m0.004s 00:10:48.735 sys 0m0.001s 00:10:48.735 06:02:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.735 06:02:56 -- common/autotest_common.sh@10 -- # set +x 00:10:48.735 ************************************ 00:10:48.735 END TEST bdev_nbd 00:10:48.735 ************************************ 00:10:48.735 06:02:56 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:10:48.735 06:02:56 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:10:48.735 skipping fio tests on NVMe due to multi-ns failures. 00:10:48.735 06:02:56 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:48.735 06:02:56 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:48.735 06:02:56 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:48.735 06:02:56 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:48.735 06:02:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:48.735 06:02:56 -- common/autotest_common.sh@10 -- # set +x 00:10:48.735 ************************************ 00:10:48.735 START TEST bdev_verify 00:10:48.735 ************************************ 00:10:48.736 06:02:56 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:48.736 [2024-05-13 06:02:56.897455] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:48.736 [2024-05-13 06:02:56.897830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:10:48.995 EAL: TSC is not safe to use in SMP mode 00:10:48.995 EAL: TSC is not invariant 00:10:49.255 [2024-05-13 06:02:57.320845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:49.255 [2024-05-13 06:02:57.407239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.255 [2024-05-13 06:02:57.407239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.255 [2024-05-13 06:02:57.462179] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:10:49.255 Running I/O for 5 seconds... 
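bdev_verify drives the bdevperf example against the same bdev.json used by the earlier suites. The flags in the trace above decode roughly as follows; the -C description is recalled from the tool's help text and should be treated as an assumption.

# -q 128    : 128 outstanding I/Os per job
# -o 4096   : 4 KiB I/O size
# -w verify : write a pattern, read it back, and compare
# -t 5      : run for 5 seconds
# -m 0x3    : two reactors (cores 0 and 1)
# -C        : let every core submit I/O to each bdev
build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3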
00:10:54.524 00:10:54.524 Latency(us) 00:10:54.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:54.524 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:54.524 Verification LBA range: start 0x0 length 0xa0000 00:10:54.524 Nvme0n1 : 5.00 32197.04 125.77 0.00 0.00 3968.22 167.80 31988.28 00:10:54.524 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:54.524 Verification LBA range: start 0xa0000 length 0xa0000 00:10:54.524 Nvme0n1 : 5.00 33879.58 132.34 0.00 0.00 3771.16 207.07 32902.23 00:10:54.524 =================================================================================================================== 00:10:54.524 Total : 66076.62 258.11 0.00 0.00 3867.17 167.80 32902.23 00:11:26.602 00:11:26.602 real 0m35.261s 00:11:26.602 user 1m9.394s 00:11:26.602 sys 0m0.487s 00:11:26.602 06:03:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.602 06:03:32 -- common/autotest_common.sh@10 -- # set +x 00:11:26.602 ************************************ 00:11:26.602 END TEST bdev_verify 00:11:26.602 ************************************ 00:11:26.602 06:03:32 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:26.602 06:03:32 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:11:26.602 06:03:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:26.602 06:03:32 -- common/autotest_common.sh@10 -- # set +x 00:11:26.602 ************************************ 00:11:26.602 START TEST bdev_verify_big_io 00:11:26.602 ************************************ 00:11:26.602 06:03:32 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:11:26.602 [2024-05-13 06:03:32.218221] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:26.602 [2024-05-13 06:03:32.218585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:26.602 EAL: TSC is not safe to use in SMP mode 00:11:26.602 EAL: TSC is not invariant 00:11:26.602 [2024-05-13 06:03:32.635027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:26.602 [2024-05-13 06:03:32.707924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.602 [2024-05-13 06:03:32.707924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.602 [2024-05-13 06:03:32.763052] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:26.602 Running I/O for 5 seconds... 
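The Total row of the 4 KiB verify table above is just the two per-core jobs combined: IOPS add, and average latency is the IOPS-weighted mean. A quick check of the logged numbers:

awk 'BEGIN {
    i1 = 32197.04; l1 = 3968.22                  # core 0x1 job
    i2 = 33879.58; l2 = 3771.16                  # core 0x2 job
    printf "total IOPS : %.2f\n", i1 + i2        # 66076.62, as logged
    printf "avg lat us : %.2f\n", (i1*l1 + i2*l2) / (i1 + i2)  # ~3867.17
}'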
00:11:29.879 00:11:29.879 Latency(us) 00:11:29.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:29.879 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:11:29.879 Verification LBA range: start 0x0 length 0xa000 00:11:29.879 Nvme0n1 : 5.01 18166.85 1135.43 0.00 0.00 7003.15 78.10 28560.96 00:11:29.879 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:11:29.879 Verification LBA range: start 0xa000 length 0xa000 00:11:29.879 Nvme0n1 : 5.01 17993.11 1124.57 0.00 0.00 7070.86 64.26 29474.91 00:11:29.879 =================================================================================================================== 00:11:29.879 Total : 36159.96 2260.00 0.00 0.00 7036.85 64.26 29474.91 00:11:34.074 00:11:34.074 real 0m9.728s 00:11:34.074 user 0m18.378s 00:11:34.074 sys 0m0.481s 00:11:34.074 06:03:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.075 06:03:41 -- common/autotest_common.sh@10 -- # set +x 00:11:34.075 ************************************ 00:11:34.075 END TEST bdev_verify_big_io 00:11:34.075 ************************************ 00:11:34.075 06:03:41 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:34.075 06:03:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:34.075 06:03:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:34.075 06:03:41 -- common/autotest_common.sh@10 -- # set +x 00:11:34.075 ************************************ 00:11:34.075 START TEST bdev_write_zeroes 00:11:34.075 ************************************ 00:11:34.075 06:03:41 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:34.075 [2024-05-13 06:03:42.001058] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:34.075 [2024-05-13 06:03:42.001418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:34.333 EAL: TSC is not safe to use in SMP mode 00:11:34.333 EAL: TSC is not invariant 00:11:34.333 [2024-05-13 06:03:42.420164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.333 [2024-05-13 06:03:42.503388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.333 [2024-05-13 06:03:42.558770] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:34.333 Running I/O for 1 seconds... 
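bdev_write_zeroes swaps the workload: bdevperf issues Write Zeroes commands for one second on a single reactor, which is why the table that follows shows only the Core Mask 0x1 job. An equivalent standalone invocation, as a sketch:

build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1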
00:11:35.706 00:11:35.706 Latency(us) 00:11:35.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:35.706 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:35.706 Nvme0n1 : 1.00 70931.01 277.07 0.00 0.00 1802.75 339.16 15194.43 00:11:35.706 =================================================================================================================== 00:11:35.706 Total : 70931.01 277.07 0.00 0.00 1802.75 339.16 15194.43 00:11:35.706 00:11:35.706 real 0m1.777s 00:11:35.706 user 0m1.317s 00:11:35.706 sys 0m0.460s 00:11:35.706 06:03:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.706 06:03:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.706 ************************************ 00:11:35.706 END TEST bdev_write_zeroes 00:11:35.706 ************************************ 00:11:35.706 06:03:43 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:35.706 06:03:43 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:35.706 06:03:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:35.706 06:03:43 -- common/autotest_common.sh@10 -- # set +x 00:11:35.706 ************************************ 00:11:35.706 START TEST bdev_json_nonenclosed 00:11:35.706 ************************************ 00:11:35.706 06:03:43 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:35.706 [2024-05-13 06:03:43.835270] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:35.706 [2024-05-13 06:03:43.835651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:36.275 EAL: TSC is not safe to use in SMP mode 00:11:36.275 EAL: TSC is not invariant 00:11:36.275 [2024-05-13 06:03:44.571774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.534 [2024-05-13 06:03:44.663357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.534 [2024-05-13 06:03:44.663418] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
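bdev_json_nonenclosed feeds the app a config whose top level is not wrapped in an object, which spdk_subsystem_init_from_json_config rejects with the message above. The fixture's actual contents are not in the log; a hedged illustration of what fails and what passes that check:

# Rejected: top-level value is not enclosed in {}
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF

# Accepted shape: an object carrying a "subsystems" array
cat > /tmp/ok.json <<'EOF'
{ "subsystems": [] }
EOF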
00:11:36.534 [2024-05-13 06:03:44.663426] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:36.534 00:11:36.534 real 0m0.928s 00:11:36.534 user 0m0.146s 00:11:36.534 sys 0m0.780s 00:11:36.534 06:03:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:36.534 06:03:44 -- common/autotest_common.sh@10 -- # set +x 00:11:36.534 ************************************ 00:11:36.534 END TEST bdev_json_nonenclosed 00:11:36.534 ************************************ 00:11:36.534 06:03:44 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:36.534 06:03:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:36.534 06:03:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:36.534 06:03:44 -- common/autotest_common.sh@10 -- # set +x 00:11:36.534 ************************************ 00:11:36.534 START TEST bdev_json_nonarray 00:11:36.534 ************************************ 00:11:36.534 06:03:44 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /usr/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:36.534 [2024-05-13 06:03:44.820675] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:36.534 [2024-05-13 06:03:44.821017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:37.472 EAL: TSC is not safe to use in SMP mode 00:11:37.472 EAL: TSC is not invariant 00:11:37.472 [2024-05-13 06:03:45.556818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.472 [2024-05-13 06:03:45.648564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.472 [2024-05-13 06:03:45.648625] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
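bdev_json_nonarray is the sibling negative test: the config parses as an object, but its "subsystems" key maps to something other than an array, tripping the second validation branch above. Again the real fixture is not shown in the log, so the contents below are assumed:

# Rejected with: Invalid JSON configuration: 'subsystems' should be an array.
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev" } }
EOF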
00:11:37.472 [2024-05-13 06:03:45.648633] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:37.472 00:11:37.472 real 0m0.928s 00:11:37.472 user 0m0.148s 00:11:37.472 sys 0m0.779s 00:11:37.472 06:03:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.472 06:03:45 -- common/autotest_common.sh@10 -- # set +x 00:11:37.472 ************************************ 00:11:37.472 END TEST bdev_json_nonarray 00:11:37.472 ************************************ 00:11:37.472 06:03:45 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:11:37.472 06:03:45 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:11:37.472 06:03:45 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:11:37.472 06:03:45 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:11:37.472 06:03:45 -- bdev/blockdev.sh@809 -- # cleanup 00:11:37.472 06:03:45 -- bdev/blockdev.sh@21 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:37.731 06:03:45 -- bdev/blockdev.sh@22 -- # rm -f /usr/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:37.731 06:03:45 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:11:37.731 06:03:45 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:11:37.731 06:03:45 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:11:37.731 06:03:45 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:11:37.731 00:11:37.731 real 0m52.612s 00:11:37.731 user 1m33.500s 00:11:37.731 sys 0m4.976s 00:11:37.731 06:03:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.731 06:03:45 -- common/autotest_common.sh@10 -- # set +x 00:11:37.731 ************************************ 00:11:37.731 END TEST blockdev_nvme 00:11:37.731 ************************************ 00:11:37.731 06:03:45 -- spdk/autotest.sh@219 -- # uname -s 00:11:37.731 06:03:45 -- spdk/autotest.sh@219 -- # [[ FreeBSD == Linux ]] 00:11:37.731 06:03:45 -- spdk/autotest.sh@222 -- # run_test nvme /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:37.731 06:03:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:37.731 06:03:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:37.731 06:03:45 -- common/autotest_common.sh@10 -- # set +x 00:11:37.731 ************************************ 00:11:37.731 START TEST nvme 00:11:37.731 ************************************ 00:11:37.731 06:03:45 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:37.731 * Looking for test storage... 
00:11:37.731 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:11:37.731 06:03:46 -- nvme/nvme.sh@77 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:37.992 hw.nic_uio.bdfs="0:6:0" 00:11:37.992 06:03:46 -- nvme/nvme.sh@79 -- # uname 00:11:37.992 06:03:46 -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:11:37.992 06:03:46 -- nvme/nvme.sh@84 -- # run_test nvme_reset /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:37.992 06:03:46 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:11:37.992 06:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:37.992 06:03:46 -- common/autotest_common.sh@10 -- # set +x 00:11:37.992 ************************************ 00:11:37.992 START TEST nvme_reset 00:11:37.992 ************************************ 00:11:37.992 06:03:46 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:38.560 EAL: TSC is not safe to use in SMP mode 00:11:38.560 EAL: TSC is not invariant 00:11:38.560 [2024-05-13 06:03:46.687639] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:38.560 Initializing NVMe Controllers 00:11:38.560 Skipping QEMU NVMe SSD at 0000:00:06.0 00:11:38.560 No NVMe controller found, /usr/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:38.560 00:11:38.560 real 0m0.478s 00:11:38.560 user 0m0.008s 00:11:38.560 sys 0m0.472s 00:11:38.560 06:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.560 06:03:46 -- common/autotest_common.sh@10 -- # set +x 00:11:38.560 ************************************ 00:11:38.560 END TEST nvme_reset 00:11:38.560 ************************************ 00:11:38.561 06:03:46 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:38.561 06:03:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:38.561 06:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:38.561 06:03:46 -- common/autotest_common.sh@10 -- # set +x 00:11:38.561 ************************************ 00:11:38.561 START TEST nvme_identify 00:11:38.561 ************************************ 00:11:38.561 06:03:46 -- common/autotest_common.sh@1104 -- # nvme_identify 00:11:38.561 06:03:46 -- nvme/nvme.sh@12 -- # bdfs=() 00:11:38.561 06:03:46 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:38.561 06:03:46 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:38.561 06:03:46 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:38.561 06:03:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:38.561 06:03:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:38.561 06:03:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:38.561 06:03:46 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:38.561 06:03:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:38.561 06:03:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:38.561 06:03:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:38.561 06:03:46 -- nvme/nvme.sh@14 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:39.130 EAL: TSC is not safe to use in SMP mode 00:11:39.130 EAL: TSC is not invariant 00:11:39.130 [2024-05-13 06:03:47.286013] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:39.130 
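get_nvme_bdfs above assembles the bdf list by rendering a config with gen_nvme.sh and extracting every transport address with jq. Run standalone from the repo root, the traced pipeline is simply:

# Prints one PCI address per line, e.g. 0000:00:06.0
scripts/gen_nvme.sh | jq -r '.config[].params.traddr'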
===================================================== 00:11:39.130 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:39.130 ===================================================== 00:11:39.130 Controller Capabilities/Features 00:11:39.130 ================================ 00:11:39.130 Vendor ID: 1b36 00:11:39.130 Subsystem Vendor ID: 1af4 00:11:39.130 Serial Number: 12340 00:11:39.130 Model Number: QEMU NVMe Ctrl 00:11:39.130 Firmware Version: 8.0.0 00:11:39.130 Recommended Arb Burst: 6 00:11:39.130 IEEE OUI Identifier: 00 54 52 00:11:39.130 Multi-path I/O 00:11:39.130 May have multiple subsystem ports: No 00:11:39.130 May have multiple controllers: No 00:11:39.130 Associated with SR-IOV VF: No 00:11:39.130 Max Data Transfer Size: 524288 00:11:39.130 Max Number of Namespaces: 256 00:11:39.130 Max Number of I/O Queues: 64 00:11:39.130 NVMe Specification Version (VS): 1.4 00:11:39.130 NVMe Specification Version (Identify): 1.4 00:11:39.130 Maximum Queue Entries: 2048 00:11:39.130 Contiguous Queues Required: Yes 00:11:39.130 Arbitration Mechanisms Supported 00:11:39.130 Weighted Round Robin: Not Supported 00:11:39.130 Vendor Specific: Not Supported 00:11:39.130 Reset Timeout: 7500 ms 00:11:39.130 Doorbell Stride: 4 bytes 00:11:39.130 NVM Subsystem Reset: Not Supported 00:11:39.130 Command Sets Supported 00:11:39.130 NVM Command Set: Supported 00:11:39.130 Boot Partition: Not Supported 00:11:39.130 Memory Page Size Minimum: 4096 bytes 00:11:39.130 Memory Page Size Maximum: 65536 bytes 00:11:39.130 Persistent Memory Region: Not Supported 00:11:39.130 Optional Asynchronous Events Supported 00:11:39.130 Namespace Attribute Notices: Supported 00:11:39.130 Firmware Activation Notices: Not Supported 00:11:39.130 ANA Change Notices: Not Supported 00:11:39.130 PLE Aggregate Log Change Notices: Not Supported 00:11:39.130 LBA Status Info Alert Notices: Not Supported 00:11:39.130 EGE Aggregate Log Change Notices: Not Supported 00:11:39.130 Normal NVM Subsystem Shutdown event: Not Supported 00:11:39.130 Zone Descriptor Change Notices: Not Supported 00:11:39.130 Discovery Log Change Notices: Not Supported 00:11:39.130 Controller Attributes 00:11:39.130 128-bit Host Identifier: Not Supported 00:11:39.130 Non-Operational Permissive Mode: Not Supported 00:11:39.130 NVM Sets: Not Supported 00:11:39.130 Read Recovery Levels: Not Supported 00:11:39.130 Endurance Groups: Not Supported 00:11:39.130 Predictable Latency Mode: Not Supported 00:11:39.130 Traffic Based Keep ALive: Not Supported 00:11:39.130 Namespace Granularity: Not Supported 00:11:39.130 SQ Associations: Not Supported 00:11:39.130 UUID List: Not Supported 00:11:39.130 Multi-Domain Subsystem: Not Supported 00:11:39.130 Fixed Capacity Management: Not Supported 00:11:39.130 Variable Capacity Management: Not Supported 00:11:39.130 Delete Endurance Group: Not Supported 00:11:39.130 Delete NVM Set: Not Supported 00:11:39.130 Extended LBA Formats Supported: Supported 00:11:39.130 Flexible Data Placement Supported: Not Supported 00:11:39.130 00:11:39.130 Controller Memory Buffer Support 00:11:39.130 ================================ 00:11:39.130 Supported: No 00:11:39.130 00:11:39.130 Persistent Memory Region Support 00:11:39.130 ================================ 00:11:39.130 Supported: No 00:11:39.130 00:11:39.130 Admin Command Set Attributes 00:11:39.130 ============================ 00:11:39.130 Security Send/Receive: Not Supported 00:11:39.130 Format NVM: Supported 00:11:39.130 Firmware Activate/Download: Not Supported 00:11:39.130 Namespace Management: 
Supported 00:11:39.130 Device Self-Test: Not Supported 00:11:39.130 Directives: Supported 00:11:39.130 NVMe-MI: Not Supported 00:11:39.130 Virtualization Management: Not Supported 00:11:39.130 Doorbell Buffer Config: Supported 00:11:39.130 Get LBA Status Capability: Not Supported 00:11:39.130 Command & Feature Lockdown Capability: Not Supported 00:11:39.130 Abort Command Limit: 4 00:11:39.130 Async Event Request Limit: 4 00:11:39.130 Number of Firmware Slots: N/A 00:11:39.130 Firmware Slot 1 Read-Only: N/A 00:11:39.130 Firmware Activation Without Reset: N/A 00:11:39.130 Multiple Update Detection Support: N/A 00:11:39.130 Firmware Update Granularity: No Information Provided 00:11:39.130 Per-Namespace SMART Log: Yes 00:11:39.130 Asymmetric Namespace Access Log Page: Not Supported 00:11:39.130 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:39.130 Command Effects Log Page: Supported 00:11:39.130 Get Log Page Extended Data: Supported 00:11:39.130 Telemetry Log Pages: Not Supported 00:11:39.130 Persistent Event Log Pages: Not Supported 00:11:39.130 Supported Log Pages Log Page: May Support 00:11:39.130 Commands Supported & Effects Log Page: Not Supported 00:11:39.130 Feature Identifiers & Effects Log Page:May Support 00:11:39.130 NVMe-MI Commands & Effects Log Page: May Support 00:11:39.130 Data Area 4 for Telemetry Log: Not Supported 00:11:39.130 Error Log Page Entries Supported: 1 00:11:39.131 Keep Alive: Not Supported 00:11:39.131 00:11:39.131 NVM Command Set Attributes 00:11:39.131 ========================== 00:11:39.131 Submission Queue Entry Size 00:11:39.131 Max: 64 00:11:39.131 Min: 64 00:11:39.131 Completion Queue Entry Size 00:11:39.131 Max: 16 00:11:39.131 Min: 16 00:11:39.131 Number of Namespaces: 256 00:11:39.131 Compare Command: Supported 00:11:39.131 Write Uncorrectable Command: Not Supported 00:11:39.131 Dataset Management Command: Supported 00:11:39.131 Write Zeroes Command: Supported 00:11:39.131 Set Features Save Field: Supported 00:11:39.131 Reservations: Not Supported 00:11:39.131 Timestamp: Supported 00:11:39.131 Copy: Supported 00:11:39.131 Volatile Write Cache: Present 00:11:39.131 Atomic Write Unit (Normal): 1 00:11:39.131 Atomic Write Unit (PFail): 1 00:11:39.131 Atomic Compare & Write Unit: 1 00:11:39.131 Fused Compare & Write: Not Supported 00:11:39.131 Scatter-Gather List 00:11:39.131 SGL Command Set: Supported 00:11:39.131 SGL Keyed: Not Supported 00:11:39.131 SGL Bit Bucket Descriptor: Not Supported 00:11:39.131 SGL Metadata Pointer: Not Supported 00:11:39.131 Oversized SGL: Not Supported 00:11:39.131 SGL Metadata Address: Not Supported 00:11:39.131 SGL Offset: Not Supported 00:11:39.131 Transport SGL Data Block: Not Supported 00:11:39.131 Replay Protected Memory Block: Not Supported 00:11:39.131 00:11:39.131 Firmware Slot Information 00:11:39.131 ========================= 00:11:39.131 Active slot: 1 00:11:39.131 Slot 1 Firmware Revision: 1.0 00:11:39.131 00:11:39.131 00:11:39.131 Commands Supported and Effects 00:11:39.131 ============================== 00:11:39.131 Admin Commands 00:11:39.131 -------------- 00:11:39.131 Delete I/O Submission Queue (00h): Supported 00:11:39.131 Create I/O Submission Queue (01h): Supported 00:11:39.131 Get Log Page (02h): Supported 00:11:39.131 Delete I/O Completion Queue (04h): Supported 00:11:39.131 Create I/O Completion Queue (05h): Supported 00:11:39.131 Identify (06h): Supported 00:11:39.131 Abort (08h): Supported 00:11:39.131 Set Features (09h): Supported 00:11:39.131 Get Features (0Ah): Supported 00:11:39.131 Asynchronous 
Event Request (0Ch): Supported 00:11:39.131 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:39.131 Directive Send (19h): Supported 00:11:39.131 Directive Receive (1Ah): Supported 00:11:39.131 Virtualization Management (1Ch): Supported 00:11:39.131 Doorbell Buffer Config (7Ch): Supported 00:11:39.131 Format NVM (80h): Supported LBA-Change 00:11:39.131 I/O Commands 00:11:39.131 ------------ 00:11:39.131 Flush (00h): Supported LBA-Change 00:11:39.131 Write (01h): Supported LBA-Change 00:11:39.131 Read (02h): Supported 00:11:39.131 Compare (05h): Supported 00:11:39.131 Write Zeroes (08h): Supported LBA-Change 00:11:39.131 Dataset Management (09h): Supported LBA-Change 00:11:39.131 Unknown (0Ch): Supported 00:11:39.131 Unknown (12h): Supported 00:11:39.131 Copy (19h): Supported LBA-Change 00:11:39.131 Unknown (1Dh): Supported LBA-Change 00:11:39.131 00:11:39.131 Error Log 00:11:39.131 ========= 00:11:39.131 00:11:39.131 Arbitration 00:11:39.131 =========== 00:11:39.131 Arbitration Burst: no limit 00:11:39.131 00:11:39.131 Power Management 00:11:39.131 ================ 00:11:39.131 Number of Power States: 1 00:11:39.131 Current Power State: Power State #0 00:11:39.131 Power State #0: 00:11:39.131 Max Power: 25.00 W 00:11:39.131 Non-Operational State: Operational 00:11:39.131 Entry Latency: 16 microseconds 00:11:39.131 Exit Latency: 4 microseconds 00:11:39.131 Relative Read Throughput: 0 00:11:39.131 Relative Read Latency: 0 00:11:39.131 Relative Write Throughput: 0 00:11:39.131 Relative Write Latency: 0 00:11:39.131 Idle Power: Not Reported 00:11:39.131 Active Power: Not Reported 00:11:39.131 Non-Operational Permissive Mode: Not Supported 00:11:39.131 00:11:39.131 Health Information 00:11:39.131 ================== 00:11:39.131 Critical Warnings: 00:11:39.131 Available Spare Space: OK 00:11:39.131 Temperature: OK 00:11:39.131 Device Reliability: OK 00:11:39.131 Read Only: No 00:11:39.131 Volatile Memory Backup: OK 00:11:39.131 Current Temperature: 323 Kelvin (50 Celsius) 00:11:39.131 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:39.131 Available Spare: 0% 00:11:39.131 Available Spare Threshold: 0% 00:11:39.131 Life Percentage Used: 0% 00:11:39.131 Data Units Read: 25835 00:11:39.131 Data Units Written: 13004 00:11:39.131 Host Read Commands: 511847 00:11:39.131 Host Write Commands: 256856 00:11:39.131 Controller Busy Time: 0 minutes 00:11:39.131 Power Cycles: 0 00:11:39.131 Power On Hours: 0 hours 00:11:39.131 Unsafe Shutdowns: 0 00:11:39.131 Unrecoverable Media Errors: 0 00:11:39.131 Lifetime Error Log Entries: 0 00:11:39.131 Warning Temperature Time: 0 minutes 00:11:39.131 Critical Temperature Time: 0 minutes 00:11:39.131 00:11:39.131 Number of Queues 00:11:39.131 ================ 00:11:39.131 Number of I/O Submission Queues: 64 00:11:39.131 Number of I/O Completion Queues: 64 00:11:39.131 00:11:39.131 ZNS Specific Controller Data 00:11:39.131 ============================ 00:11:39.131 Zone Append Size Limit: 0 00:11:39.131 00:11:39.131 00:11:39.131 Active Namespaces 00:11:39.131 ================= 00:11:39.131 Namespace ID:1 00:11:39.131 Error Recovery Timeout: Unlimited 00:11:39.131 Command Set Identifier: NVM (00h) 00:11:39.131 Deallocate: Supported 00:11:39.131 Deallocated/Unwritten Error: Supported 00:11:39.131 Deallocated Read Value: All 0x00 00:11:39.131 Deallocate in Write Zeroes: Not Supported 00:11:39.131 Deallocated Guard Field: 0xFFFF 00:11:39.131 Flush: Supported 00:11:39.131 Reservation: Not Supported 00:11:39.131 Namespace Sharing Capabilities: Private 
00:11:39.131 Size (in LBAs): 1310720 (5GiB) 00:11:39.131 Capacity (in LBAs): 1310720 (5GiB) 00:11:39.131 Utilization (in LBAs): 1310720 (5GiB) 00:11:39.131 Thin Provisioning: Not Supported 00:11:39.131 Per-NS Atomic Units: No 00:11:39.131 Maximum Single Source Range Length: 128 00:11:39.131 Maximum Copy Length: 128 00:11:39.131 Maximum Source Range Count: 128 00:11:39.131 NGUID/EUI64 Never Reused: No 00:11:39.131 Namespace Write Protected: No 00:11:39.131 Number of LBA Formats: 8 00:11:39.131 Current LBA Format: LBA Format #04 00:11:39.131 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:39.131 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:39.131 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:39.131 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:39.131 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:39.131 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:39.131 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:39.131 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:39.131 00:11:39.131 06:03:47 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:39.131 06:03:47 -- nvme/nvme.sh@16 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:11:39.698 EAL: TSC is not safe to use in SMP mode 00:11:39.698 EAL: TSC is not invariant 00:11:39.698 [2024-05-13 06:03:47.753817] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:39.698 ===================================================== 00:11:39.698 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:39.698 ===================================================== 00:11:39.698 Controller Capabilities/Features 00:11:39.698 ================================ 00:11:39.698 Vendor ID: 1b36 00:11:39.698 Subsystem Vendor ID: 1af4 00:11:39.698 Serial Number: 12340 00:11:39.698 Model Number: QEMU NVMe Ctrl 00:11:39.698 Firmware Version: 8.0.0 00:11:39.698 Recommended Arb Burst: 6 00:11:39.698 IEEE OUI Identifier: 00 54 52 00:11:39.698 Multi-path I/O 00:11:39.698 May have multiple subsystem ports: No 00:11:39.698 May have multiple controllers: No 00:11:39.698 Associated with SR-IOV VF: No 00:11:39.698 Max Data Transfer Size: 524288 00:11:39.698 Max Number of Namespaces: 256 00:11:39.698 Max Number of I/O Queues: 64 00:11:39.698 NVMe Specification Version (VS): 1.4 00:11:39.698 NVMe Specification Version (Identify): 1.4 00:11:39.698 Maximum Queue Entries: 2048 00:11:39.698 Contiguous Queues Required: Yes 00:11:39.698 Arbitration Mechanisms Supported 00:11:39.698 Weighted Round Robin: Not Supported 00:11:39.698 Vendor Specific: Not Supported 00:11:39.698 Reset Timeout: 7500 ms 00:11:39.698 Doorbell Stride: 4 bytes 00:11:39.698 NVM Subsystem Reset: Not Supported 00:11:39.698 Command Sets Supported 00:11:39.698 NVM Command Set: Supported 00:11:39.698 Boot Partition: Not Supported 00:11:39.698 Memory Page Size Minimum: 4096 bytes 00:11:39.698 Memory Page Size Maximum: 65536 bytes 00:11:39.698 Persistent Memory Region: Not Supported 00:11:39.698 Optional Asynchronous Events Supported 00:11:39.698 Namespace Attribute Notices: Supported 00:11:39.698 Firmware Activation Notices: Not Supported 00:11:39.698 ANA Change Notices: Not Supported 00:11:39.698 PLE Aggregate Log Change Notices: Not Supported 00:11:39.698 LBA Status Info Alert Notices: Not Supported 00:11:39.698 EGE Aggregate Log Change Notices: Not Supported 00:11:39.698 Normal NVM Subsystem Shutdown event: Not Supported 00:11:39.698 Zone Descriptor Change Notices: 
Not Supported 00:11:39.698 Discovery Log Change Notices: Not Supported 00:11:39.698 Controller Attributes 00:11:39.698 128-bit Host Identifier: Not Supported 00:11:39.698 Non-Operational Permissive Mode: Not Supported 00:11:39.698 NVM Sets: Not Supported 00:11:39.698 Read Recovery Levels: Not Supported 00:11:39.698 Endurance Groups: Not Supported 00:11:39.698 Predictable Latency Mode: Not Supported 00:11:39.698 Traffic Based Keep ALive: Not Supported 00:11:39.698 Namespace Granularity: Not Supported 00:11:39.698 SQ Associations: Not Supported 00:11:39.698 UUID List: Not Supported 00:11:39.698 Multi-Domain Subsystem: Not Supported 00:11:39.698 Fixed Capacity Management: Not Supported 00:11:39.698 Variable Capacity Management: Not Supported 00:11:39.698 Delete Endurance Group: Not Supported 00:11:39.698 Delete NVM Set: Not Supported 00:11:39.698 Extended LBA Formats Supported: Supported 00:11:39.698 Flexible Data Placement Supported: Not Supported 00:11:39.698 00:11:39.698 Controller Memory Buffer Support 00:11:39.698 ================================ 00:11:39.698 Supported: No 00:11:39.698 00:11:39.698 Persistent Memory Region Support 00:11:39.698 ================================ 00:11:39.698 Supported: No 00:11:39.698 00:11:39.698 Admin Command Set Attributes 00:11:39.698 ============================ 00:11:39.698 Security Send/Receive: Not Supported 00:11:39.698 Format NVM: Supported 00:11:39.698 Firmware Activate/Download: Not Supported 00:11:39.698 Namespace Management: Supported 00:11:39.698 Device Self-Test: Not Supported 00:11:39.698 Directives: Supported 00:11:39.698 NVMe-MI: Not Supported 00:11:39.698 Virtualization Management: Not Supported 00:11:39.698 Doorbell Buffer Config: Supported 00:11:39.698 Get LBA Status Capability: Not Supported 00:11:39.698 Command & Feature Lockdown Capability: Not Supported 00:11:39.698 Abort Command Limit: 4 00:11:39.698 Async Event Request Limit: 4 00:11:39.698 Number of Firmware Slots: N/A 00:11:39.698 Firmware Slot 1 Read-Only: N/A 00:11:39.698 Firmware Activation Without Reset: N/A 00:11:39.698 Multiple Update Detection Support: N/A 00:11:39.698 Firmware Update Granularity: No Information Provided 00:11:39.698 Per-Namespace SMART Log: Yes 00:11:39.698 Asymmetric Namespace Access Log Page: Not Supported 00:11:39.698 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:39.699 Command Effects Log Page: Supported 00:11:39.699 Get Log Page Extended Data: Supported 00:11:39.699 Telemetry Log Pages: Not Supported 00:11:39.699 Persistent Event Log Pages: Not Supported 00:11:39.699 Supported Log Pages Log Page: May Support 00:11:39.699 Commands Supported & Effects Log Page: Not Supported 00:11:39.699 Feature Identifiers & Effects Log Page:May Support 00:11:39.699 NVMe-MI Commands & Effects Log Page: May Support 00:11:39.699 Data Area 4 for Telemetry Log: Not Supported 00:11:39.699 Error Log Page Entries Supported: 1 00:11:39.699 Keep Alive: Not Supported 00:11:39.699 00:11:39.699 NVM Command Set Attributes 00:11:39.699 ========================== 00:11:39.699 Submission Queue Entry Size 00:11:39.699 Max: 64 00:11:39.699 Min: 64 00:11:39.699 Completion Queue Entry Size 00:11:39.699 Max: 16 00:11:39.699 Min: 16 00:11:39.699 Number of Namespaces: 256 00:11:39.699 Compare Command: Supported 00:11:39.699 Write Uncorrectable Command: Not Supported 00:11:39.699 Dataset Management Command: Supported 00:11:39.699 Write Zeroes Command: Supported 00:11:39.699 Set Features Save Field: Supported 00:11:39.699 Reservations: Not Supported 00:11:39.699 Timestamp: Supported 
00:11:39.699 Copy: Supported 00:11:39.699 Volatile Write Cache: Present 00:11:39.699 Atomic Write Unit (Normal): 1 00:11:39.699 Atomic Write Unit (PFail): 1 00:11:39.699 Atomic Compare & Write Unit: 1 00:11:39.699 Fused Compare & Write: Not Supported 00:11:39.699 Scatter-Gather List 00:11:39.699 SGL Command Set: Supported 00:11:39.699 SGL Keyed: Not Supported 00:11:39.699 SGL Bit Bucket Descriptor: Not Supported 00:11:39.699 SGL Metadata Pointer: Not Supported 00:11:39.699 Oversized SGL: Not Supported 00:11:39.699 SGL Metadata Address: Not Supported 00:11:39.699 SGL Offset: Not Supported 00:11:39.699 Transport SGL Data Block: Not Supported 00:11:39.699 Replay Protected Memory Block: Not Supported 00:11:39.699 00:11:39.699 Firmware Slot Information 00:11:39.699 ========================= 00:11:39.699 Active slot: 1 00:11:39.699 Slot 1 Firmware Revision: 1.0 00:11:39.699 00:11:39.699 00:11:39.699 Commands Supported and Effects 00:11:39.699 ============================== 00:11:39.699 Admin Commands 00:11:39.699 -------------- 00:11:39.699 Delete I/O Submission Queue (00h): Supported 00:11:39.699 Create I/O Submission Queue (01h): Supported 00:11:39.699 Get Log Page (02h): Supported 00:11:39.699 Delete I/O Completion Queue (04h): Supported 00:11:39.699 Create I/O Completion Queue (05h): Supported 00:11:39.699 Identify (06h): Supported 00:11:39.699 Abort (08h): Supported 00:11:39.699 Set Features (09h): Supported 00:11:39.699 Get Features (0Ah): Supported 00:11:39.699 Asynchronous Event Request (0Ch): Supported 00:11:39.699 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:39.699 Directive Send (19h): Supported 00:11:39.699 Directive Receive (1Ah): Supported 00:11:39.699 Virtualization Management (1Ch): Supported 00:11:39.699 Doorbell Buffer Config (7Ch): Supported 00:11:39.699 Format NVM (80h): Supported LBA-Change 00:11:39.699 I/O Commands 00:11:39.699 ------------ 00:11:39.699 Flush (00h): Supported LBA-Change 00:11:39.699 Write (01h): Supported LBA-Change 00:11:39.699 Read (02h): Supported 00:11:39.699 Compare (05h): Supported 00:11:39.699 Write Zeroes (08h): Supported LBA-Change 00:11:39.699 Dataset Management (09h): Supported LBA-Change 00:11:39.699 Unknown (0Ch): Supported 00:11:39.699 Unknown (12h): Supported 00:11:39.699 Copy (19h): Supported LBA-Change 00:11:39.699 Unknown (1Dh): Supported LBA-Change 00:11:39.699 00:11:39.699 Error Log 00:11:39.699 ========= 00:11:39.699 00:11:39.699 Arbitration 00:11:39.699 =========== 00:11:39.699 Arbitration Burst: no limit 00:11:39.699 00:11:39.699 Power Management 00:11:39.699 ================ 00:11:39.699 Number of Power States: 1 00:11:39.699 Current Power State: Power State #0 00:11:39.699 Power State #0: 00:11:39.699 Max Power: 25.00 W 00:11:39.699 Non-Operational State: Operational 00:11:39.699 Entry Latency: 16 microseconds 00:11:39.699 Exit Latency: 4 microseconds 00:11:39.699 Relative Read Throughput: 0 00:11:39.699 Relative Read Latency: 0 00:11:39.699 Relative Write Throughput: 0 00:11:39.699 Relative Write Latency: 0 00:11:39.699 Idle Power: Not Reported 00:11:39.699 Active Power: Not Reported 00:11:39.699 Non-Operational Permissive Mode: Not Supported 00:11:39.699 00:11:39.699 Health Information 00:11:39.699 ================== 00:11:39.699 Critical Warnings: 00:11:39.699 Available Spare Space: OK 00:11:39.699 Temperature: OK 00:11:39.699 Device Reliability: OK 00:11:39.699 Read Only: No 00:11:39.699 Volatile Memory Backup: OK 00:11:39.699 Current Temperature: 323 Kelvin (50 Celsius) 00:11:39.699 Temperature Threshold: 343 
Kelvin (70 Celsius) 00:11:39.699 Available Spare: 0% 00:11:39.699 Available Spare Threshold: 0% 00:11:39.699 Life Percentage Used: 0% 00:11:39.699 Data Units Read: 25835 00:11:39.699 Data Units Written: 13004 00:11:39.699 Host Read Commands: 511847 00:11:39.699 Host Write Commands: 256856 00:11:39.699 Controller Busy Time: 0 minutes 00:11:39.699 Power Cycles: 0 00:11:39.699 Power On Hours: 0 hours 00:11:39.699 Unsafe Shutdowns: 0 00:11:39.699 Unrecoverable Media Errors: 0 00:11:39.699 Lifetime Error Log Entries: 0 00:11:39.699 Warning Temperature Time: 0 minutes 00:11:39.699 Critical Temperature Time: 0 minutes 00:11:39.699 00:11:39.699 Number of Queues 00:11:39.699 ================ 00:11:39.699 Number of I/O Submission Queues: 64 00:11:39.699 Number of I/O Completion Queues: 64 00:11:39.699 00:11:39.699 ZNS Specific Controller Data 00:11:39.699 ============================ 00:11:39.699 Zone Append Size Limit: 0 00:11:39.699 00:11:39.699 00:11:39.699 Active Namespaces 00:11:39.699 ================= 00:11:39.699 Namespace ID:1 00:11:39.699 Error Recovery Timeout: Unlimited 00:11:39.699 Command Set Identifier: NVM (00h) 00:11:39.699 Deallocate: Supported 00:11:39.699 Deallocated/Unwritten Error: Supported 00:11:39.699 Deallocated Read Value: All 0x00 00:11:39.699 Deallocate in Write Zeroes: Not Supported 00:11:39.699 Deallocated Guard Field: 0xFFFF 00:11:39.699 Flush: Supported 00:11:39.699 Reservation: Not Supported 00:11:39.699 Namespace Sharing Capabilities: Private 00:11:39.699 Size (in LBAs): 1310720 (5GiB) 00:11:39.699 Capacity (in LBAs): 1310720 (5GiB) 00:11:39.699 Utilization (in LBAs): 1310720 (5GiB) 00:11:39.699 Thin Provisioning: Not Supported 00:11:39.699 Per-NS Atomic Units: No 00:11:39.699 Maximum Single Source Range Length: 128 00:11:39.699 Maximum Copy Length: 128 00:11:39.699 Maximum Source Range Count: 128 00:11:39.699 NGUID/EUI64 Never Reused: No 00:11:39.699 Namespace Write Protected: No 00:11:39.699 Number of LBA Formats: 8 00:11:39.699 Current LBA Format: LBA Format #04 00:11:39.699 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:39.699 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:39.699 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:39.699 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:39.699 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:39.699 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:39.699 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:39.699 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:39.699 00:11:39.699 00:11:39.699 real 0m1.019s 00:11:39.699 user 0m0.095s 00:11:39.699 sys 0m0.950s 00:11:39.699 06:03:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.699 06:03:47 -- common/autotest_common.sh@10 -- # set +x 00:11:39.699 ************************************ 00:11:39.699 END TEST nvme_identify 00:11:39.699 ************************************ 00:11:39.699 06:03:47 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:39.699 06:03:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:39.699 06:03:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:39.699 06:03:47 -- common/autotest_common.sh@10 -- # set +x 00:11:39.699 ************************************ 00:11:39.699 START TEST nvme_perf 00:11:39.699 ************************************ 00:11:39.699 06:03:47 -- common/autotest_common.sh@1104 -- # nvme_perf 00:11:39.699 06:03:47 -- nvme/nvme.sh@22 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 
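nvme_perf is launched above as a one-second, queue-depth-128, 12 KiB sequential-read run; -LL enables the detailed software latency tracking that produces the summary percentiles and per-bucket histogram printed below (-i 0 and -N are passed through by the harness and are not decoded here). Annotated form of the same command:

# -q 128   : 128 outstanding commands
# -o 12288 : 12 KiB transfer size
# -w read  : sequential reads
# -t 1     : run for one second
# -LL      : detailed latency tracking (summary + histogram)
build/bin/spdk_nvme_perf -q 128 -o 12288 -w read -t 1 -LL -i 0 -N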
00:11:40.266 EAL: TSC is not safe to use in SMP mode 00:11:40.266 EAL: TSC is not invariant 00:11:40.266 [2024-05-13 06:03:48.280203] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:41.203 Initializing NVMe Controllers 00:11:41.203 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:41.203 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:41.203 Initialization complete. Launching workers. 00:11:41.203 ======================================================== 00:11:41.203 Latency(us) 00:11:41.203 Device Information : IOPS MiB/s Average min max 00:11:41.203 PCIE (0000:00:06.0) NSID 1 from core 0: 103615.96 1214.25 1235.82 246.82 3817.50 00:11:41.203 ======================================================== 00:11:41.203 Total : 103615.96 1214.25 1235.82 246.82 3817.50 00:11:41.203 00:11:41.203 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:41.203 ================================================================================= 00:11:41.203 1.00000% : 1042.475us 00:11:41.203 10.00000% : 1113.878us 00:11:41.203 25.00000% : 1156.719us 00:11:41.203 50.00000% : 1220.981us 00:11:41.203 75.00000% : 1285.243us 00:11:41.203 90.00000% : 1335.225us 00:11:41.203 95.00000% : 1370.926us 00:11:41.203 98.00000% : 1528.012us 00:11:41.203 99.00000% : 2127.792us 00:11:41.203 99.50000% : 2756.133us 00:11:41.203 99.90000% : 3127.426us 00:11:41.203 99.99000% : 3284.511us 00:11:41.203 99.99900% : 3612.962us 00:11:41.203 99.99990% : 3827.169us 00:11:41.203 99.99999% : 3827.169us 00:11:41.203 00:11:41.203 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:41.203 ============================================================================== 00:11:41.203 Range in us Cumulative IO count 00:11:41.203 246.338 - 248.123: 0.0010% ( 1) 00:11:41.203 276.684 - 278.469: 0.0019% ( 1) 00:11:41.203 278.469 - 280.254: 0.0029% ( 1) 00:11:41.203 444.480 - 446.265: 0.0058% ( 3) 00:11:41.203 446.265 - 448.050: 0.0077% ( 2) 00:11:41.203 878.250 - 881.820: 0.0087% ( 1) 00:11:41.203 881.820 - 885.390: 0.0116% ( 3) 00:11:41.203 885.390 - 888.960: 0.0145% ( 3) 00:11:41.203 888.960 - 892.530: 0.0164% ( 2) 00:11:41.203 892.530 - 896.100: 0.0193% ( 3) 00:11:41.203 896.100 - 899.670: 0.0222% ( 3) 00:11:41.203 899.670 - 903.240: 0.0251% ( 3) 00:11:41.203 903.240 - 906.811: 0.0280% ( 3) 00:11:41.203 906.811 - 910.381: 0.0299% ( 2) 00:11:41.203 910.381 - 913.951: 0.0328% ( 3) 00:11:41.203 913.951 - 921.091: 0.0386% ( 6) 00:11:41.203 921.091 - 928.231: 0.0444% ( 6) 00:11:41.203 928.231 - 935.372: 0.0492% ( 5) 00:11:41.203 935.372 - 942.512: 0.0550% ( 6) 00:11:41.203 942.512 - 949.652: 0.0608% ( 6) 00:11:41.203 949.652 - 956.792: 0.0656% ( 5) 00:11:41.203 956.792 - 963.933: 0.0695% ( 4) 00:11:41.203 963.933 - 971.073: 0.0714% ( 2) 00:11:41.203 971.073 - 978.213: 0.0753% ( 4) 00:11:41.203 978.213 - 985.353: 0.0830% ( 8) 00:11:41.203 985.353 - 992.494: 0.0994% ( 17) 00:11:41.203 992.494 - 999.634: 0.1332% ( 35) 00:11:41.203 999.634 - 1006.774: 0.1843% ( 53) 00:11:41.203 1006.774 - 1013.914: 0.2644% ( 83) 00:11:41.203 1013.914 - 1021.054: 0.3706% ( 110) 00:11:41.203 1021.054 - 1028.195: 0.5298% ( 165) 00:11:41.203 1028.195 - 1035.335: 0.7643% ( 243) 00:11:41.203 1035.335 - 1042.475: 1.0548% ( 301) 00:11:41.203 1042.475 - 1049.615: 1.4669% ( 427) 00:11:41.203 1049.615 - 1056.756: 2.0073% ( 560) 00:11:41.203 1056.756 - 1063.896: 2.6790% ( 696) 00:11:41.203 1063.896 - 1071.036: 3.4810% ( 831) 00:11:41.203 1071.036 - 1078.176: 4.4142% ( 967) 
00:11:41.203 1078.176 - 1085.317: 5.5260% ( 1152) 00:11:41.203 1085.317 - 1092.457: 6.8037% ( 1324) 00:11:41.203 1092.457 - 1099.597: 8.2117% ( 1459) 00:11:41.203 1099.597 - 1106.737: 9.8601% ( 1708) 00:11:41.203 1106.737 - 1113.878: 11.7004% ( 1907) 00:11:41.203 1113.878 - 1121.018: 13.7087% ( 2081) 00:11:41.203 1121.018 - 1128.158: 15.8357% ( 2204) 00:11:41.203 1128.158 - 1135.298: 18.1181% ( 2365) 00:11:41.203 1135.298 - 1142.439: 20.5395% ( 2509) 00:11:41.203 1142.439 - 1149.579: 23.0718% ( 2624) 00:11:41.203 1149.579 - 1156.719: 25.6698% ( 2692) 00:11:41.203 1156.719 - 1163.859: 28.3227% ( 2749) 00:11:41.203 1163.859 - 1171.000: 31.0548% ( 2831) 00:11:41.203 1171.000 - 1178.140: 33.8429% ( 2889) 00:11:41.203 1178.140 - 1185.280: 36.6348% ( 2893) 00:11:41.203 1185.280 - 1192.420: 39.4451% ( 2912) 00:11:41.203 1192.420 - 1199.560: 42.3026% ( 2961) 00:11:41.203 1199.560 - 1206.701: 45.1969% ( 2999) 00:11:41.203 1206.701 - 1213.841: 48.0988% ( 3007) 00:11:41.203 1213.841 - 1220.981: 51.0017% ( 3008) 00:11:41.203 1220.981 - 1228.121: 53.8940% ( 2997) 00:11:41.203 1228.121 - 1235.262: 56.8105% ( 3022) 00:11:41.203 1235.262 - 1242.402: 59.6999% ( 2994) 00:11:41.203 1242.402 - 1249.542: 62.5497% ( 2953) 00:11:41.203 1249.542 - 1256.682: 65.3889% ( 2942) 00:11:41.203 1256.682 - 1263.823: 68.1548% ( 2866) 00:11:41.203 1263.823 - 1270.963: 70.8975% ( 2842) 00:11:41.203 1270.963 - 1278.103: 73.5341% ( 2732) 00:11:41.203 1278.103 - 1285.243: 76.0461% ( 2603) 00:11:41.203 1285.243 - 1292.384: 78.4462% ( 2487) 00:11:41.203 1292.384 - 1299.524: 80.7730% ( 2411) 00:11:41.203 1299.524 - 1306.664: 82.9743% ( 2281) 00:11:41.203 1306.664 - 1313.804: 84.9903% ( 2089) 00:11:41.203 1313.804 - 1320.945: 86.8761% ( 1954) 00:11:41.203 1320.945 - 1328.085: 88.5601% ( 1745) 00:11:41.203 1328.085 - 1335.225: 90.0685% ( 1563) 00:11:41.203 1335.225 - 1342.365: 91.3733% ( 1352) 00:11:41.203 1342.365 - 1349.506: 92.5304% ( 1199) 00:11:41.203 1349.506 - 1356.646: 93.5312% ( 1037) 00:11:41.203 1356.646 - 1363.786: 94.4094% ( 910) 00:11:41.203 1363.786 - 1370.926: 95.1303% ( 747) 00:11:41.203 1370.926 - 1378.067: 95.7228% ( 614) 00:11:41.203 1378.067 - 1385.207: 96.1976% ( 492) 00:11:41.203 1385.207 - 1392.347: 96.5692% ( 385) 00:11:41.203 1392.347 - 1399.487: 96.8722% ( 314) 00:11:41.203 1399.487 - 1406.627: 97.1048% ( 241) 00:11:41.204 1406.627 - 1413.768: 97.2843% ( 186) 00:11:41.204 1413.768 - 1420.908: 97.4320% ( 153) 00:11:41.204 1420.908 - 1428.048: 97.5526% ( 125) 00:11:41.204 1428.048 - 1435.188: 97.6443% ( 95) 00:11:41.204 1435.188 - 1442.329: 97.7070% ( 65) 00:11:41.204 1442.329 - 1449.469: 97.7553% ( 50) 00:11:41.204 1449.469 - 1456.609: 97.7910% ( 37) 00:11:41.204 1456.609 - 1463.749: 97.8190% ( 29) 00:11:41.204 1463.749 - 1470.890: 97.8440% ( 26) 00:11:41.204 1470.890 - 1478.030: 97.8653% ( 22) 00:11:41.204 1478.030 - 1485.170: 97.8875% ( 23) 00:11:41.204 1485.170 - 1492.310: 97.8981% ( 11) 00:11:41.204 1492.310 - 1499.451: 97.9270% ( 30) 00:11:41.204 1499.451 - 1506.591: 97.9434% ( 17) 00:11:41.204 1506.591 - 1513.731: 97.9647% ( 22) 00:11:41.204 1513.731 - 1520.871: 97.9869% ( 23) 00:11:41.204 1520.871 - 1528.012: 98.0091% ( 23) 00:11:41.204 1528.012 - 1535.152: 98.0313% ( 23) 00:11:41.204 1535.152 - 1542.292: 98.0544% ( 24) 00:11:41.204 1542.292 - 1549.432: 98.0872% ( 34) 00:11:41.204 1549.432 - 1556.573: 98.1355% ( 50) 00:11:41.204 1556.573 - 1563.713: 98.1799% ( 46) 00:11:41.204 1563.713 - 1570.853: 98.2252% ( 47) 00:11:41.204 1570.853 - 1577.993: 98.2590% ( 35) 00:11:41.204 1577.993 - 1585.134: 
98.2986% ( 41) 00:11:41.204 1585.134 - 1592.274: 98.3343% ( 37) 00:11:41.204 1592.274 - 1599.414: 98.3681% ( 35) 00:11:41.204 1599.414 - 1606.554: 98.4076% ( 41) 00:11:41.204 1606.554 - 1613.694: 98.4501% ( 44) 00:11:41.204 1613.694 - 1620.835: 98.4955% ( 47) 00:11:41.204 1620.835 - 1627.975: 98.5225% ( 28) 00:11:41.204 1627.975 - 1635.115: 98.5456% ( 24) 00:11:41.204 1635.115 - 1642.255: 98.5640% ( 19) 00:11:41.204 1642.255 - 1649.396: 98.5775% ( 14) 00:11:41.204 1649.396 - 1656.536: 98.5881% ( 11) 00:11:41.204 1656.536 - 1663.676: 98.5987% ( 11) 00:11:41.204 1663.676 - 1670.816: 98.6103% ( 12) 00:11:41.204 1670.816 - 1677.957: 98.6180% ( 8) 00:11:41.204 1677.957 - 1685.097: 98.6219% ( 4) 00:11:41.204 1685.097 - 1692.237: 98.6248% ( 3) 00:11:41.204 1692.237 - 1699.377: 98.6267% ( 2) 00:11:41.204 1713.658 - 1720.798: 98.6325% ( 6) 00:11:41.204 1720.798 - 1727.938: 98.6383% ( 6) 00:11:41.204 1727.938 - 1735.079: 98.6450% ( 7) 00:11:41.204 1735.079 - 1742.219: 98.6508% ( 6) 00:11:41.204 1742.219 - 1749.359: 98.6566% ( 6) 00:11:41.204 1749.359 - 1756.499: 98.6624% ( 6) 00:11:41.204 1756.499 - 1763.640: 98.6682% ( 6) 00:11:41.204 1763.640 - 1770.780: 98.6740% ( 6) 00:11:41.204 1770.780 - 1777.920: 98.6808% ( 7) 00:11:41.204 1777.920 - 1785.060: 98.6875% ( 7) 00:11:41.204 1785.060 - 1792.200: 98.6933% ( 6) 00:11:41.204 1792.200 - 1799.341: 98.7001% ( 7) 00:11:41.204 1799.341 - 1806.481: 98.7068% ( 7) 00:11:41.204 1806.481 - 1813.621: 98.7126% ( 6) 00:11:41.204 1813.621 - 1820.761: 98.7184% ( 6) 00:11:41.204 1820.761 - 1827.902: 98.7242% ( 6) 00:11:41.204 1827.902 - 1842.182: 98.7387% ( 15) 00:11:41.204 1842.182 - 1856.463: 98.7522% ( 14) 00:11:41.204 1856.463 - 1870.743: 98.7609% ( 9) 00:11:41.204 1870.743 - 1885.024: 98.7618% ( 1) 00:11:41.204 1913.585 - 1927.865: 98.7628% ( 1) 00:11:41.204 1927.865 - 1942.146: 98.7647% ( 2) 00:11:41.204 1956.426 - 1970.707: 98.7657% ( 1) 00:11:41.204 1984.987 - 1999.267: 98.7734% ( 8) 00:11:41.204 1999.267 - 2013.548: 98.7898% ( 17) 00:11:41.204 2013.548 - 2027.828: 98.8168% ( 28) 00:11:41.204 2027.828 - 2042.109: 98.8439% ( 28) 00:11:41.204 2042.109 - 2056.389: 98.8709% ( 28) 00:11:41.204 2056.389 - 2070.670: 98.8989% ( 29) 00:11:41.204 2070.670 - 2084.950: 98.9259% ( 28) 00:11:41.204 2084.950 - 2099.231: 98.9539% ( 29) 00:11:41.204 2099.231 - 2113.511: 98.9925% ( 40) 00:11:41.204 2113.511 - 2127.792: 99.0330% ( 42) 00:11:41.204 2127.792 - 2142.072: 99.0716% ( 40) 00:11:41.204 2142.072 - 2156.353: 99.0986% ( 28) 00:11:41.204 2156.353 - 2170.633: 99.1257% ( 28) 00:11:41.204 2170.633 - 2184.914: 99.1517% ( 27) 00:11:41.204 2184.914 - 2199.194: 99.1787% ( 28) 00:11:41.204 2199.194 - 2213.475: 99.2048% ( 27) 00:11:41.204 2213.475 - 2227.755: 99.2318% ( 28) 00:11:41.204 2227.755 - 2242.036: 99.2463% ( 15) 00:11:41.204 2242.036 - 2256.316: 99.2569% ( 11) 00:11:41.204 2256.316 - 2270.597: 99.2579% ( 1) 00:11:41.204 2370.560 - 2384.840: 99.2588% ( 1) 00:11:41.204 2384.840 - 2399.121: 99.2646% ( 6) 00:11:41.204 2399.121 - 2413.401: 99.2781% ( 14) 00:11:41.204 2413.401 - 2427.682: 99.2926% ( 15) 00:11:41.204 2427.682 - 2441.962: 99.3061% ( 14) 00:11:41.204 2441.962 - 2456.243: 99.3196% ( 14) 00:11:41.204 2456.243 - 2470.523: 99.3341% ( 15) 00:11:41.204 2470.523 - 2484.804: 99.3486% ( 15) 00:11:41.204 2484.804 - 2499.084: 99.3621% ( 14) 00:11:41.204 2499.084 - 2513.365: 99.3756% ( 14) 00:11:41.204 2513.365 - 2527.645: 99.3814% ( 6) 00:11:41.204 2599.048 - 2613.328: 99.3853% ( 4) 00:11:41.204 2684.731 - 2699.011: 99.3959% ( 11) 00:11:41.204 2699.011 - 2713.292: 99.4219% 
( 27) 00:11:41.204 2713.292 - 2727.572: 99.4538% ( 33) 00:11:41.204 2727.572 - 2741.853: 99.4982% ( 46) 00:11:41.204 2741.853 - 2756.133: 99.5426% ( 46) 00:11:41.204 2756.133 - 2770.414: 99.5850% ( 44) 00:11:41.204 2770.414 - 2784.694: 99.6275% ( 44) 00:11:41.204 2784.694 - 2798.974: 99.6748% ( 49) 00:11:41.204 2798.974 - 2813.255: 99.7095% ( 36) 00:11:41.204 2813.255 - 2827.535: 99.7520% ( 44) 00:11:41.204 2827.535 - 2841.816: 99.7732% ( 22) 00:11:41.204 2841.816 - 2856.096: 99.7915% ( 19) 00:11:41.204 2856.096 - 2870.377: 99.8080% ( 17) 00:11:41.204 2870.377 - 2884.657: 99.8244% ( 17) 00:11:41.204 2884.657 - 2898.938: 99.8388% ( 15) 00:11:41.204 2898.938 - 2913.218: 99.8533% ( 15) 00:11:41.204 2913.218 - 2927.499: 99.8668% ( 14) 00:11:41.204 2927.499 - 2941.779: 99.8774% ( 11) 00:11:41.204 3013.182 - 3027.462: 99.8784% ( 1) 00:11:41.204 3041.743 - 3056.023: 99.8794% ( 1) 00:11:41.204 3070.304 - 3084.584: 99.8813% ( 2) 00:11:41.204 3084.584 - 3098.865: 99.8909% ( 10) 00:11:41.204 3098.865 - 3113.145: 99.8977% ( 7) 00:11:41.204 3113.145 - 3127.426: 99.9064% ( 9) 00:11:41.204 3127.426 - 3141.706: 99.9151% ( 9) 00:11:41.204 3141.706 - 3155.987: 99.9218% ( 7) 00:11:41.204 3155.987 - 3170.267: 99.9296% ( 8) 00:11:41.204 3170.267 - 3184.547: 99.9382% ( 9) 00:11:41.204 3184.547 - 3198.828: 99.9460% ( 8) 00:11:41.204 3198.828 - 3213.108: 99.9537% ( 8) 00:11:41.204 3213.108 - 3227.389: 99.9624% ( 9) 00:11:41.204 3227.389 - 3241.669: 99.9701% ( 8) 00:11:41.204 3241.669 - 3255.950: 99.9788% ( 9) 00:11:41.204 3255.950 - 3270.230: 99.9875% ( 9) 00:11:41.204 3270.230 - 3284.511: 99.9942% ( 7) 00:11:41.204 3284.511 - 3298.791: 99.9981% ( 4) 00:11:41.204 3598.681 - 3612.962: 99.9990% ( 1) 00:11:41.204 3798.608 - 3827.169: 100.0000% ( 1) 00:11:41.204 00:11:41.204 06:03:49 -- nvme/nvme.sh@23 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:41.463 EAL: TSC is not safe to use in SMP mode 00:11:41.464 EAL: TSC is not invariant 00:11:41.464 [2024-05-13 06:03:49.769535] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:42.854 Initializing NVMe Controllers 00:11:42.854 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:11:42.854 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:11:42.854 Initialization complete. Launching workers. 
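The spdk_nvme_perf invocation above drives a 12 KiB, queue-depth-128 write workload for one second against the attached controller; the summary table and per-microsecond-bucket histogram that follow appear because -L is given twice. A minimal sketch for reproducing the run outside the harness, assuming the repo path recorded in this log and a controller already bound away from the kernel driver:

    cd /usr/home/vagrant/spdk_repo/spdk
    # -q 128   outstanding I/Os (queue depth)
    # -w write workload type
    # -o 12288 I/O size in bytes (12 KiB)
    # -t 1     run time in seconds
    # -LL      one -L prints the latency summary; the second adds the detailed histogram
    # -i 0     shared-memory group ID, shared with the other tests in this run
    ./build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0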
00:11:42.854 ======================================================== 00:11:42.854 Latency(us) 00:11:42.854 Device Information : IOPS MiB/s Average min max 00:11:42.854 PCIE (0000:00:06.0) NSID 1 from core 0: 92108.85 1079.40 1390.10 553.40 13506.94 00:11:42.854 ======================================================== 00:11:42.854 Total : 92108.85 1079.40 1390.10 553.40 13506.94 00:11:42.854 00:11:42.854 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:42.854 ================================================================================= 00:11:42.854 1.00000% : 856.829us 00:11:42.854 10.00000% : 1013.914us 00:11:42.854 25.00000% : 1142.439us 00:11:42.854 50.00000% : 1306.664us 00:11:42.854 75.00000% : 1542.292us 00:11:42.854 90.00000% : 1799.341us 00:11:42.854 95.00000% : 2013.548us 00:11:42.854 98.00000% : 2413.401us 00:11:42.854 99.00000% : 2813.255us 00:11:42.854 99.50000% : 3327.352us 00:11:42.854 99.90000% : 9425.118us 00:11:42.854 99.99000% : 13252.287us 00:11:42.854 99.99900% : 13537.897us 00:11:42.854 99.99990% : 13537.897us 00:11:42.854 99.99999% : 13537.897us 00:11:42.854 00:11:42.854 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:11:42.854 ============================================================================== 00:11:42.854 Range in us Cumulative IO count 00:11:42.854 553.369 - 556.939: 0.0011% ( 1) 00:11:42.854 585.500 - 589.070: 0.0022% ( 1) 00:11:42.854 606.920 - 610.491: 0.0033% ( 1) 00:11:42.855 642.622 - 646.192: 0.0098% ( 6) 00:11:42.855 667.613 - 671.183: 0.0109% ( 1) 00:11:42.855 678.323 - 681.893: 0.0130% ( 2) 00:11:42.855 681.893 - 685.463: 0.0141% ( 1) 00:11:42.855 685.463 - 689.033: 0.0163% ( 2) 00:11:42.855 689.033 - 692.603: 0.0185% ( 2) 00:11:42.855 692.603 - 696.173: 0.0250% ( 6) 00:11:42.855 696.173 - 699.744: 0.0261% ( 1) 00:11:42.855 699.744 - 703.314: 0.0282% ( 2) 00:11:42.855 703.314 - 706.884: 0.0315% ( 3) 00:11:42.855 706.884 - 710.454: 0.0380% ( 6) 00:11:42.855 710.454 - 714.024: 0.0445% ( 6) 00:11:42.855 714.024 - 717.594: 0.0488% ( 4) 00:11:42.855 717.594 - 721.164: 0.0521% ( 3) 00:11:42.855 721.164 - 724.734: 0.0543% ( 2) 00:11:42.855 724.734 - 728.305: 0.0640% ( 9) 00:11:42.855 728.305 - 731.875: 0.0727% ( 8) 00:11:42.855 731.875 - 735.445: 0.0868% ( 13) 00:11:42.855 735.445 - 739.015: 0.0912% ( 4) 00:11:42.855 742.585 - 746.155: 0.0966% ( 5) 00:11:42.855 746.155 - 749.725: 0.1010% ( 4) 00:11:42.855 749.725 - 753.295: 0.1075% ( 6) 00:11:42.855 753.295 - 756.866: 0.1129% ( 5) 00:11:42.855 756.866 - 760.436: 0.1292% ( 15) 00:11:42.855 760.436 - 764.006: 0.1531% ( 22) 00:11:42.855 764.006 - 767.576: 0.1628% ( 9) 00:11:42.855 767.576 - 771.146: 0.1683% ( 5) 00:11:42.855 771.146 - 774.716: 0.1748% ( 6) 00:11:42.855 774.716 - 778.286: 0.1813% ( 6) 00:11:42.855 778.286 - 781.856: 0.1878% ( 6) 00:11:42.855 781.856 - 785.427: 0.2030% ( 14) 00:11:42.855 785.427 - 788.997: 0.2280% ( 23) 00:11:42.855 788.997 - 792.567: 0.2529% ( 23) 00:11:42.855 792.567 - 796.137: 0.2736% ( 19) 00:11:42.855 796.137 - 799.707: 0.2953% ( 20) 00:11:42.855 799.707 - 803.277: 0.3191% ( 22) 00:11:42.855 803.277 - 806.847: 0.3626% ( 40) 00:11:42.855 806.847 - 810.417: 0.3821% ( 18) 00:11:42.855 810.417 - 813.987: 0.3984% ( 15) 00:11:42.855 813.987 - 817.558: 0.4190% ( 19) 00:11:42.855 817.558 - 821.128: 0.4483% ( 27) 00:11:42.855 821.128 - 824.698: 0.4831% ( 32) 00:11:42.855 824.698 - 828.268: 0.5417% ( 54) 00:11:42.855 828.268 - 831.838: 0.6068% ( 60) 00:11:42.855 831.838 - 835.408: 0.6622% ( 51) 00:11:42.855 835.408 - 838.978: 0.6958% ( 
31) 00:11:42.855 838.978 - 842.548: 0.7566% ( 56) 00:11:42.855 842.548 - 846.119: 0.8228% ( 61) 00:11:42.855 846.119 - 849.689: 0.9075% ( 78) 00:11:42.855 849.689 - 853.259: 0.9770% ( 64) 00:11:42.855 853.259 - 856.829: 1.0649% ( 81) 00:11:42.855 856.829 - 860.399: 1.1594% ( 87) 00:11:42.855 860.399 - 863.969: 1.2701% ( 102) 00:11:42.855 863.969 - 867.539: 1.3710% ( 93) 00:11:42.855 867.539 - 871.109: 1.4383% ( 62) 00:11:42.855 871.109 - 874.680: 1.5588% ( 111) 00:11:42.855 874.680 - 878.250: 1.6891% ( 120) 00:11:42.855 878.250 - 881.820: 1.7814% ( 85) 00:11:42.855 881.820 - 885.390: 1.8867% ( 97) 00:11:42.855 885.390 - 888.960: 2.0148% ( 118) 00:11:42.855 888.960 - 892.530: 2.1450% ( 120) 00:11:42.855 892.530 - 896.100: 2.2861% ( 130) 00:11:42.855 896.100 - 899.670: 2.4240% ( 127) 00:11:42.855 899.670 - 903.240: 2.5977% ( 160) 00:11:42.855 903.240 - 906.811: 2.7356% ( 127) 00:11:42.855 906.811 - 910.381: 2.9082% ( 159) 00:11:42.855 910.381 - 913.951: 3.0710% ( 150) 00:11:42.855 913.951 - 921.091: 3.4205% ( 322) 00:11:42.855 921.091 - 928.231: 3.8461% ( 392) 00:11:42.855 928.231 - 935.372: 4.2846% ( 404) 00:11:42.855 935.372 - 942.512: 4.6765% ( 361) 00:11:42.855 942.512 - 949.652: 5.0923% ( 383) 00:11:42.855 949.652 - 956.792: 5.5536% ( 425) 00:11:42.855 956.792 - 963.933: 6.0443% ( 452) 00:11:42.855 963.933 - 971.073: 6.6576% ( 565) 00:11:42.855 971.073 - 978.213: 7.2438% ( 540) 00:11:42.855 978.213 - 985.353: 7.7985% ( 511) 00:11:42.855 985.353 - 992.494: 8.4846% ( 632) 00:11:42.855 992.494 - 999.634: 9.0176% ( 491) 00:11:42.855 999.634 - 1006.774: 9.6277% ( 562) 00:11:42.855 1006.774 - 1013.914: 10.2486% ( 572) 00:11:42.855 1013.914 - 1021.054: 10.8912% ( 592) 00:11:42.855 1021.054 - 1028.195: 11.5089% ( 569) 00:11:42.855 1028.195 - 1035.335: 11.9844% ( 438) 00:11:42.855 1035.335 - 1042.475: 12.6107% ( 577) 00:11:42.855 1042.475 - 1049.615: 13.3597% ( 690) 00:11:42.855 1049.615 - 1056.756: 14.1782% ( 754) 00:11:42.855 1056.756 - 1063.896: 15.0098% ( 766) 00:11:42.855 1063.896 - 1071.036: 15.9520% ( 868) 00:11:42.855 1071.036 - 1078.176: 17.0137% ( 978) 00:11:42.855 1078.176 - 1085.317: 17.9733% ( 884) 00:11:42.855 1085.317 - 1092.457: 18.8776% ( 833) 00:11:42.855 1092.457 - 1099.597: 19.8849% ( 928) 00:11:42.855 1099.597 - 1106.737: 20.9140% ( 948) 00:11:42.855 1106.737 - 1113.878: 21.8628% ( 874) 00:11:42.855 1113.878 - 1121.018: 22.8789% ( 936) 00:11:42.855 1121.018 - 1128.158: 24.0599% ( 1088) 00:11:42.855 1128.158 - 1135.298: 24.8838% ( 759) 00:11:42.855 1135.298 - 1142.439: 25.9368% ( 970) 00:11:42.855 1142.439 - 1149.579: 26.8650% ( 855) 00:11:42.855 1149.579 - 1156.719: 27.9668% ( 1015) 00:11:42.855 1156.719 - 1163.859: 28.9796% ( 933) 00:11:42.855 1163.859 - 1171.000: 29.8545% ( 806) 00:11:42.855 1171.000 - 1178.140: 30.7653% ( 839) 00:11:42.855 1178.140 - 1185.280: 31.9496% ( 1091) 00:11:42.855 1185.280 - 1192.420: 33.0667% ( 1029) 00:11:42.855 1192.420 - 1199.560: 34.2629% ( 1102) 00:11:42.855 1199.560 - 1206.701: 35.3854% ( 1034) 00:11:42.855 1206.701 - 1213.841: 36.4904% ( 1018) 00:11:42.855 1213.841 - 1220.981: 37.5141% ( 943) 00:11:42.855 1220.981 - 1228.121: 38.6463% ( 1043) 00:11:42.855 1228.121 - 1235.262: 39.9294% ( 1182) 00:11:42.855 1235.262 - 1242.402: 41.1344% ( 1110) 00:11:42.855 1242.402 - 1249.542: 42.3697% ( 1138) 00:11:42.855 1249.542 - 1256.682: 43.5052% ( 1046) 00:11:42.855 1256.682 - 1263.823: 44.4551% ( 875) 00:11:42.855 1263.823 - 1270.963: 45.4559% ( 922) 00:11:42.855 1270.963 - 1278.103: 46.3048% ( 782) 00:11:42.855 1278.103 - 1285.243: 47.1884% ( 814) 
00:11:42.855 1285.243 - 1292.384: 48.0406% ( 785) 00:11:42.855 1292.384 - 1299.524: 48.9872% ( 872) 00:11:42.855 1299.524 - 1306.664: 50.0195% ( 951) 00:11:42.855 1306.664 - 1313.804: 50.9021% ( 813) 00:11:42.855 1313.804 - 1320.945: 51.8411% ( 865) 00:11:42.855 1320.945 - 1328.085: 52.7019% ( 793) 00:11:42.855 1328.085 - 1335.225: 53.4933% ( 729) 00:11:42.855 1335.225 - 1342.365: 54.4985% ( 926) 00:11:42.855 1342.365 - 1349.506: 55.4928% ( 916) 00:11:42.855 1349.506 - 1356.646: 56.5295% ( 955) 00:11:42.855 1356.646 - 1363.786: 57.5087% ( 902) 00:11:42.855 1363.786 - 1370.926: 58.3945% ( 816) 00:11:42.855 1370.926 - 1378.067: 59.3595% ( 889) 00:11:42.855 1378.067 - 1385.207: 60.0923% ( 675) 00:11:42.855 1385.207 - 1392.347: 60.9770% ( 815) 00:11:42.855 1392.347 - 1399.487: 61.8085% ( 766) 00:11:42.855 1399.487 - 1406.627: 62.6813% ( 804) 00:11:42.855 1406.627 - 1413.768: 63.5161% ( 769) 00:11:42.855 1413.768 - 1420.908: 64.3139% ( 735) 00:11:42.855 1420.908 - 1428.048: 65.1596% ( 779) 00:11:42.855 1428.048 - 1435.188: 65.7251% ( 521) 00:11:42.855 1435.188 - 1442.329: 66.2755% ( 507) 00:11:42.855 1442.329 - 1449.469: 66.8400% ( 520) 00:11:42.855 1449.469 - 1456.609: 67.3871% ( 504) 00:11:42.855 1456.609 - 1463.749: 68.0667% ( 626) 00:11:42.855 1463.749 - 1470.890: 68.8124% ( 687) 00:11:42.855 1470.890 - 1478.030: 69.5430% ( 673) 00:11:42.855 1478.030 - 1485.170: 70.1987% ( 604) 00:11:42.855 1485.170 - 1492.310: 70.8446% ( 595) 00:11:42.855 1492.310 - 1499.451: 71.6359% ( 729) 00:11:42.855 1499.451 - 1506.591: 72.3263% ( 636) 00:11:42.855 1506.591 - 1513.731: 72.8550% ( 487) 00:11:42.855 1513.731 - 1520.871: 73.4705% ( 567) 00:11:42.855 1520.871 - 1528.012: 74.0719% ( 554) 00:11:42.855 1528.012 - 1535.152: 74.7221% ( 599) 00:11:42.855 1535.152 - 1542.292: 75.3908% ( 616) 00:11:42.855 1542.292 - 1549.432: 75.9455% ( 511) 00:11:42.855 1549.432 - 1556.573: 76.5219% ( 531) 00:11:42.855 1556.573 - 1563.713: 77.0495% ( 486) 00:11:42.855 1563.713 - 1570.853: 77.5195% ( 433) 00:11:42.855 1570.853 - 1577.993: 78.0645% ( 502) 00:11:42.855 1577.993 - 1585.134: 78.6344% ( 525) 00:11:42.855 1585.134 - 1592.274: 79.1609% ( 485) 00:11:42.855 1592.274 - 1599.414: 79.7243% ( 519) 00:11:42.855 1599.414 - 1606.554: 80.2225% ( 459) 00:11:42.855 1606.554 - 1613.694: 80.7251% ( 463) 00:11:42.855 1613.694 - 1620.835: 81.1572% ( 398) 00:11:42.855 1620.835 - 1627.975: 81.7238% ( 522) 00:11:42.855 1627.975 - 1635.115: 82.1418% ( 385) 00:11:42.855 1635.115 - 1642.255: 82.4859% ( 317) 00:11:42.855 1642.255 - 1649.396: 82.8191% ( 307) 00:11:42.855 1649.396 - 1656.536: 83.2110% ( 361) 00:11:42.855 1656.536 - 1663.676: 83.6333% ( 389) 00:11:42.855 1663.676 - 1670.816: 84.0990% ( 429) 00:11:42.855 1670.816 - 1677.957: 84.4865% ( 357) 00:11:42.855 1677.957 - 1685.097: 84.7970% ( 286) 00:11:42.855 1685.097 - 1692.237: 85.1541% ( 329) 00:11:42.855 1692.237 - 1699.377: 85.5699% ( 383) 00:11:42.855 1699.377 - 1706.518: 86.0030% ( 399) 00:11:42.855 1706.518 - 1713.658: 86.3678% ( 336) 00:11:42.855 1713.658 - 1720.798: 86.6674% ( 276) 00:11:42.855 1720.798 - 1727.938: 86.9399% ( 251) 00:11:42.855 1727.938 - 1735.079: 87.3220% ( 352) 00:11:42.855 1735.079 - 1742.219: 87.7149% ( 362) 00:11:42.855 1742.219 - 1749.359: 88.0775% ( 334) 00:11:42.855 1749.359 - 1756.499: 88.4585% ( 351) 00:11:42.855 1756.499 - 1763.640: 88.7104% ( 232) 00:11:42.856 1763.640 - 1770.780: 88.9677% ( 237) 00:11:42.856 1770.780 - 1777.920: 89.3487% ( 351) 00:11:42.856 1777.920 - 1785.060: 89.6157% ( 246) 00:11:42.856 1785.060 - 1792.200: 89.9023% ( 264) 
00:11:42.856 1792.200 - 1799.341: 90.1368% ( 216) 00:11:42.856 1799.341 - 1806.481: 90.3680% ( 213) 00:11:42.856 1806.481 - 1813.621: 90.5710% ( 187) 00:11:42.856 1813.621 - 1820.761: 90.8706% ( 276) 00:11:42.856 1820.761 - 1827.902: 91.2039% ( 307) 00:11:42.856 1827.902 - 1842.182: 91.7401% ( 494) 00:11:42.856 1842.182 - 1856.463: 92.1624% ( 389) 00:11:42.856 1856.463 - 1870.743: 92.5977% ( 401) 00:11:42.856 1870.743 - 1885.024: 93.0558% ( 422) 00:11:42.856 1885.024 - 1899.304: 93.4412% ( 355) 00:11:42.856 1899.304 - 1913.585: 93.7060% ( 244) 00:11:42.856 1913.585 - 1927.865: 93.9535% ( 228) 00:11:42.856 1927.865 - 1942.146: 94.2162% ( 242) 00:11:42.856 1942.146 - 1956.426: 94.4344% ( 201) 00:11:42.856 1956.426 - 1970.707: 94.6852% ( 231) 00:11:42.856 1970.707 - 1984.987: 94.8415% ( 144) 00:11:42.856 1984.987 - 1999.267: 94.9859% ( 133) 00:11:42.856 1999.267 - 2013.548: 95.1639% ( 164) 00:11:42.856 2013.548 - 2027.828: 95.2985% ( 124) 00:11:42.856 2027.828 - 2042.109: 95.4635% ( 152) 00:11:42.856 2042.109 - 2056.389: 95.6090% ( 134) 00:11:42.856 2056.389 - 2070.670: 95.7772% ( 155) 00:11:42.856 2070.670 - 2084.950: 95.9108% ( 123) 00:11:42.856 2084.950 - 2099.231: 96.0845% ( 160) 00:11:42.856 2099.231 - 2113.511: 96.2169% ( 122) 00:11:42.856 2113.511 - 2127.792: 96.3026% ( 79) 00:11:42.856 2127.792 - 2142.072: 96.4123% ( 101) 00:11:42.856 2142.072 - 2156.353: 96.4818% ( 64) 00:11:42.856 2156.353 - 2170.633: 96.6109% ( 119) 00:11:42.856 2170.633 - 2184.914: 96.7477% ( 126) 00:11:42.856 2184.914 - 2199.194: 96.8671% ( 110) 00:11:42.856 2199.194 - 2213.475: 97.0017% ( 124) 00:11:42.856 2213.475 - 2227.755: 97.1472% ( 134) 00:11:42.856 2227.755 - 2242.036: 97.2807% ( 123) 00:11:42.856 2242.036 - 2256.316: 97.4294% ( 137) 00:11:42.856 2256.316 - 2270.597: 97.5195% ( 83) 00:11:42.856 2270.597 - 2284.877: 97.5955% ( 70) 00:11:42.856 2284.877 - 2299.158: 97.6748% ( 73) 00:11:42.856 2299.158 - 2313.438: 97.7193% ( 41) 00:11:42.856 2313.438 - 2327.719: 97.7464% ( 25) 00:11:42.856 2327.719 - 2341.999: 97.7909% ( 41) 00:11:42.856 2341.999 - 2356.280: 97.8354% ( 41) 00:11:42.856 2356.280 - 2370.560: 97.8799% ( 41) 00:11:42.856 2370.560 - 2384.840: 97.9244% ( 41) 00:11:42.856 2384.840 - 2399.121: 97.9722% ( 44) 00:11:42.856 2399.121 - 2413.401: 98.0015% ( 27) 00:11:42.856 2413.401 - 2427.682: 98.0417% ( 37) 00:11:42.856 2427.682 - 2441.962: 98.1036% ( 57) 00:11:42.856 2441.962 - 2456.243: 98.1882% ( 78) 00:11:42.856 2456.243 - 2470.523: 98.2273% ( 36) 00:11:42.856 2470.523 - 2484.804: 98.2740% ( 43) 00:11:42.856 2484.804 - 2499.084: 98.3554% ( 75) 00:11:42.856 2499.084 - 2513.365: 98.4129% ( 53) 00:11:42.856 2513.365 - 2527.645: 98.4477% ( 32) 00:11:42.856 2527.645 - 2541.926: 98.4944% ( 43) 00:11:42.856 2541.926 - 2556.206: 98.5128% ( 17) 00:11:42.856 2556.206 - 2570.487: 98.5769% ( 59) 00:11:42.856 2570.487 - 2584.767: 98.6149% ( 35) 00:11:42.856 2584.767 - 2599.048: 98.6539% ( 36) 00:11:42.856 2599.048 - 2613.328: 98.6822% ( 26) 00:11:42.856 2613.328 - 2627.609: 98.7115% ( 27) 00:11:42.856 2627.609 - 2641.889: 98.7375% ( 24) 00:11:42.856 2641.889 - 2656.170: 98.7484% ( 10) 00:11:42.856 2656.170 - 2670.450: 98.7647% ( 15) 00:11:42.856 2670.450 - 2684.731: 98.7994% ( 32) 00:11:42.856 2684.731 - 2699.011: 98.8244% ( 23) 00:11:42.856 2699.011 - 2713.292: 98.8548% ( 28) 00:11:42.856 2713.292 - 2727.572: 98.8732% ( 17) 00:11:42.856 2727.572 - 2741.853: 98.8993% ( 24) 00:11:42.856 2741.853 - 2756.133: 98.9318% ( 30) 00:11:42.856 2756.133 - 2770.414: 98.9503% ( 17) 00:11:42.856 2770.414 - 2784.694: 98.9687% 
( 17) 00:11:42.856 2784.694 - 2798.974: 98.9948% ( 24) 00:11:42.856 2798.974 - 2813.255: 99.0339% ( 36) 00:11:42.856 2813.255 - 2827.535: 99.0881% ( 50) 00:11:42.856 2827.535 - 2841.816: 99.1251% ( 34) 00:11:42.856 2841.816 - 2856.096: 99.1522% ( 25) 00:11:42.856 2856.096 - 2870.377: 99.1913% ( 36) 00:11:42.856 2870.377 - 2884.657: 99.2304% ( 36) 00:11:42.856 2884.657 - 2898.938: 99.2629% ( 30) 00:11:42.856 2898.938 - 2913.218: 99.3009% ( 35) 00:11:42.856 2913.218 - 2927.499: 99.3161% ( 14) 00:11:42.856 2927.499 - 2941.779: 99.3367% ( 19) 00:11:42.856 2941.779 - 2956.060: 99.3465% ( 9) 00:11:42.856 2956.060 - 2970.340: 99.3628% ( 15) 00:11:42.856 2984.621 - 2998.901: 99.3650% ( 2) 00:11:42.856 2998.901 - 3013.182: 99.3660% ( 1) 00:11:42.856 3013.182 - 3027.462: 99.3693% ( 3) 00:11:42.856 3027.462 - 3041.743: 99.3747% ( 5) 00:11:42.856 3070.304 - 3084.584: 99.3899% ( 14) 00:11:42.856 3084.584 - 3098.865: 99.3986% ( 8) 00:11:42.856 3098.865 - 3113.145: 99.4040% ( 5) 00:11:42.856 3113.145 - 3127.426: 99.4127% ( 8) 00:11:42.856 3127.426 - 3141.706: 99.4182% ( 5) 00:11:42.856 3170.267 - 3184.547: 99.4192% ( 1) 00:11:42.856 3184.547 - 3198.828: 99.4225% ( 3) 00:11:42.856 3198.828 - 3213.108: 99.4301% ( 7) 00:11:42.856 3213.108 - 3227.389: 99.4344% ( 4) 00:11:42.856 3227.389 - 3241.669: 99.4442% ( 9) 00:11:42.856 3241.669 - 3255.950: 99.4464% ( 2) 00:11:42.856 3255.950 - 3270.230: 99.4518% ( 5) 00:11:42.856 3270.230 - 3284.511: 99.4529% ( 1) 00:11:42.856 3284.511 - 3298.791: 99.4692% ( 15) 00:11:42.856 3298.791 - 3313.072: 99.4844% ( 14) 00:11:42.856 3313.072 - 3327.352: 99.5039% ( 18) 00:11:42.856 3327.352 - 3341.633: 99.5202% ( 15) 00:11:42.856 3341.633 - 3355.913: 99.5245% ( 4) 00:11:42.856 3355.913 - 3370.194: 99.5354% ( 10) 00:11:42.856 3370.194 - 3384.474: 99.5386% ( 3) 00:11:42.856 3384.474 - 3398.755: 99.5452% ( 6) 00:11:42.856 3398.755 - 3413.035: 99.5517% ( 6) 00:11:42.856 3413.035 - 3427.316: 99.5636% ( 11) 00:11:42.856 3427.316 - 3441.596: 99.5680% ( 4) 00:11:42.856 3441.596 - 3455.877: 99.5810% ( 12) 00:11:42.856 3455.877 - 3470.157: 99.5929% ( 11) 00:11:42.856 3470.157 - 3484.438: 99.6016% ( 8) 00:11:42.856 3484.438 - 3498.718: 99.6070% ( 5) 00:11:42.856 3498.718 - 3512.999: 99.6157% ( 8) 00:11:42.856 3512.999 - 3527.279: 99.6255% ( 9) 00:11:42.856 3527.279 - 3541.560: 99.6331% ( 7) 00:11:42.856 3541.560 - 3555.840: 99.6450% ( 11) 00:11:42.856 3555.840 - 3570.121: 99.6515% ( 6) 00:11:42.856 3570.121 - 3584.401: 99.6581% ( 6) 00:11:42.856 3584.401 - 3598.681: 99.6613% ( 3) 00:11:42.856 3598.681 - 3612.962: 99.6657% ( 4) 00:11:42.856 3612.962 - 3627.242: 99.6689% ( 3) 00:11:42.856 3627.242 - 3641.523: 99.6722% ( 3) 00:11:42.856 3641.523 - 3655.803: 99.6830% ( 10) 00:11:42.856 3655.803 - 3684.364: 99.6906% ( 7) 00:11:42.856 3712.925 - 3741.486: 99.6928% ( 2) 00:11:42.856 3741.486 - 3770.047: 99.6939% ( 1) 00:11:42.856 3770.047 - 3798.608: 99.6971% ( 3) 00:11:42.856 3798.608 - 3827.169: 99.7058% ( 8) 00:11:42.856 3827.169 - 3855.730: 99.7188% ( 12) 00:11:42.856 3855.730 - 3884.291: 99.7199% ( 1) 00:11:42.856 3884.291 - 3912.852: 99.7221% ( 2) 00:11:42.856 3912.852 - 3941.413: 99.7319% ( 9) 00:11:42.856 3941.413 - 3969.974: 99.7330% ( 1) 00:11:42.856 3969.974 - 3998.535: 99.7373% ( 4) 00:11:42.856 3998.535 - 4027.096: 99.7406% ( 3) 00:11:42.856 4027.096 - 4055.657: 99.7438% ( 3) 00:11:42.856 4055.657 - 4084.218: 99.7460% ( 2) 00:11:42.856 4084.218 - 4112.779: 99.7579% ( 11) 00:11:42.856 4112.779 - 4141.340: 99.7590% ( 1) 00:11:42.856 4198.462 - 4227.023: 99.7677% ( 8) 00:11:42.856 
4369.828 - 4398.388: 99.7807% ( 12) 00:11:42.856 4398.388 - 4426.949: 99.7861% ( 5) 00:11:42.856 4426.949 - 4455.510: 99.7927% ( 6) 00:11:42.856 4455.510 - 4484.071: 99.7937% ( 1) 00:11:42.856 4484.071 - 4512.632: 99.7948% ( 1) 00:11:42.856 4512.632 - 4541.193: 99.8003% ( 5) 00:11:42.856 4598.315 - 4626.876: 99.8013% ( 1) 00:11:42.856 4626.876 - 4655.437: 99.8046% ( 3) 00:11:42.856 4712.559 - 4741.120: 99.8068% ( 2) 00:11:42.856 4769.681 - 4798.242: 99.8122% ( 5) 00:11:42.856 4855.364 - 4883.925: 99.8133% ( 1) 00:11:42.856 4883.925 - 4912.486: 99.8144% ( 1) 00:11:42.856 4912.486 - 4941.047: 99.8176% ( 3) 00:11:42.856 4941.047 - 4969.608: 99.8241% ( 6) 00:11:42.856 4969.608 - 4998.169: 99.8285% ( 4) 00:11:42.856 5169.534 - 5198.095: 99.8328% ( 4) 00:11:42.856 5255.217 - 5283.778: 99.8339% ( 1) 00:11:42.856 5283.778 - 5312.339: 99.8372% ( 3) 00:11:42.856 5626.510 - 5655.071: 99.8383% ( 1) 00:11:42.856 5740.754 - 5769.315: 99.8393% ( 1) 00:11:42.856 6254.851 - 6283.412: 99.8404% ( 1) 00:11:42.856 6311.973 - 6340.534: 99.8415% ( 1) 00:11:42.856 6569.022 - 6597.583: 99.8448% ( 3) 00:11:42.856 6797.509 - 6826.070: 99.8459% ( 1) 00:11:42.856 6968.875 - 6997.436: 99.8469% ( 1) 00:11:42.856 7368.729 - 7425.851: 99.8480% ( 1) 00:11:42.856 7768.582 - 7825.704: 99.8491% ( 1) 00:11:42.856 7939.948 - 7997.070: 99.8502% ( 1) 00:11:42.856 8282.680 - 8339.802: 99.8513% ( 1) 00:11:42.856 8339.802 - 8396.923: 99.8524% ( 1) 00:11:42.856 8739.655 - 8796.777: 99.8535% ( 1) 00:11:42.856 8796.777 - 8853.899: 99.8545% ( 1) 00:11:42.856 8911.021 - 8968.143: 99.8567% ( 2) 00:11:42.856 8968.143 - 9025.265: 99.8611% ( 4) 00:11:42.856 9025.265 - 9082.387: 99.8708% ( 9) 00:11:42.856 9082.387 - 9139.509: 99.8752% ( 4) 00:11:42.856 9139.509 - 9196.630: 99.8773% ( 2) 00:11:42.856 9196.630 - 9253.752: 99.8784% ( 1) 00:11:42.856 9253.752 - 9310.874: 99.8925% ( 13) 00:11:42.856 9310.874 - 9367.996: 99.8980% ( 5) 00:11:42.856 9367.996 - 9425.118: 99.9001% ( 2) 00:11:42.856 9482.240 - 9539.362: 99.9056% ( 5) 00:11:42.857 9539.362 - 9596.484: 99.9066% ( 1) 00:11:42.857 9596.484 - 9653.606: 99.9088% ( 2) 00:11:42.857 9653.606 - 9710.728: 99.9121% ( 3) 00:11:42.857 9710.728 - 9767.850: 99.9186% ( 6) 00:11:42.857 9767.850 - 9824.972: 99.9208% ( 2) 00:11:42.857 9824.972 - 9882.094: 99.9240% ( 3) 00:11:42.857 9882.094 - 9939.215: 99.9251% ( 1) 00:11:42.857 9939.215 - 9996.337: 99.9273% ( 2) 00:11:42.857 9996.337 - 10053.459: 99.9284% ( 1) 00:11:42.857 10053.459 - 10110.581: 99.9305% ( 2) 00:11:42.857 10224.825 - 10281.947: 99.9316% ( 1) 00:11:42.857 10281.947 - 10339.069: 99.9338% ( 2) 00:11:42.857 10453.313 - 10510.435: 99.9349% ( 1) 00:11:42.857 10510.435 - 10567.557: 99.9360% ( 1) 00:11:42.857 10624.679 - 10681.801: 99.9370% ( 1) 00:11:42.857 10738.922 - 10796.044: 99.9392% ( 2) 00:11:42.857 10796.044 - 10853.166: 99.9403% ( 1) 00:11:42.857 10853.166 - 10910.288: 99.9425% ( 2) 00:11:42.857 10910.288 - 10967.410: 99.9446% ( 2) 00:11:42.857 10967.410 - 11024.532: 99.9457% ( 1) 00:11:42.857 11024.532 - 11081.654: 99.9468% ( 1) 00:11:42.857 11195.898 - 11253.020: 99.9479% ( 1) 00:11:42.857 11310.142 - 11367.264: 99.9490% ( 1) 00:11:42.857 11367.264 - 11424.386: 99.9501% ( 1) 00:11:42.857 11424.386 - 11481.508: 99.9512% ( 1) 00:11:43.793 11481.508 - 11538.629: 99.9555% ( 4) 00:11:43.793 11538.629 - 11595.751: 99.9566% ( 1) 00:11:43.793 11595.751 - 11652.873: 99.9577% ( 1) 00:11:43.793 11709.995 - 11767.117: 99.9587% ( 1) 00:11:43.793 11767.117 - 11824.239: 99.9598% ( 1) 00:11:43.793 11881.361 - 11938.483: 99.9609% ( 1) 00:11:43.793 
11938.483 - 11995.605: 99.9631% ( 2) 00:11:43.793 11995.605 - 12052.727: 99.9642% ( 1) 00:11:43.793 12052.727 - 12109.849: 99.9653% ( 1) 00:11:43.793 12166.971 - 12224.093: 99.9696% ( 4) 00:11:43.793 12338.336 - 12395.458: 99.9718% ( 2) 00:11:43.793 12452.580 - 12509.702: 99.9739% ( 2) 00:11:43.793 12509.702 - 12566.824: 99.9750% ( 1) 00:11:43.793 12566.824 - 12623.946: 99.9783% ( 3) 00:11:43.793 12623.946 - 12681.068: 99.9794% ( 1) 00:11:43.793 12681.068 - 12738.190: 99.9826% ( 3) 00:11:43.793 12795.312 - 12852.434: 99.9859% ( 3) 00:11:43.793 12909.556 - 12966.678: 99.9870% ( 1) 00:11:43.793 12966.678 - 13023.800: 99.9881% ( 1) 00:11:43.793 13080.922 - 13138.043: 99.9891% ( 1) 00:11:43.793 13195.165 - 13252.287: 99.9913% ( 2) 00:11:43.793 13252.287 - 13309.409: 99.9935% ( 2) 00:11:43.793 13309.409 - 13366.531: 99.9957% ( 2) 00:11:43.793 13366.531 - 13423.653: 99.9967% ( 1) 00:11:43.793 13423.653 - 13480.775: 99.9989% ( 2) 00:11:43.793 13480.775 - 13537.897: 100.0000% ( 1) 00:11:43.793 00:11:43.793 06:03:51 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:43.793 00:11:43.793 real 0m3.976s 00:11:43.793 user 0m3.047s 00:11:43.793 sys 0m0.926s 00:11:43.793 06:03:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.793 06:03:51 -- common/autotest_common.sh@10 -- # set +x 00:11:43.793 ************************************ 00:11:43.793 END TEST nvme_perf 00:11:43.793 ************************************ 00:11:43.793 06:03:51 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:43.793 06:03:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:43.793 06:03:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:43.793 06:03:51 -- common/autotest_common.sh@10 -- # set +x 00:11:43.793 ************************************ 00:11:43.793 START TEST nvme_hello_world 00:11:43.793 ************************************ 00:11:43.793 06:03:51 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:44.363 EAL: TSC is not safe to use in SMP mode 00:11:44.363 EAL: TSC is not invariant 00:11:44.363 [2024-05-13 06:03:52.619438] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:44.363 Initializing NVMe Controllers 00:11:44.363 Attaching to 0000:00:06.0 00:11:44.363 Attached to 0000:00:06.0 00:11:44.363 Namespace ID: 1 size: 5GB 00:11:44.363 Initialization complete. 00:11:44.363 INFO: using host memory buffer for IO 00:11:44.363 Hello world! 
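The hello_world example above is SPDK's smallest end-to-end I/O: it claims the first probed controller, writes the "Hello world!" string to the start of namespace 1, reads it back, and prints it. A standalone re-run, assuming the same repo path as this log, is simply:

    cd /usr/home/vagrant/spdk_repo/spdk
    # -i 0 joins the same shared-memory group as the rest of this test run
    ./build/examples/hello_world -i 0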
00:11:44.363 00:11:44.363 real 0m0.787s 00:11:44.363 user 0m0.017s 00:11:44.363 sys 0m0.769s 00:11:44.363 06:03:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.363 06:03:52 -- common/autotest_common.sh@10 -- # set +x 00:11:44.363 ************************************ 00:11:44.363 END TEST nvme_hello_world 00:11:44.363 ************************************ 00:11:44.622 06:03:52 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:44.622 06:03:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:44.622 06:03:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:44.622 06:03:52 -- common/autotest_common.sh@10 -- # set +x 00:11:44.622 ************************************ 00:11:44.622 START TEST nvme_sgl 00:11:44.622 ************************************ 00:11:44.622 06:03:52 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:11:44.878 EAL: TSC is not safe to use in SMP mode 00:11:44.879 EAL: TSC is not invariant 00:11:44.879 [2024-05-13 06:03:53.161659] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:44.879 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:11:44.879 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:11:44.879 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:11:44.879 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:11:44.879 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:11:44.879 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:11:45.135 NVMe Readv/Writev Request test 00:11:45.135 Attaching to 0000:00:06.0 00:11:45.135 Attached to 0000:00:06.0 00:11:45.135 0000:00:06.0: build_io_request_2 test passed 00:11:45.135 0000:00:06.0: build_io_request_4 test passed 00:11:45.135 0000:00:06.0: build_io_request_5 test passed 00:11:45.135 0000:00:06.0: build_io_request_6 test passed 00:11:45.135 0000:00:06.0: build_io_request_7 test passed 00:11:45.135 0000:00:06.0: build_io_request_10 test passed 00:11:45.135 Cleaning up... 00:11:45.135 00:11:45.135 real 0m0.491s 00:11:45.135 user 0m0.024s 00:11:45.135 sys 0m0.467s 00:11:45.135 06:03:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.135 06:03:53 -- common/autotest_common.sh@10 -- # set +x 00:11:45.135 ************************************ 00:11:45.135 END TEST nvme_sgl 00:11:45.135 ************************************ 00:11:45.135 06:03:53 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:45.135 06:03:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:45.136 06:03:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:45.136 06:03:53 -- common/autotest_common.sh@10 -- # set +x 00:11:45.136 ************************************ 00:11:45.136 START TEST nvme_e2edp 00:11:45.136 ************************************ 00:11:45.136 06:03:53 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:11:45.701 EAL: TSC is not safe to use in SMP mode 00:11:45.701 EAL: TSC is not invariant 00:11:45.701 [2024-05-13 06:03:54.006526] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:45.959 NVMe Write/Read with End-to-End data protection test 00:11:45.959 Attaching to 0000:00:06.0 00:11:45.959 Attached to 0000:00:06.0 00:11:45.959 Cleaning up... 
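Each test in this log is wrapped by the run_test helper from test/common/autotest_common.sh, which produces the banner pairs and the real/user/sys timings seen throughout. A hedged sketch, not the real implementation (which also manages xtrace state), of the observable behavior:

    # Approximation only: name the test, time it, and fence the output with
    # the START/END banners visible in this log.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }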
00:11:45.959 00:11:45.959 real 0m0.788s 00:11:45.959 user 0m0.008s 00:11:45.959 sys 0m0.780s 00:11:45.959 06:03:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.959 06:03:54 -- common/autotest_common.sh@10 -- # set +x 00:11:45.959 ************************************ 00:11:45.959 END TEST nvme_e2edp 00:11:45.959 ************************************ 00:11:45.960 06:03:54 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:45.960 06:03:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:45.960 06:03:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:45.960 06:03:54 -- common/autotest_common.sh@10 -- # set +x 00:11:45.960 ************************************ 00:11:45.960 START TEST nvme_reserve 00:11:45.960 ************************************ 00:11:45.960 06:03:54 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:11:46.218 EAL: TSC is not safe to use in SMP mode 00:11:46.218 EAL: TSC is not invariant 00:11:46.478 [2024-05-13 06:03:54.539430] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:46.478 ===================================================== 00:11:46.478 NVMe Controller at PCI bus 0, device 6, function 0 00:11:46.478 ===================================================== 00:11:46.478 Reservations: Not Supported 00:11:46.478 Reservation test passed 00:11:46.478 00:11:46.478 real 0m0.481s 00:11:46.478 user 0m0.008s 00:11:46.478 sys 0m0.473s 00:11:46.478 06:03:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.478 06:03:54 -- common/autotest_common.sh@10 -- # set +x 00:11:46.478 ************************************ 00:11:46.478 END TEST nvme_reserve 00:11:46.478 ************************************ 00:11:46.478 06:03:54 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:46.478 06:03:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:46.478 06:03:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:46.478 06:03:54 -- common/autotest_common.sh@10 -- # set +x 00:11:46.478 ************************************ 00:11:46.478 START TEST nvme_err_injection 00:11:46.478 ************************************ 00:11:46.478 06:03:54 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:11:47.046 EAL: TSC is not safe to use in SMP mode 00:11:47.046 EAL: TSC is not invariant 00:11:47.046 [2024-05-13 06:03:55.362936] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:47.304 NVMe Error Injection test 00:11:47.304 Attaching to 0000:00:06.0 00:11:47.304 Attached to 0000:00:06.0 00:11:47.304 0000:00:06.0: get features failed as expected 00:11:47.304 0000:00:06.0: get features successfully as expected 00:11:47.304 0000:00:06.0: read failed as expected 00:11:47.304 0000:00:06.0: read successfully as expected 00:11:47.304 Cleaning up... 
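The four "as expected" lines above are the whole point of the error-injection test: judging from its output, it arms error injection, verifies that a get-features admin command and a read now fail, then disarms injection and verifies both succeed again. It takes no arguments and can be re-run standalone with the path recorded in this log:

    # Standalone re-run of the error-injection test (path from this log)
    /usr/home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection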
00:11:47.304 00:11:47.304 real 0m0.789s 00:11:47.304 user 0m0.023s 00:11:47.304 sys 0m0.765s 00:11:47.304 06:03:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.304 06:03:55 -- common/autotest_common.sh@10 -- # set +x 00:11:47.304 ************************************ 00:11:47.304 END TEST nvme_err_injection 00:11:47.304 ************************************ 00:11:47.304 06:03:55 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:47.304 06:03:55 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:47.304 06:03:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:47.304 06:03:55 -- common/autotest_common.sh@10 -- # set +x 00:11:47.304 ************************************ 00:11:47.304 START TEST nvme_overhead 00:11:47.304 ************************************ 00:11:47.304 06:03:55 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:47.873 EAL: TSC is not safe to use in SMP mode 00:11:47.873 EAL: TSC is not invariant 00:11:47.873 [2024-05-13 06:03:55.905276] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:48.808 Initializing NVMe Controllers 00:11:48.808 Attaching to 0000:00:06.0 00:11:48.808 Attached to 0000:00:06.0 00:11:48.808 Initialization complete. Launching workers. 00:11:48.808 submit (in ns) avg, min, max = 8690.8, 4709.3, 205389.1 00:11:48.808 complete (in ns) avg, min, max = 10137.5, 4066.9, 55269.8 00:11:48.808 00:11:48.808 Submit histogram 00:11:48.808 ================ 00:11:48.808 Range in us Cumulative Count 00:11:48.808 4.686 - 4.714: 0.0135% ( 1) 00:11:48.808 4.742 - 4.769: 0.0271% ( 1) 00:11:48.808 4.937 - 4.965: 0.0406% ( 1) 00:11:48.808 5.104 - 5.132: 0.0541% ( 1) 00:11:48.808 5.606 - 5.634: 0.0677% ( 1) 00:11:48.808 6.555 - 6.582: 0.0812% ( 1) 00:11:48.808 6.694 - 6.722: 0.0947% ( 1) 00:11:48.808 6.973 - 7.001: 0.1083% ( 1) 00:11:48.808 7.001 - 7.029: 0.1218% ( 1) 00:11:48.808 7.029 - 7.057: 0.1759% ( 4) 00:11:48.808 7.057 - 7.084: 0.2300% ( 4) 00:11:48.808 7.084 - 7.112: 0.3112% ( 6) 00:11:48.808 7.112 - 7.140: 0.3924% ( 6) 00:11:48.808 7.140 - 7.196: 0.8931% ( 37) 00:11:48.808 7.196 - 7.252: 1.7185% ( 61) 00:11:48.808 7.252 - 7.308: 2.9635% ( 92) 00:11:48.808 7.308 - 7.363: 4.8173% ( 137) 00:11:48.808 7.363 - 7.419: 7.2260% ( 178) 00:11:48.808 7.419 - 7.475: 10.0541% ( 209) 00:11:48.808 7.475 - 7.531: 13.1258% ( 227) 00:11:48.808 7.531 - 7.587: 16.9283% ( 281) 00:11:48.808 7.587 - 7.642: 20.8254% ( 288) 00:11:48.808 7.642 - 7.698: 24.8850% ( 300) 00:11:48.808 7.698 - 7.754: 28.5792% ( 273) 00:11:48.808 7.754 - 7.810: 32.4493% ( 286) 00:11:48.808 7.810 - 7.865: 35.6157% ( 234) 00:11:48.808 7.865 - 7.921: 38.7415% ( 231) 00:11:48.808 7.921 - 7.977: 41.5697% ( 209) 00:11:48.808 7.977 - 8.033: 43.9513% ( 176) 00:11:48.808 8.033 - 8.089: 45.9405% ( 147) 00:11:48.808 8.089 - 8.144: 47.9973% ( 152) 00:11:48.808 8.144 - 8.200: 49.9594% ( 145) 00:11:48.808 8.200 - 8.256: 51.4885% ( 113) 00:11:48.808 8.256 - 8.312: 53.0853% ( 118) 00:11:48.808 8.312 - 8.367: 54.6955% ( 119) 00:11:48.808 8.367 - 8.423: 56.2382% ( 114) 00:11:48.808 8.423 - 8.479: 57.4154% ( 87) 00:11:48.808 8.479 - 8.535: 58.7145% ( 96) 00:11:48.808 8.535 - 8.591: 60.0677% ( 100) 00:11:48.808 8.591 - 8.646: 61.2855% ( 90) 00:11:48.808 8.646 - 8.702: 62.5034% ( 90) 00:11:48.808 8.702 - 8.758: 63.7348% ( 91) 00:11:48.808 8.758 - 8.814: 65.0744% ( 99) 00:11:48.808 8.814 - 8.870: 
66.1434% ( 79) 00:11:48.808 8.870 - 8.925: 67.1313% ( 73) 00:11:48.808 8.925 - 8.981: 68.1461% ( 75) 00:11:48.808 8.981 - 9.037: 69.4317% ( 95) 00:11:48.808 9.037 - 9.093: 71.0419% ( 119) 00:11:48.808 9.093 - 9.148: 73.7212% ( 198) 00:11:48.808 9.148 - 9.204: 76.8471% ( 231) 00:11:48.808 9.204 - 9.260: 79.7158% ( 212) 00:11:48.808 9.260 - 9.316: 82.3410% ( 194) 00:11:48.808 9.316 - 9.372: 84.1137% ( 131) 00:11:48.808 9.372 - 9.427: 85.1827% ( 79) 00:11:48.808 9.427 - 9.483: 86.1434% ( 71) 00:11:48.808 9.483 - 9.539: 86.9959% ( 63) 00:11:48.808 9.539 - 9.595: 87.7808% ( 58) 00:11:48.808 9.595 - 9.650: 88.4980% ( 53) 00:11:48.808 9.650 - 9.706: 88.9175% ( 31) 00:11:48.808 9.706 - 9.762: 89.4046% ( 36) 00:11:48.808 9.762 - 9.818: 89.9188% ( 38) 00:11:48.808 9.818 - 9.874: 90.3383% ( 31) 00:11:48.808 9.874 - 9.929: 90.7848% ( 33) 00:11:48.808 9.929 - 9.985: 91.1637% ( 28) 00:11:48.808 9.985 - 10.041: 91.4479% ( 21) 00:11:48.808 10.041 - 10.097: 92.0027% ( 41) 00:11:48.808 10.097 - 10.153: 92.4493% ( 33) 00:11:48.808 10.153 - 10.208: 92.8011% ( 26) 00:11:48.808 10.208 - 10.264: 93.1664% ( 27) 00:11:48.808 10.264 - 10.320: 93.5047% ( 25) 00:11:48.808 10.320 - 10.376: 93.9107% ( 30) 00:11:48.808 10.376 - 10.431: 94.2625% ( 26) 00:11:48.808 10.431 - 10.487: 94.5061% ( 18) 00:11:48.808 10.487 - 10.543: 94.8309% ( 24) 00:11:48.808 10.543 - 10.599: 95.1556% ( 24) 00:11:48.808 10.599 - 10.655: 95.3721% ( 16) 00:11:48.808 10.655 - 10.710: 95.6022% ( 17) 00:11:48.808 10.710 - 10.766: 95.7916% ( 14) 00:11:48.808 10.766 - 10.822: 95.9134% ( 9) 00:11:48.808 10.822 - 10.878: 95.9675% ( 4) 00:11:48.808 10.878 - 10.933: 96.0893% ( 9) 00:11:48.808 10.933 - 10.989: 96.1976% ( 8) 00:11:48.808 10.989 - 11.045: 96.2652% ( 5) 00:11:48.808 11.045 - 11.101: 96.3870% ( 9) 00:11:48.808 11.157 - 11.212: 96.5223% ( 10) 00:11:48.808 11.212 - 11.268: 96.6441% ( 9) 00:11:48.808 11.268 - 11.324: 96.6982% ( 4) 00:11:48.808 11.324 - 11.380: 96.7253% ( 2) 00:11:48.808 11.380 - 11.436: 96.7524% ( 2) 00:11:48.808 11.436 - 11.491: 96.7794% ( 2) 00:11:48.808 11.491 - 11.547: 96.8065% ( 2) 00:11:48.808 11.547 - 11.603: 96.8471% ( 3) 00:11:48.808 11.603 - 11.659: 96.8877% ( 3) 00:11:48.808 11.659 - 11.714: 96.9283% ( 3) 00:11:48.808 11.714 - 11.770: 97.0095% ( 6) 00:11:48.808 11.770 - 11.826: 97.0365% ( 2) 00:11:48.808 11.826 - 11.882: 97.0907% ( 4) 00:11:48.808 11.882 - 11.938: 97.1313% ( 3) 00:11:48.808 11.938 - 11.993: 97.1583% ( 2) 00:11:48.808 11.993 - 12.049: 97.1719% ( 1) 00:11:48.808 12.049 - 12.105: 97.2395% ( 5) 00:11:48.808 12.105 - 12.161: 97.2801% ( 3) 00:11:48.808 12.161 - 12.217: 97.3207% ( 3) 00:11:48.808 12.217 - 12.272: 97.3884% ( 5) 00:11:48.808 12.272 - 12.328: 97.4290% ( 3) 00:11:48.808 12.328 - 12.384: 97.4831% ( 4) 00:11:48.808 12.384 - 12.440: 97.4966% ( 1) 00:11:48.808 12.440 - 12.495: 97.5507% ( 4) 00:11:48.808 12.495 - 12.551: 97.5778% ( 2) 00:11:48.808 12.551 - 12.607: 97.6184% ( 3) 00:11:48.808 12.607 - 12.663: 97.6455% ( 2) 00:11:48.808 12.663 - 12.719: 97.6725% ( 2) 00:11:48.808 12.719 - 12.774: 97.7267% ( 4) 00:11:48.808 12.774 - 12.830: 97.7537% ( 2) 00:11:48.808 12.830 - 12.886: 97.7673% ( 1) 00:11:48.808 12.886 - 12.942: 97.7943% ( 2) 00:11:48.808 12.942 - 12.997: 97.8620% ( 5) 00:11:48.808 12.997 - 13.053: 97.8890% ( 2) 00:11:48.808 13.053 - 13.109: 97.9296% ( 3) 00:11:48.808 13.109 - 13.165: 97.9432% ( 1) 00:11:48.808 13.165 - 13.221: 97.9567% ( 1) 00:11:48.808 13.221 - 13.276: 97.9973% ( 3) 00:11:48.808 13.332 - 13.388: 98.0108% ( 1) 00:11:48.808 13.444 - 13.500: 98.0379% ( 2) 00:11:48.808 13.500 
- 13.555: 98.0785% ( 3) 00:11:48.808 13.723 - 13.778: 98.0920% ( 1) 00:11:48.808 13.778 - 13.834: 98.1191% ( 2) 00:11:48.808 13.834 - 13.890: 98.1461% ( 2) 00:11:48.808 14.002 - 14.057: 98.1597% ( 1) 00:11:48.808 14.057 - 14.113: 98.1732% ( 1) 00:11:48.808 14.113 - 14.169: 98.2138% ( 3) 00:11:48.808 14.392 - 14.504: 98.2273% ( 1) 00:11:48.808 14.504 - 14.615: 98.2409% ( 1) 00:11:48.808 14.615 - 14.727: 98.2544% ( 1) 00:11:48.808 15.508 - 15.619: 98.2679% ( 1) 00:11:48.808 16.066 - 16.177: 98.2950% ( 2) 00:11:48.808 16.289 - 16.400: 98.3221% ( 2) 00:11:48.808 16.400 - 16.512: 98.3762% ( 4) 00:11:48.808 16.512 - 16.623: 98.4032% ( 2) 00:11:48.808 16.623 - 16.735: 98.4303% ( 2) 00:11:48.808 16.735 - 16.847: 98.4574% ( 2) 00:11:48.808 16.847 - 16.958: 98.5115% ( 4) 00:11:48.808 16.958 - 17.070: 98.5386% ( 2) 00:11:48.808 17.070 - 17.181: 98.6062% ( 5) 00:11:48.808 17.181 - 17.293: 98.6739% ( 5) 00:11:48.808 17.293 - 17.404: 98.7551% ( 6) 00:11:48.808 17.404 - 17.516: 98.7686% ( 1) 00:11:48.808 17.516 - 17.627: 98.7821% ( 1) 00:11:48.808 17.627 - 17.739: 98.8227% ( 3) 00:11:48.808 17.739 - 17.851: 98.8633% ( 3) 00:11:48.808 17.851 - 17.962: 98.8769% ( 1) 00:11:48.808 17.962 - 18.074: 98.8904% ( 1) 00:11:48.808 18.074 - 18.185: 98.9039% ( 1) 00:11:48.808 18.185 - 18.297: 98.9175% ( 1) 00:11:48.808 18.297 - 18.408: 98.9445% ( 2) 00:11:48.808 18.520 - 18.632: 98.9716% ( 2) 00:11:48.808 18.632 - 18.743: 98.9851% ( 1) 00:11:48.808 18.743 - 18.855: 98.9986% ( 1) 00:11:48.808 18.855 - 18.966: 99.0122% ( 1) 00:11:48.808 18.966 - 19.078: 99.0257% ( 1) 00:11:48.808 19.078 - 19.189: 99.0392% ( 1) 00:11:48.808 19.301 - 19.413: 99.0528% ( 1) 00:11:48.808 19.413 - 19.524: 99.0934% ( 3) 00:11:48.808 19.524 - 19.636: 99.1069% ( 1) 00:11:48.808 19.636 - 19.747: 99.1475% ( 3) 00:11:48.808 19.859 - 19.970: 99.1610% ( 1) 00:11:48.808 19.970 - 20.082: 99.2016% ( 3) 00:11:48.808 20.082 - 20.193: 99.2287% ( 2) 00:11:48.808 20.193 - 20.305: 99.2422% ( 1) 00:11:48.808 20.305 - 20.417: 99.2693% ( 2) 00:11:48.808 20.417 - 20.528: 99.3099% ( 3) 00:11:48.808 20.528 - 20.640: 99.3234% ( 1) 00:11:48.808 20.640 - 20.751: 99.3911% ( 5) 00:11:48.808 20.751 - 20.863: 99.4452% ( 4) 00:11:48.808 20.863 - 20.974: 99.4587% ( 1) 00:11:48.808 20.974 - 21.086: 99.4858% ( 2) 00:11:48.808 21.086 - 21.198: 99.5264% ( 3) 00:11:48.808 21.198 - 21.309: 99.5670% ( 3) 00:11:48.808 21.309 - 21.421: 99.6211% ( 4) 00:11:48.808 21.421 - 21.532: 99.6346% ( 1) 00:11:48.808 21.532 - 21.644: 99.6482% ( 1) 00:11:48.808 21.644 - 21.755: 99.6752% ( 2) 00:11:48.808 21.755 - 21.867: 99.7158% ( 3) 00:11:48.808 21.867 - 21.979: 99.7294% ( 1) 00:11:48.808 21.979 - 22.090: 99.7700% ( 3) 00:11:48.808 22.090 - 22.202: 99.8106% ( 3) 00:11:48.808 22.202 - 22.313: 99.8376% ( 2) 00:11:48.808 22.313 - 22.425: 99.8647% ( 2) 00:11:48.808 22.983 - 23.094: 99.8782% ( 1) 00:11:48.808 23.206 - 23.317: 99.8917% ( 1) 00:11:48.808 23.875 - 23.987: 99.9188% ( 2) 00:11:48.808 23.987 - 24.098: 99.9459% ( 2) 00:11:48.808 24.098 - 24.210: 99.9594% ( 1) 00:11:48.808 31.908 - 32.131: 99.9729% ( 1) 00:11:48.808 35.701 - 35.924: 99.9865% ( 1) 00:11:48.808 205.282 - 206.174: 100.0000% ( 1) 00:11:48.808 00:11:48.808 Complete histogram 00:11:48.808 ================== 00:11:48.808 Range in us Cumulative Count 00:11:48.808 4.044 - 4.072: 0.0135% ( 1) 00:11:48.808 4.128 - 4.156: 0.0271% ( 1) 00:11:48.808 4.184 - 4.212: 0.0541% ( 2) 00:11:48.808 4.295 - 4.323: 0.0677% ( 1) 00:11:48.808 4.323 - 4.351: 0.0947% ( 2) 00:11:48.808 4.379 - 4.407: 0.1083% ( 1) 00:11:48.808 4.435 - 4.463: 0.1353% ( 
2) 00:11:48.808 4.463 - 4.491: 0.1624% ( 2) 00:11:48.808 4.491 - 4.518: 0.1894% ( 2) 00:11:48.808 4.518 - 4.546: 0.2030% ( 1) 00:11:48.808 4.546 - 4.574: 0.2436% ( 3) 00:11:48.808 4.602 - 4.630: 0.2571% ( 1) 00:11:48.808 4.630 - 4.658: 0.2706% ( 1) 00:11:48.808 4.658 - 4.686: 0.2842% ( 1) 00:11:48.808 4.686 - 4.714: 0.2977% ( 1) 00:11:48.808 4.714 - 4.742: 0.3112% ( 1) 00:11:48.808 4.769 - 4.797: 0.3248% ( 1) 00:11:48.808 4.825 - 4.853: 0.3518% ( 2) 00:11:48.808 4.853 - 4.881: 0.4195% ( 5) 00:11:48.808 4.909 - 4.937: 0.4871% ( 5) 00:11:48.808 4.937 - 4.965: 0.5683% ( 6) 00:11:48.808 4.965 - 4.993: 0.6631% ( 7) 00:11:48.808 4.993 - 5.020: 0.7578% ( 7) 00:11:48.808 5.020 - 5.048: 0.9202% ( 12) 00:11:48.808 5.048 - 5.076: 1.0825% ( 12) 00:11:48.808 5.076 - 5.104: 1.3126% ( 17) 00:11:48.808 5.104 - 5.132: 1.6103% ( 22) 00:11:48.808 5.132 - 5.160: 1.8674% ( 19) 00:11:48.808 5.160 - 5.188: 2.1245% ( 19) 00:11:48.808 5.188 - 5.216: 2.4763% ( 26) 00:11:48.808 5.216 - 5.244: 2.8958% ( 31) 00:11:48.808 5.244 - 5.272: 3.4641% ( 42) 00:11:48.808 5.272 - 5.299: 4.0054% ( 40) 00:11:48.808 5.299 - 5.327: 4.6820% ( 50) 00:11:48.808 5.327 - 5.355: 5.4127% ( 54) 00:11:48.808 5.355 - 5.383: 5.9269% ( 38) 00:11:48.808 5.383 - 5.411: 6.4953% ( 42) 00:11:48.808 5.411 - 5.439: 6.9147% ( 31) 00:11:48.808 5.439 - 5.467: 7.2530% ( 25) 00:11:48.808 5.467 - 5.495: 7.4831% ( 17) 00:11:48.808 5.495 - 5.523: 7.8214% ( 25) 00:11:48.808 5.523 - 5.550: 7.9838% ( 12) 00:11:48.808 5.550 - 5.578: 8.1191% ( 10) 00:11:48.808 5.578 - 5.606: 8.3356% ( 16) 00:11:48.808 5.606 - 5.634: 8.4980% ( 12) 00:11:48.808 5.634 - 5.662: 8.6604% ( 12) 00:11:48.808 5.662 - 5.690: 9.1610% ( 37) 00:11:48.808 5.690 - 5.718: 9.9188% ( 56) 00:11:48.808 5.718 - 5.746: 10.7442% ( 61) 00:11:48.808 5.746 - 5.774: 12.8146% ( 153) 00:11:48.808 5.774 - 5.801: 15.3451% ( 187) 00:11:48.808 5.801 - 5.829: 16.3464% ( 74) 00:11:48.808 5.829 - 5.857: 18.2003% ( 137) 00:11:48.808 5.857 - 5.885: 19.2152% ( 75) 00:11:48.808 5.885 - 5.913: 19.5805% ( 27) 00:11:48.808 5.913 - 5.941: 19.8512% ( 20) 00:11:48.808 5.941 - 5.969: 19.9865% ( 10) 00:11:48.808 5.969 - 5.997: 20.2165% ( 17) 00:11:48.808 5.997 - 6.025: 20.3654% ( 11) 00:11:48.808 6.025 - 6.052: 20.6089% ( 18) 00:11:48.808 6.052 - 6.080: 20.8796% ( 20) 00:11:48.808 6.080 - 6.108: 21.0419% ( 12) 00:11:48.808 6.108 - 6.136: 21.2585% ( 16) 00:11:48.808 6.136 - 6.164: 21.7185% ( 34) 00:11:48.808 6.164 - 6.192: 22.3004% ( 43) 00:11:48.808 6.192 - 6.220: 22.4493% ( 11) 00:11:48.808 6.220 - 6.248: 22.8823% ( 32) 00:11:48.808 6.248 - 6.276: 23.0311% ( 11) 00:11:48.808 6.276 - 6.303: 23.2070% ( 13) 00:11:48.808 6.303 - 6.331: 23.3153% ( 8) 00:11:48.808 6.331 - 6.359: 23.3694% ( 4) 00:11:48.808 6.359 - 6.387: 23.4235% ( 4) 00:11:48.808 6.387 - 6.415: 23.4777% ( 4) 00:11:48.808 6.415 - 6.443: 23.5318% ( 4) 00:11:48.808 6.443 - 6.471: 23.6265% ( 7) 00:11:48.808 6.471 - 6.499: 23.9783% ( 26) 00:11:48.808 6.499 - 6.527: 24.4114% ( 32) 00:11:48.808 6.527 - 6.555: 24.4520% ( 3) 00:11:48.808 6.555 - 6.582: 24.4926% ( 3) 00:11:48.808 6.582 - 6.610: 24.5467% ( 4) 00:11:48.808 6.666 - 6.694: 24.5737% ( 2) 00:11:48.808 6.778 - 6.806: 24.6008% ( 2) 00:11:48.808 6.806 - 6.833: 24.6143% ( 1) 00:11:48.808 6.833 - 6.861: 24.6279% ( 1) 00:11:48.808 7.029 - 7.057: 24.6414% ( 1) 00:11:48.808 7.140 - 7.196: 24.6549% ( 1) 00:11:48.808 7.196 - 7.252: 24.6820% ( 2) 00:11:48.808 7.308 - 7.363: 24.7091% ( 2) 00:11:48.808 7.419 - 7.475: 24.7361% ( 2) 00:11:48.808 7.475 - 7.531: 24.7632% ( 2) 00:11:48.808 7.531 - 7.587: 24.8038% ( 3) 00:11:48.808 
7.587 - 7.642: 24.8579% ( 4) 00:11:48.808 7.642 - 7.698: 24.8985% ( 3) 00:11:48.808 7.698 - 7.754: 24.9526% ( 4) 00:11:48.808 7.754 - 7.810: 24.9662% ( 1) 00:11:48.808 7.810 - 7.865: 24.9932% ( 2) 00:11:48.808 7.921 - 7.977: 25.0068% ( 1) 00:11:48.808 7.977 - 8.033: 25.0338% ( 2) 00:11:48.808 8.033 - 8.089: 25.0744% ( 3) 00:11:48.808 8.089 - 8.144: 25.1150% ( 3) 00:11:48.808 8.144 - 8.200: 25.1556% ( 3) 00:11:48.808 8.200 - 8.256: 25.1962% ( 3) 00:11:48.808 8.256 - 8.312: 25.2233% ( 2) 00:11:48.808 8.312 - 8.367: 25.3045% ( 6) 00:11:48.808 8.367 - 8.423: 25.3857% ( 6) 00:11:48.808 8.423 - 8.479: 25.4398% ( 4) 00:11:48.808 8.479 - 8.535: 25.4533% ( 1) 00:11:48.808 8.535 - 8.591: 25.4668% ( 1) 00:11:48.808 8.591 - 8.646: 25.4804% ( 1) 00:11:48.808 8.758 - 8.814: 25.4939% ( 1) 00:11:48.808 8.870 - 8.925: 25.5210% ( 2) 00:11:48.808 9.037 - 9.093: 25.5616% ( 3) 00:11:48.808 9.093 - 9.148: 25.5751% ( 1) 00:11:48.808 9.204 - 9.260: 25.5886% ( 1) 00:11:48.808 9.260 - 9.316: 25.6022% ( 1) 00:11:48.808 9.316 - 9.372: 25.6292% ( 2) 00:11:48.808 9.372 - 9.427: 25.6698% ( 3) 00:11:48.808 9.427 - 9.483: 25.6834% ( 1) 00:11:48.808 9.483 - 9.539: 25.7104% ( 2) 00:11:48.808 9.539 - 9.595: 25.7240% ( 1) 00:11:48.808 9.818 - 9.874: 25.7375% ( 1) 00:11:48.808 10.208 - 10.264: 25.7510% ( 1) 00:11:48.808 10.320 - 10.376: 25.7645% ( 1) 00:11:48.808 10.376 - 10.431: 25.7916% ( 2) 00:11:48.808 10.431 - 10.487: 25.8863% ( 7) 00:11:48.808 10.487 - 10.543: 26.0893% ( 15) 00:11:48.808 10.543 - 10.599: 26.5900% ( 37) 00:11:48.808 10.599 - 10.655: 27.1719% ( 43) 00:11:48.808 10.655 - 10.710: 28.2003% ( 76) 00:11:48.808 10.710 - 10.766: 29.3911% ( 88) 00:11:48.808 10.766 - 10.822: 31.1637% ( 131) 00:11:48.808 10.822 - 10.878: 33.7348% ( 190) 00:11:48.808 10.878 - 10.933: 36.5629% ( 209) 00:11:48.808 10.933 - 10.989: 39.4723% ( 215) 00:11:48.808 10.989 - 11.045: 42.7064% ( 239) 00:11:48.808 11.045 - 11.101: 45.8322% ( 231) 00:11:48.808 11.101 - 11.157: 49.3369% ( 259) 00:11:48.808 11.157 - 11.212: 52.5710% ( 239) 00:11:48.808 11.212 - 11.268: 56.2517% ( 272) 00:11:48.808 11.268 - 11.324: 59.7564% ( 259) 00:11:48.808 11.324 - 11.380: 63.4641% ( 274) 00:11:48.808 11.380 - 11.436: 66.7118% ( 240) 00:11:48.808 11.436 - 11.491: 70.5413% ( 283) 00:11:48.808 11.491 - 11.547: 73.3694% ( 209) 00:11:48.808 11.547 - 11.603: 76.1705% ( 207) 00:11:48.808 11.603 - 11.659: 78.6604% ( 184) 00:11:48.808 11.659 - 11.714: 81.1231% ( 182) 00:11:48.809 11.714 - 11.770: 83.0717% ( 144) 00:11:48.809 11.770 - 11.826: 84.9662% ( 140) 00:11:48.809 11.826 - 11.882: 86.5223% ( 115) 00:11:48.809 11.882 - 11.938: 87.8078% ( 95) 00:11:48.809 11.938 - 11.993: 89.1069% ( 96) 00:11:48.809 11.993 - 12.049: 90.2571% ( 85) 00:11:48.809 12.049 - 12.105: 91.2043% ( 70) 00:11:48.809 12.105 - 12.161: 92.1110% ( 67) 00:11:48.809 12.161 - 12.217: 92.7876% ( 50) 00:11:48.809 12.217 - 12.272: 93.4777% ( 51) 00:11:48.809 12.272 - 12.328: 94.0595% ( 43) 00:11:48.809 12.328 - 12.384: 94.4790% ( 31) 00:11:48.809 12.384 - 12.440: 94.9932% ( 38) 00:11:48.809 12.440 - 12.495: 95.3721% ( 28) 00:11:48.809 12.495 - 12.551: 95.7510% ( 28) 00:11:48.809 12.551 - 12.607: 95.9675% ( 16) 00:11:48.809 12.607 - 12.663: 96.1840% ( 16) 00:11:48.809 12.663 - 12.719: 96.2923% ( 8) 00:11:48.809 12.719 - 12.774: 96.4411% ( 11) 00:11:48.809 12.774 - 12.830: 96.6441% ( 15) 00:11:48.809 12.830 - 12.886: 96.8336% ( 14) 00:11:48.809 12.886 - 12.942: 96.9283% ( 7) 00:11:48.809 12.942 - 12.997: 97.0501% ( 9) 00:11:48.809 12.997 - 13.053: 97.0907% ( 3) 00:11:48.809 13.053 - 13.109: 97.1313% ( 3) 
00:11:48.809 13.109 - 13.165: 97.1989% ( 5) 00:11:48.809 13.165 - 13.221: 97.2530% ( 4) 00:11:48.809 13.221 - 13.276: 97.2801% ( 2) 00:11:48.809 13.332 - 13.388: 97.3072% ( 2) 00:11:48.809 13.388 - 13.444: 97.3342% ( 2) 00:11:48.809 13.444 - 13.500: 97.3748% ( 3) 00:11:48.809 13.500 - 13.555: 97.4290% ( 4) 00:11:48.809 13.723 - 13.778: 97.4425% ( 1) 00:11:48.809 14.002 - 14.057: 97.4560% ( 1) 00:11:48.809 14.057 - 14.113: 97.4696% ( 1) 00:11:48.809 14.113 - 14.169: 97.4831% ( 1) 00:11:48.809 14.169 - 14.225: 97.5237% ( 3) 00:11:48.809 14.504 - 14.615: 97.5643% ( 3) 00:11:48.809 14.615 - 14.727: 97.6319% ( 5) 00:11:48.809 14.727 - 14.838: 97.7267% ( 7) 00:11:48.809 14.838 - 14.950: 97.8078% ( 6) 00:11:48.809 14.950 - 15.061: 97.9026% ( 7) 00:11:48.809 15.061 - 15.173: 98.0379% ( 10) 00:11:48.809 15.173 - 15.285: 98.0920% ( 4) 00:11:48.809 15.285 - 15.396: 98.1597% ( 5) 00:11:48.809 15.396 - 15.508: 98.2003% ( 3) 00:11:48.809 15.508 - 15.619: 98.2409% ( 3) 00:11:48.809 15.619 - 15.731: 98.2544% ( 1) 00:11:48.809 16.066 - 16.177: 98.2679% ( 1) 00:11:48.809 16.177 - 16.289: 98.2950% ( 2) 00:11:48.809 16.289 - 16.400: 98.3897% ( 7) 00:11:48.809 16.400 - 16.512: 98.5250% ( 10) 00:11:48.809 16.512 - 16.623: 98.6333% ( 8) 00:11:48.809 16.623 - 16.735: 98.7415% ( 8) 00:11:48.809 16.735 - 16.847: 98.8498% ( 8) 00:11:48.809 16.847 - 16.958: 98.9039% ( 4) 00:11:48.809 16.958 - 17.070: 98.9445% ( 3) 00:11:48.809 17.070 - 17.181: 98.9716% ( 2) 00:11:48.809 17.181 - 17.293: 99.0122% ( 3) 00:11:48.809 17.293 - 17.404: 99.0663% ( 4) 00:11:48.809 17.404 - 17.516: 99.0934% ( 2) 00:11:48.809 17.516 - 17.627: 99.1204% ( 2) 00:11:48.809 17.627 - 17.739: 99.1475% ( 2) 00:11:48.809 17.739 - 17.851: 99.1746% ( 2) 00:11:48.809 17.851 - 17.962: 99.2016% ( 2) 00:11:48.809 18.074 - 18.185: 99.2287% ( 2) 00:11:48.809 18.185 - 18.297: 99.2693% ( 3) 00:11:48.809 18.297 - 18.408: 99.3505% ( 6) 00:11:48.809 18.408 - 18.520: 99.3911% ( 3) 00:11:48.809 18.520 - 18.632: 99.4452% ( 4) 00:11:48.809 18.855 - 18.966: 99.4587% ( 1) 00:11:48.809 18.966 - 19.078: 99.4723% ( 1) 00:11:48.809 19.189 - 19.301: 99.4858% ( 1) 00:11:48.809 19.859 - 19.970: 99.4993% ( 1) 00:11:48.809 19.970 - 20.082: 99.5264% ( 2) 00:11:48.809 20.082 - 20.193: 99.5399% ( 1) 00:11:48.809 20.193 - 20.305: 99.5670% ( 2) 00:11:48.809 20.305 - 20.417: 99.5805% ( 1) 00:11:48.809 20.417 - 20.528: 99.5940% ( 1) 00:11:48.809 20.528 - 20.640: 99.6076% ( 1) 00:11:48.809 20.751 - 20.863: 99.6346% ( 2) 00:11:48.809 21.309 - 21.421: 99.6482% ( 1) 00:11:48.809 21.421 - 21.532: 99.6752% ( 2) 00:11:48.809 21.979 - 22.090: 99.6888% ( 1) 00:11:48.809 22.090 - 22.202: 99.7023% ( 1) 00:11:48.809 22.202 - 22.313: 99.7158% ( 1) 00:11:48.809 22.425 - 22.536: 99.7294% ( 1) 00:11:48.809 22.648 - 22.760: 99.7429% ( 1) 00:11:48.809 23.206 - 23.317: 99.7564% ( 1) 00:11:48.809 23.540 - 23.652: 99.7835% ( 2) 00:11:48.809 23.652 - 23.764: 99.7970% ( 1) 00:11:48.809 23.875 - 23.987: 99.8512% ( 4) 00:11:48.809 23.987 - 24.098: 99.8917% ( 3) 00:11:48.809 24.545 - 24.656: 99.9053% ( 1) 00:11:48.809 26.664 - 26.776: 99.9188% ( 1) 00:11:48.809 26.887 - 26.999: 99.9323% ( 1) 00:11:48.809 27.668 - 27.780: 99.9459% ( 1) 00:11:48.809 30.346 - 30.569: 99.9594% ( 1) 00:11:48.809 36.594 - 36.817: 99.9729% ( 1) 00:11:48.809 42.618 - 42.841: 99.9865% ( 1) 00:11:48.809 55.114 - 55.337: 100.0000% ( 1) 00:11:48.809 00:11:48.809 00:11:48.809 real 0m1.479s 00:11:48.809 user 0m1.019s 00:11:48.809 sys 0m0.462s 00:11:48.809 06:03:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.809 06:03:56 -- 
common/autotest_common.sh@10 -- # set +x 00:11:48.809 ************************************ 00:11:48.809 END TEST nvme_overhead 00:11:48.809 ************************************ 00:11:48.809 06:03:56 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:48.809 06:03:56 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:48.809 06:03:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:48.809 06:03:56 -- common/autotest_common.sh@10 -- # set +x 00:11:48.809 ************************************ 00:11:48.809 START TEST nvme_arbitration 00:11:48.809 ************************************ 00:11:48.809 06:03:57 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:49.377 EAL: TSC is not safe to use in SMP mode 00:11:49.377 EAL: TSC is not invariant 00:11:49.377 [2024-05-13 06:03:57.427926] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:53.548 Initializing NVMe Controllers 00:11:53.548 Attaching to 0000:00:06.0 00:11:53.548 Attached to 0000:00:06.0 00:11:53.548 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:53.548 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:11:53.548 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:11:53.548 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:11:53.548 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:53.548 /usr/home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:53.548 Initialization complete. Launching workers. 00:11:53.548 Starting thread on core 1 with urgent priority queue 00:11:53.548 Starting thread on core 2 with urgent priority queue 00:11:53.548 Starting thread on core 3 with urgent priority queue 00:11:53.548 Starting thread on core 0 with urgent priority queue 00:11:53.548 QEMU NVMe Ctrl (12340 ) core 0: 6373.00 IO/s 15.69 secs/100000 ios 00:11:53.548 QEMU NVMe Ctrl (12340 ) core 1: 6353.33 IO/s 15.74 secs/100000 ios 00:11:53.548 QEMU NVMe Ctrl (12340 ) core 2: 6353.00 IO/s 15.74 secs/100000 ios 00:11:53.548 QEMU NVMe Ctrl (12340 ) core 3: 6350.67 IO/s 15.75 secs/100000 ios 00:11:53.548 ======================================================== 00:11:53.548 00:11:53.548 00:11:53.548 real 0m4.485s 00:11:53.548 user 0m13.050s 00:11:53.548 sys 0m0.463s 00:11:53.548 06:04:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.548 06:04:01 -- common/autotest_common.sh@10 -- # set +x 00:11:53.548 ************************************ 00:11:53.548 END TEST nvme_arbitration 00:11:53.548 ************************************ 00:11:53.548 06:04:01 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:53.548 06:04:01 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:53.548 06:04:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:53.548 06:04:01 -- common/autotest_common.sh@10 -- # set +x 00:11:53.548 ************************************ 00:11:53.548 START TEST nvme_single_aen 00:11:53.548 ************************************ 00:11:53.548 06:04:01 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:53.548 [2024-05-13 06:04:01.552242] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
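The arbitration example above ran four cores against one controller for three seconds (-t 3), each with an urgent-priority queue; the per-core IO/s lines show the resulting shares. The harness passed only -t 3 -i 0, and the expanded option line the example logs suggests the remaining values are its defaults, so a standalone invocation is roughly:

    cd /usr/home/vagrant/spdk_repo/spdk
    # The -q 64 -s 131072 -w randrw -M 50 ... values printed in the log
    # appear to be the example's defaults; only these two were passed here.
    ./build/examples/arbitration -t 3 -i 0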
00:11:53.548 [2024-05-13 06:04:01.552448] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:53.811 EAL: TSC is not safe to use in SMP mode 00:11:53.811 EAL: TSC is not invariant 00:11:53.811 [2024-05-13 06:04:01.973342] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:53.811 [2024-05-13 06:04:01.980880] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:53.811 Asynchronous Event Request test 00:11:53.811 Attaching to 0000:00:06.0 00:11:53.811 Attached to 0000:00:06.0 00:11:53.811 Reset controller to setup AER completions for this process 00:11:53.811 Registering asynchronous event callbacks... 00:11:53.811 Getting orig temperature thresholds of all controllers 00:11:53.811 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:53.811 Setting all controllers temperature threshold low to trigger AER 00:11:53.811 Waiting for all controllers temperature threshold to be set lower 00:11:53.811 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:53.811 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:11:53.811 Waiting for all controllers to trigger AER and reset threshold 00:11:53.811 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:53.811 Cleaning up... 00:11:53.811 00:11:53.811 real 0m0.485s 00:11:53.811 user 0m0.011s 00:11:53.811 sys 0m0.474s 00:11:53.811 06:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.811 06:04:02 -- common/autotest_common.sh@10 -- # set +x 00:11:53.811 ************************************ 00:11:53.811 END TEST nvme_single_aen 00:11:53.811 ************************************ 00:11:53.811 06:04:02 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:53.811 06:04:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:53.811 06:04:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:53.811 06:04:02 -- common/autotest_common.sh@10 -- # set +x 00:11:53.811 ************************************ 00:11:53.811 START TEST nvme_doorbell_aers 00:11:53.811 ************************************ 00:11:53.811 06:04:02 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:11:53.811 06:04:02 -- nvme/nvme.sh@70 -- # bdfs=() 00:11:53.811 06:04:02 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:53.811 06:04:02 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:53.811 06:04:02 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:53.811 06:04:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:53.811 06:04:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:53.811 06:04:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:53.811 06:04:02 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:53.811 06:04:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:54.074 06:04:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:54.074 06:04:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:54.074 06:04:02 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:54.074 06:04:02 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /usr/home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:54.643 EAL: TSC is not safe to use in SMP mode 00:11:54.643 EAL: TSC is not 
invariant 00:11:54.643 [2024-05-13 06:04:02.897265] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:54.643 Executing: test_write_invalid_db 00:11:54.643 Waiting for AER completion... 00:11:54.643 Asynchronous Event received. 00:11:54.643 Error Information Log Page received. 00:11:54.643 Success: test_write_invalid_db 00:11:54.643 00:11:54.643 Executing: test_invalid_db_write_overflow_sq 00:11:54.643 Waiting for AER completion... 00:11:54.643 Asynchronous Event received. 00:11:54.643 Error Information Log Page received. 00:11:54.643 Success: test_invalid_db_write_overflow_sq 00:11:54.643 00:11:54.643 Executing: test_invalid_db_write_overflow_cq 00:11:54.643 Waiting for AER completion... 00:11:54.643 Asynchronous Event received. 00:11:54.643 Error Information Log Page received. 00:11:54.643 Success: test_invalid_db_write_overflow_cq 00:11:54.643 00:11:54.643 00:11:54.643 real 0m0.871s 00:11:54.643 user 0m0.088s 00:11:54.643 sys 0m0.806s 00:11:54.643 06:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.643 06:04:02 -- common/autotest_common.sh@10 -- # set +x 00:11:54.643 ************************************ 00:11:54.643 END TEST nvme_doorbell_aers 00:11:54.643 ************************************ 00:11:54.900 06:04:03 -- nvme/nvme.sh@97 -- # uname 00:11:54.900 06:04:03 -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:11:54.900 06:04:03 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:54.900 06:04:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:54.900 06:04:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:54.900 06:04:03 -- common/autotest_common.sh@10 -- # set +x 00:11:54.900 ************************************ 00:11:54.900 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:54.900 ************************************ 00:11:54.900 06:04:03 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:54.900 * Looking for test storage... 
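Each of the three doorbell checks above is driven by one helper binary run under a 10-second cap; a minimal sketch of that invocation, copied from the trace (the traddr is this VM's controller, so adjust it for another machine):

# Bound the doorbell AER test to 10s so a missed AER cannot hang the suite;
# --preserve-status propagates the test binary's own exit code through
# timeout instead of timeout's.
timeout --preserve-status 10 \
  "$SPDK_ROOT/test/nvme/doorbell_aers/doorbell_aers" \
  -r 'trtype:PCIe traddr:0000:00:06.0'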
00:11:54.900 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:11:54.900 06:04:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:54.900 06:04:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:54.900 06:04:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:54.900 06:04:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:54.900 06:04:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:54.900 06:04:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:54.900 06:04:03 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:54.900 06:04:03 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:54.900 06:04:03 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:54.900 06:04:03 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:54.900 06:04:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:54.900 06:04:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:54.900 06:04:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:54.900 06:04:03 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:54.900 06:04:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:55.157 06:04:03 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:55.157 06:04:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:55.157 06:04:03 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:11:55.157 06:04:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:11:55.157 06:04:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:11:55.157 06:04:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=54582 00:11:55.157 06:04:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:55.157 06:04:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:55.157 06:04:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 54582 00:11:55.157 06:04:03 -- common/autotest_common.sh@819 -- # '[' -z 54582 ']' 00:11:55.157 06:04:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.157 06:04:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:55.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.157 06:04:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.157 06:04:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:55.157 06:04:03 -- common/autotest_common.sh@10 -- # set +x 00:11:55.157 [2024-05-13 06:04:03.273762] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
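The bdf discovery traced above reduces to one pipeline; a condensed sketch using only the commands visible in this trace:

# gen_nvme.sh emits a JSON bdev config; jq extracts each controller's PCI
# address (traddr). The first entry is the bdf the test targets, here
# 0000:00:06.0.
bdfs=($("$SPDK_ROOT/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
printf '%s\n' "${bdfs[0]}"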
00:11:55.157 [2024-05-13 06:04:03.274176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:11:55.415 EAL: TSC is not safe to use in SMP mode 00:11:55.415 EAL: TSC is not invariant 00:11:55.415 [2024-05-13 06:04:03.697303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.673 [2024-05-13 06:04:03.784914] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:55.673 [2024-05-13 06:04:03.785309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.673 [2024-05-13 06:04:03.785149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.673 [2024-05-13 06:04:03.785239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.673 [2024-05-13 06:04:03.785312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.932 06:04:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:55.932 06:04:04 -- common/autotest_common.sh@852 -- # return 0 00:11:55.932 06:04:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:11:55.932 06:04:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:55.932 06:04:04 -- common/autotest_common.sh@10 -- # set +x 00:11:55.932 [2024-05-13 06:04:04.169857] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:55.932 nvme0n1 00:11:55.932 06:04:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:55.932 06:04:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:55.932 06:04:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:11:55.932 06:04:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:55.932 06:04:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:55.932 06:04:04 -- common/autotest_common.sh@10 -- # set +x 00:11:55.932 true 00:11:55.932 06:04:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:55.932 06:04:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:55.932 06:04:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1715580244 00:11:55.932 06:04:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=54590 00:11:55.932 06:04:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:55.932 06:04:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:55.932 06:04:04 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:58.465 06:04:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:58.465 06:04:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.465 [2024-05-13 06:04:06.302022] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:11:58.465 [2024-05-13 06:04:06.302129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:58.465 [2024-05-13 06:04:06.302148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:58.465 [2024-05-13 06:04:06.302155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:58.465 [2024-05-13 06:04:06.302776] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:58.465 06:04:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:58.465 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 54590 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 54590 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 54590 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:58.465 06:04:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:58.465 06:04:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.465 06:04:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.WW621Q 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.VzyTGV 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 54582 00:11:58.465 06:04:06 -- common/autotest_common.sh@926 -- # '[' -z 54582 ']' 00:11:58.465 06:04:06 -- 
common/autotest_common.sh@930 -- # kill -0 54582 00:11:58.465 06:04:06 -- common/autotest_common.sh@931 -- # uname 00:11:58.465 06:04:06 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:11:58.465 06:04:06 -- common/autotest_common.sh@934 -- # ps -c -o command 54582 00:11:58.465 06:04:06 -- common/autotest_common.sh@934 -- # tail -1 00:11:58.465 06:04:06 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:11:58.465 06:04:06 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:11:58.465 killing process with pid 54582 00:11:58.465 06:04:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54582' 00:11:58.465 06:04:06 -- common/autotest_common.sh@945 -- # kill 54582 00:11:58.465 06:04:06 -- common/autotest_common.sh@950 -- # wait 54582 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:58.465 06:04:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:58.465 00:11:58.465 real 0m3.587s 00:11:58.465 user 0m11.790s 00:11:58.465 sys 0m0.705s 00:11:58.465 06:04:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.465 06:04:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.465 ************************************ 00:11:58.465 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:58.465 ************************************ 00:11:58.465 06:04:06 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:58.465 06:04:06 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:58.465 06:04:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:58.465 06:04:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:58.465 06:04:06 -- common/autotest_common.sh@10 -- # set +x 00:11:58.465 ************************************ 00:11:58.465 START TEST nvme_fio 00:11:58.465 ************************************ 00:11:58.465 06:04:06 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:11:58.465 06:04:06 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/usr/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:58.465 06:04:06 -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:58.465 06:04:06 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:58.465 06:04:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:58.465 06:04:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:58.465 06:04:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:58.465 06:04:06 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:58.465 06:04:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:58.465 06:04:06 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:58.465 06:04:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:11:58.465 06:04:06 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:11:58.465 06:04:06 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:58.465 06:04:06 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:58.465 06:04:06 -- nvme/nvme.sh@35 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:58.465 06:04:06 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:59.035 EAL: TSC is not safe to use in SMP mode 00:11:59.035 EAL: TSC is not invariant 00:11:59.035 [2024-05-13 06:04:07.136228] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:59.035 06:04:07 -- nvme/nvme.sh@38 
-- # grep -q 'Extended Data LBA' 00:11:59.035 06:04:07 -- nvme/nvme.sh@38 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:59.295 EAL: TSC is not safe to use in SMP mode 00:11:59.295 EAL: TSC is not invariant 00:11:59.555 [2024-05-13 06:04:07.616311] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:11:59.555 06:04:07 -- nvme/nvme.sh@41 -- # bs=4096 00:11:59.555 06:04:07 -- nvme/nvme.sh@43 -- # fio_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:59.555 06:04:07 -- common/autotest_common.sh@1339 -- # fio_plugin /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:59.555 06:04:07 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:11:59.555 06:04:07 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:59.555 06:04:07 -- common/autotest_common.sh@1318 -- # local sanitizers 00:11:59.555 06:04:07 -- common/autotest_common.sh@1319 -- # local plugin=/usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:59.555 06:04:07 -- common/autotest_common.sh@1320 -- # shift 00:11:59.555 06:04:07 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:11:59.555 06:04:07 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:59.555 06:04:07 -- common/autotest_common.sh@1324 -- # grep libasan 00:11:59.555 06:04:07 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:59.555 06:04:07 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:59.555 06:04:07 -- common/autotest_common.sh@1324 -- # asan_lib= 00:11:59.555 06:04:07 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:11:59.555 06:04:07 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:11:59.555 06:04:07 -- common/autotest_common.sh@1324 -- # ldd /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:59.555 06:04:07 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:11:59.555 06:04:07 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:11:59.555 06:04:07 -- common/autotest_common.sh@1324 -- # asan_lib= 00:11:59.555 06:04:07 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:11:59.555 06:04:07 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /usr/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:59.555 06:04:07 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /usr/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:59.555 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:59.555 fio-3.35 00:11:59.555 Starting 1 thread 00:12:00.125 EAL: TSC is not safe to use in SMP mode 00:12:00.125 EAL: TSC is not invariant 00:12:00.125 [2024-05-13 06:04:08.200467] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:06.720 00:12:06.720 test: (groupid=0, jobs=1): err= 0: pid=102818: Mon May 13 06:04:14 2024 00:12:06.720 read: IOPS=54.2k, BW=212MiB/s (222MB/s)(423MiB/2001msec) 00:12:06.720 slat (nsec): min=430, max=15788, avg=496.37, stdev=165.67 00:12:06.720 clat (usec): min=314, max=4320, avg=1181.29, stdev=190.28 00:12:06.720 lat (usec): min=315, max=4336, avg=1181.79, stdev=190.35 
00:12:06.720 clat percentiles (usec): 00:12:06.720 | 1.00th=[ 906], 5.00th=[ 963], 10.00th=[ 996], 20.00th=[ 1045], 00:12:06.720 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1172], 60.00th=[ 1205], 00:12:06.720 | 70.00th=[ 1254], 80.00th=[ 1287], 90.00th=[ 1352], 95.00th=[ 1401], 00:12:06.720 | 99.00th=[ 1844], 99.50th=[ 2212], 99.90th=[ 3195], 99.95th=[ 3523], 00:12:06.720 | 99.99th=[ 4178] 00:12:06.720 bw ( KiB/s): min=208640, max=222162, per=99.45%, avg=215438.33, stdev=6761.31, samples=3 00:12:06.720 iops : min=52160, max=55540, avg=53859.33, stdev=1690.08, samples=3 00:12:06.720 write: IOPS=54.0k, BW=211MiB/s (221MB/s)(422MiB/2001msec); 0 zone resets 00:12:06.720 slat (nsec): min=454, max=30202, avg=887.18, stdev=321.01 00:12:06.720 clat (usec): min=324, max=5618, avg=1181.81, stdev=194.05 00:12:06.720 lat (usec): min=324, max=5623, avg=1182.70, stdev=194.13 00:12:06.721 clat percentiles (usec): 00:12:06.721 | 1.00th=[ 914], 5.00th=[ 963], 10.00th=[ 996], 20.00th=[ 1045], 00:12:06.721 | 30.00th=[ 1090], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1205], 00:12:06.721 | 70.00th=[ 1254], 80.00th=[ 1287], 90.00th=[ 1352], 95.00th=[ 1401], 00:12:06.721 | 99.00th=[ 1860], 99.50th=[ 2245], 99.90th=[ 3195], 99.95th=[ 3621], 00:12:06.721 | 99.99th=[ 4146] 00:12:06.721 bw ( KiB/s): min=208146, max=219938, per=99.37%, avg=214632.67, stdev=5984.10, samples=3 00:12:06.721 iops : min=52036, max=54984, avg=53657.67, stdev=1496.03, samples=3 00:12:06.721 lat (usec) : 500=0.04%, 750=0.22%, 1000=10.80% 00:12:06.721 lat (msec) : 2=88.21%, 4=0.72%, 10=0.02% 00:12:06.721 cpu : usr=100.00%, sys=0.00%, ctx=23, majf=0, minf=3 00:12:06.721 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:12:06.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.721 issued rwts: total=108374,108052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.721 00:12:06.721 Run status group 0 (all jobs): 00:12:06.721 READ: bw=212MiB/s (222MB/s), 212MiB/s-212MiB/s (222MB/s-222MB/s), io=423MiB (444MB), run=2001-2001msec 00:12:06.721 WRITE: bw=211MiB/s (221MB/s), 211MiB/s-211MiB/s (221MB/s-221MB/s), io=422MiB (443MB), run=2001-2001msec 00:12:07.290 06:04:15 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:07.290 06:04:15 -- nvme/nvme.sh@46 -- # true 00:12:07.290 00:12:07.290 real 0m8.660s 00:12:07.290 user 0m6.880s 00:12:07.290 sys 0m1.719s 00:12:07.290 06:04:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.290 06:04:15 -- common/autotest_common.sh@10 -- # set +x 00:12:07.291 ************************************ 00:12:07.291 END TEST nvme_fio 00:12:07.291 ************************************ 00:12:07.291 00:12:07.291 real 0m29.498s 00:12:07.291 user 0m36.486s 00:12:07.291 sys 0m10.923s 00:12:07.291 06:04:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.291 06:04:15 -- common/autotest_common.sh@10 -- # set +x 00:12:07.291 ************************************ 00:12:07.291 END TEST nvme 00:12:07.291 ************************************ 00:12:07.291 06:04:15 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:12:07.291 06:04:15 -- spdk/autotest.sh@227 -- # run_test nvme_scc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:07.291 06:04:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:07.291 06:04:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:07.291 06:04:15 -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.291 ************************************ 00:12:07.291 START TEST nvme_scc 00:12:07.291 ************************************ 00:12:07.291 06:04:15 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:07.291 * Looking for test storage... 00:12:07.291 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:12:07.291 06:04:15 -- cuse/common.sh@9 -- # source /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:07.291 06:04:15 -- nvme/functions.sh@7 -- # dirname /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:07.291 06:04:15 -- nvme/functions.sh@7 -- # readlink -f /usr/home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:07.291 06:04:15 -- nvme/functions.sh@7 -- # rootdir=/usr/home/vagrant/spdk_repo/spdk 00:12:07.291 06:04:15 -- nvme/functions.sh@8 -- # source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.291 06:04:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.291 06:04:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.291 06:04:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.291 06:04:15 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:12:07.291 06:04:15 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:12:07.291 06:04:15 -- paths/export.sh@4 -- # export PATH 00:12:07.291 06:04:15 -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:12:07.291 06:04:15 -- nvme/functions.sh@10 -- # ctrls=() 00:12:07.291 06:04:15 -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:07.291 06:04:15 -- nvme/functions.sh@11 -- # nvmes=() 00:12:07.291 06:04:15 -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:07.291 06:04:15 -- nvme/functions.sh@12 -- # bdfs=() 00:12:07.291 06:04:15 -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:07.291 06:04:15 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:07.291 06:04:15 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:07.291 06:04:15 -- nvme/functions.sh@14 -- # nvme_name= 00:12:07.291 06:04:15 -- cuse/common.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:07.291 06:04:15 -- nvme/nvme_scc.sh@12 -- # uname 00:12:07.551 06:04:15 -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:12:07.551 06:04:15 -- nvme/nvme_scc.sh@12 -- # exit 0 00:12:07.551 00:12:07.551 real 0m0.197s 00:12:07.551 user 0m0.120s 00:12:07.551 sys 0m0.120s 00:12:07.551 06:04:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.551 06:04:15 -- common/autotest_common.sh@10 -- # set +x 00:12:07.551 ************************************ 00:12:07.551 END TEST nvme_scc 00:12:07.551 ************************************ 00:12:07.551 06:04:15 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:12:07.551 06:04:15 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:12:07.551 06:04:15 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:12:07.551 06:04:15 -- 
spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:12:07.551 06:04:15 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:12:07.551 06:04:15 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:07.551 06:04:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:07.551 06:04:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:07.551 06:04:15 -- common/autotest_common.sh@10 -- # set +x 00:12:07.551 ************************************ 00:12:07.551 START TEST nvme_rpc 00:12:07.551 ************************************ 00:12:07.551 06:04:15 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:07.551 * Looking for test storage... 00:12:07.551 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:12:07.551 06:04:15 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:07.551 06:04:15 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:07.551 06:04:15 -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:07.551 06:04:15 -- common/autotest_common.sh@1509 -- # local bdfs 00:12:07.551 06:04:15 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:07.551 06:04:15 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:07.551 06:04:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:07.551 06:04:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:12:07.551 06:04:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:07.551 06:04:15 -- common/autotest_common.sh@1499 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:07.551 06:04:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:07.811 06:04:15 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:12:07.811 06:04:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:12:07.811 06:04:15 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:12:07.811 06:04:15 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:12:07.811 06:04:15 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=54798 00:12:07.811 06:04:15 -- nvme/nvme_rpc.sh@15 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:07.811 06:04:15 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:07.811 06:04:15 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 54798 00:12:07.811 06:04:15 -- common/autotest_common.sh@819 -- # '[' -z 54798 ']' 00:12:07.811 06:04:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.811 06:04:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:07.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.811 06:04:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.811 06:04:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:07.811 06:04:15 -- common/autotest_common.sh@10 -- # set +x 00:12:07.811 [2024-05-13 06:04:15.905535] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
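waitforlisten's body is not expanded in this trace; a hypothetical equivalent of the wait it performs (rpc_get_methods is a standard SPDK RPC, used here only as a liveness probe):

# Start the target on two cores (-m 0x3), then poll the default RPC socket
# until it answers. This approximates waitforlisten; the real helper also
# enforces a retry limit (max_retries=100 above).
"$SPDK_ROOT/build/bin/spdk_tgt" -m 0x3 &
spdk_tgt_pid=$!
until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done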
00:12:07.811 [2024-05-13 06:04:15.905780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:08.072 EAL: TSC is not safe to use in SMP mode 00:12:08.072 EAL: TSC is not invariant 00:12:08.072 [2024-05-13 06:04:16.325041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:08.332 [2024-05-13 06:04:16.413923] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:08.332 [2024-05-13 06:04:16.414133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.332 [2024-05-13 06:04:16.414132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.593 06:04:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:08.593 06:04:16 -- common/autotest_common.sh@852 -- # return 0 00:12:08.593 06:04:16 -- nvme/nvme_rpc.sh@21 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:12:08.853 [2024-05-13 06:04:16.961998] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:12:08.853 Nvme0n1 00:12:08.853 06:04:17 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:08.853 06:04:17 -- nvme/nvme_rpc.sh@32 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:09.113 request: 00:12:09.113 { 00:12:09.113 "filename": "non_existing_file", 00:12:09.113 "bdev_name": "Nvme0n1", 00:12:09.113 "method": "bdev_nvme_apply_firmware", 00:12:09.113 "req_id": 1 00:12:09.113 } 00:12:09.113 Got JSON-RPC error response 00:12:09.113 response: 00:12:09.113 { 00:12:09.113 "code": -32603, 00:12:09.113 "message": "open file failed." 
00:12:09.113 } 00:12:09.113 06:04:17 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:09.113 06:04:17 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:09.113 06:04:17 -- nvme/nvme_rpc.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:09.113 06:04:17 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:09.113 06:04:17 -- nvme/nvme_rpc.sh@40 -- # killprocess 54798 00:12:09.113 06:04:17 -- common/autotest_common.sh@926 -- # '[' -z 54798 ']' 00:12:09.113 06:04:17 -- common/autotest_common.sh@930 -- # kill -0 54798 00:12:09.113 06:04:17 -- common/autotest_common.sh@931 -- # uname 00:12:09.113 06:04:17 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:12:09.113 06:04:17 -- common/autotest_common.sh@934 -- # ps -c -o command 54798 00:12:09.113 06:04:17 -- common/autotest_common.sh@934 -- # tail -1 00:12:09.113 06:04:17 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:12:09.113 06:04:17 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:12:09.113 killing process with pid 54798 00:12:09.113 06:04:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54798' 00:12:09.113 06:04:17 -- common/autotest_common.sh@945 -- # kill 54798 00:12:09.113 06:04:17 -- common/autotest_common.sh@950 -- # wait 54798 00:12:09.373 00:12:09.373 real 0m1.953s 00:12:09.373 user 0m3.306s 00:12:09.373 sys 0m0.729s 00:12:09.373 06:04:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.373 06:04:17 -- common/autotest_common.sh@10 -- # set +x 00:12:09.373 ************************************ 00:12:09.373 END TEST nvme_rpc 00:12:09.373 ************************************ 00:12:09.373 06:04:17 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:09.373 06:04:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:09.373 06:04:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:09.373 06:04:17 -- common/autotest_common.sh@10 -- # set +x 00:12:09.373 ************************************ 00:12:09.373 START TEST nvme_rpc_timeouts 00:12:09.373 ************************************ 00:12:09.373 06:04:17 -- common/autotest_common.sh@1104 -- # /usr/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:09.633 * Looking for test storage... 00:12:09.633 * Found test storage at /usr/home/vagrant/spdk_repo/spdk/test/nvme 00:12:09.633 06:04:17 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:09.633 06:04:17 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_54827 00:12:09.633 06:04:17 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_54827 00:12:09.633 06:04:17 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=54854 00:12:09.633 06:04:17 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:09.633 06:04:17 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 54854 00:12:09.633 06:04:17 -- common/autotest_common.sh@819 -- # '[' -z 54854 ']' 00:12:09.633 06:04:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.633 06:04:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:09.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
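killprocess, traced above for pid 54798, follows the same shape on every teardown; a condensed sketch assembled from the commands in this trace (FreeBSD ps syntax, as used here; the real helper also handles the sudo case separately):

# Resolve the command name (ps -c -o command | tail -1), refuse to signal a
# sudo wrapper, then kill the target and reap it. $pid is the spdk_tgt pid.
process_name=$(ps -c -o command "$pid" | tail -1)
if [ "$process_name" != sudo ]; then
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"
fi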
00:12:09.633 06:04:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.633 06:04:17 -- nvme/nvme_rpc_timeouts.sh@24 -- # /usr/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:09.633 06:04:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:09.633 06:04:17 -- common/autotest_common.sh@10 -- # set +x 00:12:09.633 [2024-05-13 06:04:17.846821] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:09.633 [2024-05-13 06:04:17.847088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 ] 00:12:10.202 EAL: TSC is not safe to use in SMP mode 00:12:10.202 EAL: TSC is not invariant 00:12:10.202 [2024-05-13 06:04:18.268382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:10.202 [2024-05-13 06:04:18.354449] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:10.202 [2024-05-13 06:04:18.354637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.202 [2024-05-13 06:04:18.354632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.462 06:04:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:10.462 06:04:18 -- common/autotest_common.sh@852 -- # return 0 00:12:10.462 Checking default timeout settings: 00:12:10.462 06:04:18 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:10.462 06:04:18 -- nvme/nvme_rpc_timeouts.sh@30 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:10.719 Making settings changes with rpc: 00:12:10.719 06:04:19 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:10.719 06:04:19 -- nvme/nvme_rpc_timeouts.sh@34 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:10.976 Check default vs. modified settings: 00:12:10.976 06:04:19 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:12:10.976 06:04:19 -- nvme/nvme_rpc_timeouts.sh@37 -- # /usr/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_54827 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_54827 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:11.235 Setting action_on_timeout is changed as expected. 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_54827 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_54827 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:11.235 Setting timeout_us is changed as expected. 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_54827 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_54827 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:11.235 Setting timeout_admin_us is changed as expected. 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_54827 /tmp/settings_modified_54827 00:12:11.235 06:04:19 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 54854 00:12:11.235 06:04:19 -- common/autotest_common.sh@926 -- # '[' -z 54854 ']' 00:12:11.235 06:04:19 -- common/autotest_common.sh@930 -- # kill -0 54854 00:12:11.235 06:04:19 -- common/autotest_common.sh@931 -- # uname 00:12:11.235 06:04:19 -- common/autotest_common.sh@931 -- # '[' FreeBSD = Linux ']' 00:12:11.235 06:04:19 -- common/autotest_common.sh@934 -- # ps -c -o command 54854 00:12:11.235 06:04:19 -- common/autotest_common.sh@934 -- # tail -1 00:12:11.235 06:04:19 -- common/autotest_common.sh@934 -- # process_name=spdk_tgt 00:12:11.235 06:04:19 -- common/autotest_common.sh@936 -- # '[' spdk_tgt = sudo ']' 00:12:11.235 killing process with pid 54854 00:12:11.235 06:04:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54854' 00:12:11.235 06:04:19 -- common/autotest_common.sh@945 -- # kill 54854 00:12:11.235 06:04:19 -- common/autotest_common.sh@950 -- # wait 54854 00:12:11.495 RPC TIMEOUT SETTING TEST PASSED. 00:12:11.495 06:04:19 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
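The three "changed as expected" checks above reduce to a save/modify/save/compare cycle; a condensed sketch built from the rpc.py calls in this trace (temp file names shortened for readability):

# Snapshot the defaults, apply the test timeouts, snapshot again, then
# compare the three fields; the real test greps one setting at a time, as
# traced above.
rpc="$SPDK_ROOT/scripts/rpc.py"
"$rpc" save_config > /tmp/settings_default
"$rpc" bdev_nvme_set_options --timeout-us=12000000 \
  --timeout-admin-us=24000000 --action-on-timeout=abort
"$rpc" save_config > /tmp/settings_modified
grep -E 'action_on_timeout|timeout_us|timeout_admin_us' \
  /tmp/settings_default /tmp/settings_modified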
00:12:11.495 00:12:11.495 real 0m2.056s 00:12:11.495 user 0m3.758s 00:12:11.495 sys 0m0.663s 00:12:11.495 06:04:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.495 06:04:19 -- common/autotest_common.sh@10 -- # set +x 00:12:11.495 ************************************ 00:12:11.495 END TEST nvme_rpc_timeouts 00:12:11.495 ************************************ 00:12:11.495 06:04:19 -- spdk/autotest.sh@251 -- # '[' 0 -eq 0 ']' 00:12:11.495 06:04:19 -- spdk/autotest.sh@251 -- # uname -s 00:12:11.495 06:04:19 -- spdk/autotest.sh@251 -- # '[' FreeBSD = Linux ']' 00:12:11.495 06:04:19 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:12:11.495 06:04:19 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:12:11.495 06:04:19 -- spdk/autotest.sh@268 -- # timing_exit lib 00:12:11.495 06:04:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:11.495 06:04:19 -- common/autotest_common.sh@10 -- # set +x 00:12:11.754 06:04:19 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:12:11.754 06:04:19 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:12:11.754 06:04:19 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:12:11.754 06:04:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:12:11.754 06:04:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:12:11.754 06:04:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:12:11.754 06:04:19 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:12:11.754 06:04:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:12:11.754 06:04:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:12:11.754 06:04:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:12:11.754 06:04:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:12:11.755 06:04:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:12:11.755 06:04:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:12:11.755 06:04:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:12:11.755 06:04:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:12:11.755 06:04:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:12:11.755 06:04:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:12:11.755 06:04:19 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:12:11.755 06:04:19 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:12:11.755 06:04:19 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:12:11.755 06:04:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:11.755 06:04:19 -- common/autotest_common.sh@10 -- # set +x 00:12:11.755 06:04:19 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:12:11.755 06:04:19 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:12:11.755 06:04:19 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:12:11.755 06:04:19 -- common/autotest_common.sh@10 -- # set +x 00:12:12.322 setup.sh cleanup function not yet supported on FreeBSD 00:12:12.322 06:04:20 -- common/autotest_common.sh@1436 -- # return 0 00:12:12.322 06:04:20 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:12:12.322 06:04:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:12.322 06:04:20 -- common/autotest_common.sh@10 -- # set +x 00:12:12.322 06:04:20 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:12:12.322 06:04:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:12.322 06:04:20 -- common/autotest_common.sh@10 -- # set +x 00:12:12.322 06:04:20 -- spdk/autotest.sh@390 -- # chmod a+r /usr/home/vagrant/spdk_repo/spdk/../output/timing.txt 00:12:12.322 06:04:20 -- spdk/autotest.sh@392 -- # [[ -f /usr/home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:12:12.322 06:04:20 -- spdk/autotest.sh@394 -- # hash 
lcov 00:12:12.322 /usr/home/vagrant/spdk_repo/spdk/autotest.sh: line 394: hash: lcov: not found 00:12:12.581 06:04:20 -- common/autobuild_common.sh@15 -- $ source /usr/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:12.581 06:04:20 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:12:12.581 06:04:20 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.581 06:04:20 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.581 06:04:20 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:12:12.581 06:04:20 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:12:12.581 06:04:20 -- paths/export.sh@4 -- $ export PATH 00:12:12.581 06:04:20 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:12:12.581 06:04:20 -- common/autobuild_common.sh@434 -- $ out=/usr/home/vagrant/spdk_repo/spdk/../output 00:12:12.581 06:04:20 -- common/autobuild_common.sh@435 -- $ date +%s 00:12:12.581 06:04:20 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715580260.XXXXXX 00:12:12.581 06:04:20 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715580260.XXXXXX.YYL2jPFJ 00:12:12.581 06:04:20 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:12:12.581 06:04:20 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:12:12.581 06:04:20 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/' 00:12:12.581 06:04:20 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:12:12.581 06:04:20 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /usr/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /usr/home/vagrant/spdk_repo/spdk/dpdk/ --exclude /usr/home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:12:12.581 06:04:20 -- common/autobuild_common.sh@451 -- $ get_config_params 00:12:12.581 06:04:20 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:12:12.581 06:04:20 -- common/autotest_common.sh@10 -- $ set +x 00:12:12.581 06:04:20 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:12:12.581 06:04:20 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:12:12.581 06:04:20 -- spdk/autopackage.sh@11 -- $ cd /usr/home/vagrant/spdk_repo/spdk 00:12:12.581 06:04:20 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:12:12.581 06:04:20 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:12:12.581 06:04:20 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:12:12.581 06:04:20 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:12:12.581 06:04:20 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:12:12.581 06:04:20 -- common/autotest_common.sh@10 -- $ set +x 00:12:12.581 06:04:20 -- spdk/autopackage.sh@26 -- $ [[ /usr/bin/clang == *clang* ]] 00:12:12.581 06:04:20 -- spdk/autopackage.sh@27 -- $ nproc 00:12:12.581 06:04:20 -- spdk/autopackage.sh@27 -- $ jobs=5 00:12:12.581 06:04:20 -- spdk/autopackage.sh@28 -- $ case "$(uname -s)" in 00:12:12.581 06:04:20 -- spdk/autopackage.sh@28 -- $ 
uname -s 00:12:12.581 06:04:20 -- spdk/autopackage.sh@28 -- $ case "$(uname -s)" in 00:12:12.581 06:04:20 -- spdk/autopackage.sh@32 -- $ export LD=ld.lld 00:12:12.581 06:04:20 -- spdk/autopackage.sh@32 -- $ LD=ld.lld 00:12:12.581 06:04:20 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:12:12.581 06:04:20 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:12:12.581 06:04:20 -- spdk/autopackage.sh@40 -- $ get_config_params 00:12:12.581 06:04:20 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:12:12.581 06:04:20 -- common/autotest_common.sh@10 -- $ set +x 00:12:12.581 06:04:20 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:12:12.581 06:04:20 -- spdk/autopackage.sh@41 -- $ /usr/home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-lto 00:12:12.839 Notice: Vhost, rte_vhost library, virtio, and fuse 00:12:12.839 are only supported on Linux. Turning off default feature. 00:12:12.839 Using default SPDK env in /usr/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:12.839 Using default DPDK in /usr/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:13.096 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:12:13.096 Using 'verbs' RDMA provider 00:12:23.419 Configuring ISA-L (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:12:33.404 Configuring ISA-L-crypto (logfile: /usr/home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:12:33.404 Creating mk/config.mk...done. 00:12:33.404 Creating mk/cc.flags.mk...done. 00:12:33.404 Type 'gmake' to build. 00:12:33.404 06:04:40 -- spdk/autopackage.sh@43 -- $ gmake -j10 00:12:33.404 gmake[1]: Nothing to be done for 'all'. 00:12:33.404 ps: stdin: not a terminal 00:12:37.602 The Meson build system 00:12:37.602 Version: 1.3.1 00:12:37.602 Source dir: /usr/home/vagrant/spdk_repo/spdk/dpdk 00:12:37.602 Build dir: /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:12:37.602 Build type: native build 00:12:37.602 Program cat found: YES (/bin/cat) 00:12:37.602 Project name: DPDK 00:12:37.602 Project version: 23.11.0 00:12:37.602 C compiler for the host machine: /usr/bin/clang (clang 14.0.5 "FreeBSD clang version 14.0.5 (https://github.com/llvm/llvm-project.git llvmorg-14.0.5-0-gc12386ae247c)") 00:12:37.602 C linker for the host machine: /usr/bin/clang ld.lld 14.0.5 00:12:37.602 Host machine cpu family: x86_64 00:12:37.602 Host machine cpu: x86_64 00:12:37.602 Message: ## Building in Developer Mode ## 00:12:37.602 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:12:37.602 Program check-symbols.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:12:37.602 Program options-ibverbs-static.sh found: YES (/usr/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:12:37.602 Program python3 found: YES (/usr/local/bin/python3.9) 00:12:37.602 Program cat found: YES (/bin/cat) 00:12:37.602 Compiler for C supports arguments -march=native: YES 00:12:37.602 Checking for size of "void *" : 8 00:12:37.602 Checking for size of "void *" : 8 (cached) 00:12:37.602 Library m found: YES 00:12:37.602 Library numa found: NO 00:12:37.602 Library fdt found: NO 00:12:37.602 Library execinfo found: YES 00:12:37.602 Has header "execinfo.h" : YES 00:12:37.602 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.0.3 00:12:37.602 Run-time dependency libarchive found: NO (tried pkgconfig) 00:12:37.602 Run-time dependency libbsd found: NO (tried pkgconfig) 00:12:37.602 Run-time 
dependency jansson found: NO (tried pkgconfig) 00:12:37.602 Run-time dependency openssl found: YES 3.0.13 00:12:37.602 Run-time dependency libpcap found: NO (tried pkgconfig) 00:12:37.602 Library pcap found: YES 00:12:37.602 Has header "pcap.h" with dependency -lpcap: YES 00:12:37.602 Compiler for C supports arguments -Wcast-qual: YES 00:12:37.602 Compiler for C supports arguments -Wdeprecated: YES 00:12:37.602 Compiler for C supports arguments -Wformat: YES 00:12:37.602 Compiler for C supports arguments -Wformat-nonliteral: YES 00:12:37.602 Compiler for C supports arguments -Wformat-security: YES 00:12:37.602 Compiler for C supports arguments -Wmissing-declarations: YES 00:12:37.602 Compiler for C supports arguments -Wmissing-prototypes: YES 00:12:37.602 Compiler for C supports arguments -Wnested-externs: YES 00:12:37.602 Compiler for C supports arguments -Wold-style-definition: YES 00:12:37.602 Compiler for C supports arguments -Wpointer-arith: YES 00:12:37.602 Compiler for C supports arguments -Wsign-compare: YES 00:12:37.602 Compiler for C supports arguments -Wstrict-prototypes: YES 00:12:37.602 Compiler for C supports arguments -Wundef: YES 00:12:37.602 Compiler for C supports arguments -Wwrite-strings: YES 00:12:37.602 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:12:37.602 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:12:37.602 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:12:37.602 Compiler for C supports arguments -mavx512f: YES 00:12:37.602 Checking if "AVX512 checking" compiles: YES 00:12:37.602 Fetching value of define "__SSE4_2__" : 1 00:12:37.602 Fetching value of define "__AES__" : 1 00:12:37.602 Fetching value of define "__AVX__" : 1 00:12:37.602 Fetching value of define "__AVX2__" : 1 00:12:37.602 Fetching value of define "__AVX512BW__" : 1 00:12:37.602 Fetching value of define "__AVX512CD__" : 1 00:12:37.602 Fetching value of define "__AVX512DQ__" : 1 00:12:37.602 Fetching value of define "__AVX512F__" : 1 00:12:37.602 Fetching value of define "__AVX512VL__" : 1 00:12:37.602 Fetching value of define "__PCLMUL__" : 1 00:12:37.602 Fetching value of define "__RDRND__" : 1 00:12:37.602 Fetching value of define "__RDSEED__" : 1 00:12:37.602 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:12:37.602 Fetching value of define "__znver1__" : (undefined) 00:12:37.602 Fetching value of define "__znver2__" : (undefined) 00:12:37.602 Fetching value of define "__znver3__" : (undefined) 00:12:37.602 Fetching value of define "__znver4__" : (undefined) 00:12:37.602 Compiler for C supports arguments -Wno-format-truncation: NO 00:12:37.602 Message: lib/log: Defining dependency "log" 00:12:37.602 Message: lib/kvargs: Defining dependency "kvargs" 00:12:37.602 Message: lib/telemetry: Defining dependency "telemetry" 00:12:37.602 Checking if "Detect argument count for CPU_OR" compiles: YES 00:12:37.602 Checking for function "getentropy" : YES 00:12:37.602 Message: lib/eal: Defining dependency "eal" 00:12:37.602 Message: lib/ring: Defining dependency "ring" 00:12:37.602 Message: lib/rcu: Defining dependency "rcu" 00:12:37.602 Message: lib/mempool: Defining dependency "mempool" 00:12:37.602 Message: lib/mbuf: Defining dependency "mbuf" 00:12:37.602 Fetching value of define "__PCLMUL__" : 1 (cached) 00:12:37.602 Fetching value of define "__AVX512F__" : 1 (cached) 00:12:37.602 Fetching value of define "__AVX512BW__" : 1 (cached) 00:12:37.602 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:12:37.602 Fetching 
value of define "__AVX512VL__" : 1 (cached) 00:12:37.602 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:12:37.602 Compiler for C supports arguments -mpclmul: YES 00:12:37.602 Compiler for C supports arguments -maes: YES 00:12:37.602 Compiler for C supports arguments -mavx512f: YES (cached) 00:12:37.602 Compiler for C supports arguments -mavx512bw: YES 00:12:37.602 Compiler for C supports arguments -mavx512dq: YES 00:12:37.602 Compiler for C supports arguments -mavx512vl: YES 00:12:37.602 Compiler for C supports arguments -mvpclmulqdq: YES 00:12:37.602 Compiler for C supports arguments -mavx2: YES 00:12:37.602 Compiler for C supports arguments -mavx: YES 00:12:37.602 Message: lib/net: Defining dependency "net" 00:12:37.602 Message: lib/meter: Defining dependency "meter" 00:12:37.602 Message: lib/ethdev: Defining dependency "ethdev" 00:12:37.602 Message: lib/pci: Defining dependency "pci" 00:12:37.602 Message: lib/cmdline: Defining dependency "cmdline" 00:12:37.602 Message: lib/hash: Defining dependency "hash" 00:12:37.602 Message: lib/timer: Defining dependency "timer" 00:12:37.602 Message: lib/compressdev: Defining dependency "compressdev" 00:12:37.602 Message: lib/cryptodev: Defining dependency "cryptodev" 00:12:37.602 Message: lib/dmadev: Defining dependency "dmadev" 00:12:37.602 Compiler for C supports arguments -Wno-cast-qual: YES 00:12:37.602 Message: lib/reorder: Defining dependency "reorder" 00:12:37.602 Message: lib/security: Defining dependency "security" 00:12:37.602 Has header "linux/userfaultfd.h" : NO 00:12:37.602 Has header "linux/vduse.h" : NO 00:12:37.602 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:12:37.602 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:12:37.602 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:12:37.602 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:12:37.602 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:12:37.602 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:12:37.602 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:12:37.602 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:12:37.602 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:12:37.602 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:12:37.602 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:12:37.602 Program doxygen found: YES (/usr/local/bin/doxygen) 00:12:37.602 Configuring doxy-api-html.conf using configuration 00:12:37.602 Configuring doxy-api-man.conf using configuration 00:12:37.602 Program mandb found: NO 00:12:37.602 Program sphinx-build found: NO 00:12:37.602 Configuring rte_build_config.h using configuration 00:12:37.602 Message: 00:12:37.602 ================= 00:12:37.602 Applications Enabled 00:12:37.602 ================= 00:12:37.602 00:12:37.602 apps: 00:12:37.602 00:12:37.602 00:12:37.602 Message: 00:12:37.602 ================= 00:12:37.602 Libraries Enabled 00:12:37.602 ================= 00:12:37.602 00:12:37.602 libs: 00:12:37.602 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:12:37.602 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:12:37.602 cryptodev, dmadev, reorder, security, 00:12:37.602 00:12:37.602 Message: 00:12:37.602 =============== 00:12:37.602 Drivers Enabled 00:12:37.602 =============== 00:12:37.602 00:12:37.602 common: 00:12:37.602 00:12:37.602 bus: 
00:12:37.602 pci, vdev, 00:12:37.602 mempool: 00:12:37.602 ring, 00:12:37.602 dma: 00:12:37.602 00:12:37.602 net: 00:12:37.602 00:12:37.602 crypto: 00:12:37.602 00:12:37.602 compress: 00:12:37.602 00:12:37.602 00:12:37.602 Message: 00:12:37.602 ================= 00:12:37.602 Content Skipped 00:12:37.602 ================= 00:12:37.602 00:12:37.602 apps: 00:12:37.602 dumpcap: explicitly disabled via build config 00:12:37.602 graph: explicitly disabled via build config 00:12:37.602 pdump: explicitly disabled via build config 00:12:37.602 proc-info: explicitly disabled via build config 00:12:37.602 test-acl: explicitly disabled via build config 00:12:37.602 test-bbdev: explicitly disabled via build config 00:12:37.602 test-cmdline: explicitly disabled via build config 00:12:37.602 test-compress-perf: explicitly disabled via build config 00:12:37.602 test-crypto-perf: explicitly disabled via build config 00:12:37.602 test-dma-perf: explicitly disabled via build config 00:12:37.602 test-eventdev: explicitly disabled via build config 00:12:37.602 test-fib: explicitly disabled via build config 00:12:37.602 test-flow-perf: explicitly disabled via build config 00:12:37.602 test-gpudev: explicitly disabled via build config 00:12:37.602 test-mldev: explicitly disabled via build config 00:12:37.602 test-pipeline: explicitly disabled via build config 00:12:37.602 test-pmd: explicitly disabled via build config 00:12:37.602 test-regex: explicitly disabled via build config 00:12:37.602 test-sad: explicitly disabled via build config 00:12:37.602 test-security-perf: explicitly disabled via build config 00:12:37.602 00:12:37.602 libs: 00:12:37.602 metrics: explicitly disabled via build config 00:12:37.602 acl: explicitly disabled via build config 00:12:37.602 bbdev: explicitly disabled via build config 00:12:37.602 bitratestats: explicitly disabled via build config 00:12:37.602 bpf: explicitly disabled via build config 00:12:37.602 cfgfile: explicitly disabled via build config 00:12:37.602 distributor: explicitly disabled via build config 00:12:37.602 efd: explicitly disabled via build config 00:12:37.602 eventdev: explicitly disabled via build config 00:12:37.602 dispatcher: explicitly disabled via build config 00:12:37.602 gpudev: explicitly disabled via build config 00:12:37.602 gro: explicitly disabled via build config 00:12:37.602 gso: explicitly disabled via build config 00:12:37.602 ip_frag: explicitly disabled via build config 00:12:37.602 jobstats: explicitly disabled via build config 00:12:37.602 latencystats: explicitly disabled via build config 00:12:37.602 lpm: explicitly disabled via build config 00:12:37.602 member: explicitly disabled via build config 00:12:37.602 pcapng: explicitly disabled via build config 00:12:37.602 power: only supported on Linux 00:12:37.602 rawdev: explicitly disabled via build config 00:12:37.602 regexdev: explicitly disabled via build config 00:12:37.602 mldev: explicitly disabled via build config 00:12:37.602 rib: explicitly disabled via build config 00:12:37.602 sched: explicitly disabled via build config 00:12:37.602 stack: explicitly disabled via build config 00:12:37.602 vhost: only supported on Linux 00:12:37.602 ipsec: explicitly disabled via build config 00:12:37.602 pdcp: explicitly disabled via build config 00:12:37.602 fib: explicitly disabled via build config 00:12:37.602 port: explicitly disabled via build config 00:12:37.602 pdump: explicitly disabled via build config 00:12:37.602 table: explicitly disabled via build config 00:12:37.602 pipeline: 
explicitly disabled via build config 00:12:37.602 graph: explicitly disabled via build config 00:12:37.602 node: explicitly disabled via build config 00:12:37.602 00:12:37.602 drivers: 00:12:37.602 common/cpt: not in enabled drivers build config 00:12:37.602 common/dpaax: not in enabled drivers build config 00:12:37.602 common/iavf: not in enabled drivers build config 00:12:37.602 common/idpf: not in enabled drivers build config 00:12:37.602 common/mvep: not in enabled drivers build config 00:12:37.602 common/octeontx: not in enabled drivers build config 00:12:37.602 bus/auxiliary: not in enabled drivers build config 00:12:37.602 bus/cdx: not in enabled drivers build config 00:12:37.602 bus/dpaa: not in enabled drivers build config 00:12:37.602 bus/fslmc: not in enabled drivers build config 00:12:37.602 bus/ifpga: not in enabled drivers build config 00:12:37.602 bus/platform: not in enabled drivers build config 00:12:37.602 bus/vmbus: not in enabled drivers build config 00:12:37.602 common/cnxk: not in enabled drivers build config 00:12:37.602 common/mlx5: not in enabled drivers build config 00:12:37.602 common/nfp: not in enabled drivers build config 00:12:37.602 common/qat: not in enabled drivers build config 00:12:37.602 common/sfc_efx: not in enabled drivers build config 00:12:37.602 mempool/bucket: not in enabled drivers build config 00:12:37.602 mempool/cnxk: not in enabled drivers build config 00:12:37.602 mempool/dpaa: not in enabled drivers build config 00:12:37.602 mempool/dpaa2: not in enabled drivers build config 00:12:37.602 mempool/octeontx: not in enabled drivers build config 00:12:37.602 mempool/stack: not in enabled drivers build config 00:12:37.602 dma/cnxk: not in enabled drivers build config 00:12:37.602 dma/dpaa: not in enabled drivers build config 00:12:37.602 dma/dpaa2: not in enabled drivers build config 00:12:37.602 dma/hisilicon: not in enabled drivers build config 00:12:37.602 dma/idxd: not in enabled drivers build config 00:12:37.602 dma/ioat: not in enabled drivers build config 00:12:37.602 dma/skeleton: not in enabled drivers build config 00:12:37.602 net/af_packet: not in enabled drivers build config 00:12:37.602 net/af_xdp: not in enabled drivers build config 00:12:37.602 net/ark: not in enabled drivers build config 00:12:37.602 net/atlantic: not in enabled drivers build config 00:12:37.602 net/avp: not in enabled drivers build config 00:12:37.602 net/axgbe: not in enabled drivers build config 00:12:37.602 net/bnx2x: not in enabled drivers build config 00:12:37.602 net/bnxt: not in enabled drivers build config 00:12:37.602 net/bonding: not in enabled drivers build config 00:12:37.602 net/cnxk: not in enabled drivers build config 00:12:37.602 net/cpfl: not in enabled drivers build config 00:12:37.602 net/cxgbe: not in enabled drivers build config 00:12:37.602 net/dpaa: not in enabled drivers build config 00:12:37.602 net/dpaa2: not in enabled drivers build config 00:12:37.602 net/e1000: not in enabled drivers build config 00:12:37.602 net/ena: not in enabled drivers build config 00:12:37.602 net/enetc: not in enabled drivers build config 00:12:37.602 net/enetfec: not in enabled drivers build config 00:12:37.602 net/enic: not in enabled drivers build config 00:12:37.602 net/failsafe: not in enabled drivers build config 00:12:37.602 net/fm10k: not in enabled drivers build config 00:12:37.602 net/gve: not in enabled drivers build config 00:12:37.602 net/hinic: not in enabled drivers build config 00:12:37.602 net/hns3: not in enabled drivers build config 
00:12:37.602 net/i40e: not in enabled drivers build config 00:12:37.603 net/iavf: not in enabled drivers build config 00:12:37.603 net/ice: not in enabled drivers build config 00:12:37.603 net/idpf: not in enabled drivers build config 00:12:37.603 net/igc: not in enabled drivers build config 00:12:37.603 net/ionic: not in enabled drivers build config 00:12:37.603 net/ipn3ke: not in enabled drivers build config 00:12:37.603 net/ixgbe: not in enabled drivers build config 00:12:37.603 net/mana: not in enabled drivers build config 00:12:37.603 net/memif: not in enabled drivers build config 00:12:37.603 net/mlx4: not in enabled drivers build config 00:12:37.603 net/mlx5: not in enabled drivers build config 00:12:37.603 net/mvneta: not in enabled drivers build config 00:12:37.603 net/mvpp2: not in enabled drivers build config 00:12:37.603 net/netvsc: not in enabled drivers build config 00:12:37.603 net/nfb: not in enabled drivers build config 00:12:37.603 net/nfp: not in enabled drivers build config 00:12:37.603 net/ngbe: not in enabled drivers build config 00:12:37.603 net/null: not in enabled drivers build config 00:12:37.603 net/octeontx: not in enabled drivers build config 00:12:37.603 net/octeon_ep: not in enabled drivers build config 00:12:37.603 net/pcap: not in enabled drivers build config 00:12:37.603 net/pfe: not in enabled drivers build config 00:12:37.603 net/qede: not in enabled drivers build config 00:12:37.603 net/ring: not in enabled drivers build config 00:12:37.603 net/sfc: not in enabled drivers build config 00:12:37.603 net/softnic: not in enabled drivers build config 00:12:37.603 net/tap: not in enabled drivers build config 00:12:37.603 net/thunderx: not in enabled drivers build config 00:12:37.603 net/txgbe: not in enabled drivers build config 00:12:37.603 net/vdev_netvsc: not in enabled drivers build config 00:12:37.603 net/vhost: not in enabled drivers build config 00:12:37.603 net/virtio: not in enabled drivers build config 00:12:37.603 net/vmxnet3: not in enabled drivers build config 00:12:37.603 raw/*: missing internal dependency, "rawdev" 00:12:37.603 crypto/armv8: not in enabled drivers build config 00:12:37.603 crypto/bcmfs: not in enabled drivers build config 00:12:37.603 crypto/caam_jr: not in enabled drivers build config 00:12:37.603 crypto/ccp: not in enabled drivers build config 00:12:37.603 crypto/cnxk: not in enabled drivers build config 00:12:37.603 crypto/dpaa_sec: not in enabled drivers build config 00:12:37.603 crypto/dpaa2_sec: not in enabled drivers build config 00:12:37.603 crypto/ipsec_mb: not in enabled drivers build config 00:12:37.603 crypto/mlx5: not in enabled drivers build config 00:12:37.603 crypto/mvsam: not in enabled drivers build config 00:12:37.603 crypto/nitrox: not in enabled drivers build config 00:12:37.603 crypto/null: not in enabled drivers build config 00:12:37.603 crypto/octeontx: not in enabled drivers build config 00:12:37.603 crypto/openssl: not in enabled drivers build config 00:12:37.603 crypto/scheduler: not in enabled drivers build config 00:12:37.603 crypto/uadk: not in enabled drivers build config 00:12:37.603 crypto/virtio: not in enabled drivers build config 00:12:37.603 compress/isal: not in enabled drivers build config 00:12:37.603 compress/mlx5: not in enabled drivers build config 00:12:37.603 compress/octeontx: not in enabled drivers build config 00:12:37.603 compress/zlib: not in enabled drivers build config 00:12:37.603 regex/*: missing internal dependency, "regexdev" 00:12:37.603 ml/*: missing internal dependency, 
"mldev" 00:12:37.603 vdpa/*: missing internal dependency, "vhost" 00:12:37.603 event/*: missing internal dependency, "eventdev" 00:12:37.603 baseband/*: missing internal dependency, "bbdev" 00:12:37.603 gpu/*: missing internal dependency, "gpudev" 00:12:37.603 00:12:37.603 00:12:37.863 Build targets in project: 81 00:12:37.863 00:12:37.863 DPDK 23.11.0 00:12:37.863 00:12:37.863 User defined options 00:12:37.863 default_library : static 00:12:37.863 libdir : lib 00:12:37.863 prefix : / 00:12:37.863 c_args : -fPIC -Werror 00:12:37.863 c_link_args : 00:12:37.863 cpu_instruction_set: native 00:12:37.863 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:12:37.863 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:12:37.863 enable_docs : false 00:12:37.863 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:12:37.863 enable_kmods : true 00:12:37.863 tests : false 00:12:37.863 00:12:37.863 Found ninja-1.11.1 at /usr/local/bin/ninja 00:12:38.123 ninja: Entering directory `/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:12:38.123 [1/231] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:12:38.123 [2/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:12:38.123 [3/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:12:38.123 [4/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:12:38.123 [5/231] Compiling C object lib/librte_log.a.p/log_log.c.o 00:12:38.381 [6/231] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:12:38.381 [7/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:12:38.381 [8/231] Linking static target lib/librte_log.a 00:12:38.381 [9/231] Linking static target lib/librte_kvargs.a 00:12:38.381 [10/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:12:38.381 [11/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:12:38.381 [12/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:12:38.640 [13/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:12:38.640 [14/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:12:38.640 [15/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:12:38.640 [16/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:12:38.640 [17/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:12:38.640 [18/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:12:38.640 [19/231] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:12:38.640 [20/231] Linking static target lib/librte_telemetry.a 00:12:38.640 [21/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:12:38.640 [22/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:12:38.899 [23/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:12:38.899 [24/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:12:38.899 
[25/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:12:38.899 [26/231] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:12:38.899 [27/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:12:38.899 [28/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:12:38.899 [29/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:12:38.899 [30/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:12:38.899 [31/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:12:38.899 [32/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:12:38.899 [33/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:12:38.899 [34/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:12:38.899 [35/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:12:39.159 [36/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:12:39.159 [37/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:12:39.159 [38/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:12:39.159 [39/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:12:39.159 [40/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:12:39.159 [41/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:12:39.473 [42/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:12:39.473 [43/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:12:39.473 [44/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:12:39.473 [45/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:12:39.473 [46/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:12:39.473 [47/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:12:39.473 [48/231] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:12:39.473 [49/231] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:12:39.473 [50/231] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:12:39.473 [51/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:12:39.473 [52/231] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:12:39.473 [53/231] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:12:39.473 [54/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:12:39.473 [55/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:12:39.473 [56/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:12:39.473 [57/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:12:39.473 [58/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:12:39.473 [59/231] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:12:39.733 [60/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:12:39.733 [61/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:12:39.733 [62/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:12:39.733 [63/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:12:39.733 [64/231] Compiling C object 
lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:12:39.733 [65/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:12:39.733 [66/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:12:39.733 [67/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:12:39.733 [68/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:12:39.733 [69/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:12:39.733 [70/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:12:39.733 [71/231] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:12:39.733 [72/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:12:39.993 [73/231] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:12:39.993 [74/231] Linking static target lib/librte_eal.a 00:12:39.993 [75/231] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:12:39.993 [76/231] Linking static target lib/librte_ring.a 00:12:39.993 [77/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:12:39.993 [78/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:12:39.993 [79/231] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:12:39.993 [80/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:12:39.993 [81/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:12:40.253 [82/231] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.253 [83/231] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.253 [84/231] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:12:40.253 [85/231] Linking static target lib/librte_mempool.a 00:12:40.253 [86/231] Linking target lib/librte_log.so.24.0 00:12:40.253 [87/231] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:12:40.253 [88/231] Linking target lib/librte_kvargs.so.24.0 00:12:40.253 [89/231] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:12:40.253 [90/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:12:40.513 [91/231] Linking static target lib/net/libnet_crc_avx512_lib.a 00:12:40.513 [92/231] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:12:40.513 [93/231] Linking target lib/librte_telemetry.so.24.0 00:12:40.513 [94/231] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.513 [95/231] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:12:40.513 [96/231] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:12:40.513 [97/231] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:12:40.513 [98/231] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:12:40.513 [99/231] Linking static target lib/librte_mbuf.a 00:12:40.513 [100/231] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:12:40.513 [101/231] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:12:40.513 [102/231] Linking static target lib/librte_rcu.a 00:12:40.513 [103/231] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:12:40.513 [104/231] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:12:40.513 [105/231] Linking static target lib/librte_net.a 00:12:40.772 [106/231] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 
00:12:40.772 [107/231] Linking static target lib/librte_meter.a 00:12:40.772 [108/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:12:40.772 [109/231] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.772 [110/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:12:40.773 [111/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:12:40.773 [112/231] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.773 [113/231] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:12:40.773 [114/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:12:41.032 [115/231] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:12:41.292 [116/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:12:41.292 [117/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:12:41.292 [118/231] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:12:41.292 [119/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:12:41.292 [120/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:12:41.292 [121/231] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:12:41.292 [122/231] Linking static target lib/librte_pci.a 00:12:41.292 [123/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:12:41.292 [124/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:12:41.292 [125/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:12:41.292 [126/231] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:12:41.292 [127/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:12:41.292 [128/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:12:41.292 [129/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:12:41.552 [130/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:12:41.552 [131/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:12:41.552 [132/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:12:41.552 [133/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:12:41.552 [134/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:12:41.552 [135/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:12:41.552 [136/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:12:41.552 [137/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:12:41.552 [138/231] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:12:41.552 [139/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:12:41.552 [140/231] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:12:41.552 [141/231] Linking static target lib/librte_cmdline.a 00:12:41.811 [142/231] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:12:41.811 [143/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:12:41.811 [144/231] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:12:41.811 [145/231] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:12:41.811 [146/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:12:42.071 [147/231] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:12:42.071 [148/231] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:12:42.071 [149/231] Linking static target lib/librte_timer.a 00:12:42.071 [150/231] Linking static target lib/librte_compressdev.a 00:12:42.071 [151/231] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:12:42.071 [152/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:12:42.071 [153/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:12:42.330 [154/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:12:42.330 [155/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:12:42.330 [156/231] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:12:42.330 [157/231] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:12:42.330 [158/231] Linking static target lib/librte_dmadev.a 00:12:42.330 [159/231] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:12:42.330 [160/231] Linking static target lib/librte_reorder.a 00:12:42.590 [161/231] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:12:42.590 [162/231] Linking static target lib/librte_security.a 00:12:42.590 [163/231] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:12:42.590 [164/231] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:42.590 [165/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:12:42.590 [166/231] Linking static target lib/librte_hash.a 00:12:42.590 [167/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:12:42.590 [168/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:12:42.590 [169/231] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:12:42.590 [170/231] Linking static target lib/librte_ethdev.a 00:12:42.590 [171/231] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:12:42.590 [172/231] Linking static target drivers/libtmp_rte_bus_pci.a 00:12:42.590 [173/231] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:12:42.590 [174/231] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:42.590 [175/231] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:12:42.849 [176/231] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:12:42.849 [177/231] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:12:42.849 [178/231] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:42.849 [179/231] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:12:42.849 [180/231] Linking static target drivers/librte_bus_pci.a 00:12:42.849 [181/231] Generating kernel/freebsd/contigmem with a custom command 00:12:42.849 machine -> /usr/src/sys/amd64/include 00:12:42.849 x86 -> /usr/src/sys/x86/include 00:12:42.849 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:12:42.849 awk -f /usr/src/sys/tools/makeobjops.awk 
/usr/src/sys/kern/bus_if.m -h 00:12:42.849 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:12:42.849 touch opt_global.h 00:12:42.849 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:12:42.849 ld.lld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:12:42.849 :> export_syms 00:12:42.849 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:12:42.849 objcopy --strip-debug contigmem.ko 00:12:42.849 [182/231] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:12:42.849 [183/231] Linking static target drivers/libtmp_rte_bus_vdev.a 00:12:42.849 [184/231] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:12:42.849 [185/231] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:12:42.849 [186/231] Linking static target lib/librte_cryptodev.a 00:12:43.108 [187/231] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:12:43.108 [188/231] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:43.108 [189/231] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:12:43.108 [190/231] Linking static target drivers/librte_bus_vdev.a 00:12:43.108 [191/231] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:12:43.108 [192/231] Generating kernel/freebsd/nic_uio with a custom command 00:12:43.109 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/usr/home/vagrant/spdk_repo/spdk/dpdk/config -include /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wredundant-decls -Wnested-externs -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-error=unused-but-set-variable -Wno-format-zero-length -mno-aes -mno-avx -std=iso9899:1999 -c /usr/home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:12:43.109 ld.lld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:12:43.109 :> export_syms 00:12:43.109 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:12:43.109 objcopy --strip-debug nic_uio.ko 00:12:43.109 [193/231] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:43.368 [194/231] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:12:43.368 [195/231] Linking static target drivers/libtmp_rte_mempool_ring.a 00:12:43.368 [196/231] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:12:43.368 [197/231] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:43.368 [198/231] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:12:43.368 [199/231] Linking static target drivers/librte_mempool_ring.a 00:12:44.753 [200/231] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:51.312 [201/231] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:12:53.212 [202/231] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:12:53.212 [203/231] Linking target lib/librte_eal.so.24.0 00:12:53.212 [204/231] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:12:53.212 [205/231] Linking target lib/librte_meter.so.24.0 00:12:53.212 [206/231] Linking target lib/librte_ring.so.24.0 00:12:53.212 [207/231] Linking target lib/librte_pci.so.24.0 00:12:53.212 [208/231] Linking target lib/librte_dmadev.so.24.0 00:12:53.212 [209/231] Linking target lib/librte_timer.so.24.0 00:12:53.212 [210/231] Linking target drivers/librte_bus_vdev.so.24.0 00:12:53.212 [211/231] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:12:53.212 [212/231] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:12:53.470 [213/231] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:12:53.470 [214/231] Linking target lib/librte_mempool.so.24.0 00:12:53.470 [215/231] Linking target lib/librte_rcu.so.24.0 00:12:53.470 [216/231] Linking target drivers/librte_bus_pci.so.24.0 00:12:53.470 [217/231] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:12:53.470 [218/231] 
Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:12:53.470 [219/231] Linking target drivers/librte_mempool_ring.so.24.0 00:12:53.470 [220/231] Linking target lib/librte_mbuf.so.24.0 00:12:53.729 [221/231] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:12:53.729 [222/231] Linking target lib/librte_net.so.24.0 00:12:53.729 [223/231] Linking target lib/librte_reorder.so.24.0 00:12:53.729 [224/231] Linking target lib/librte_compressdev.so.24.0 00:12:53.729 [225/231] Linking target lib/librte_cryptodev.so.24.0 00:12:53.729 [226/231] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:12:53.729 [227/231] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:12:53.988 [228/231] Linking target lib/librte_cmdline.so.24.0 00:12:53.988 [229/231] Linking target lib/librte_hash.so.24.0 00:12:53.988 [230/231] Linking target lib/librte_security.so.24.0 00:12:53.988 [231/231] Linking target lib/librte_ethdev.so.24.0 00:12:53.988 INFO: autodetecting backend as ninja 00:12:53.988 INFO: calculating backend command to run: /usr/local/bin/ninja -C /usr/home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:12:54.556 CC lib/ut/ut.o 00:12:54.556 CC lib/ut_mock/mock.o 00:12:54.556 CC lib/log/log.o 00:12:54.556 CC lib/log/log_flags.o 00:12:54.556 CC lib/log/log_deprecated.o 00:12:54.556 LIB libspdk_ut_mock.a 00:12:54.556 LIB libspdk_log.a 00:12:54.556 LIB libspdk_ut.a 00:12:54.814 CC lib/dma/dma.o 00:12:54.814 CXX lib/trace_parser/trace.o 00:12:54.814 CC lib/util/bit_array.o 00:12:54.814 CC lib/util/base64.o 00:12:54.814 CC lib/util/cpuset.o 00:12:54.814 CC lib/util/crc16.o 00:12:54.814 CC lib/util/crc32c.o 00:12:54.814 CC lib/util/crc32.o 00:12:54.814 CC lib/util/crc32_ieee.o 00:12:54.814 CC lib/ioat/ioat.o 00:12:54.814 CC lib/util/crc64.o 00:12:54.814 CC lib/util/dif.o 00:12:54.814 LIB libspdk_dma.a 00:12:54.814 CC lib/util/fd.o 00:12:54.814 CC lib/util/file.o 00:12:54.814 CC lib/util/hexlify.o 00:12:54.814 CC lib/util/iov.o 00:12:55.099 CC lib/util/math.o 00:12:55.099 CC lib/util/pipe.o 00:12:55.099 CC lib/util/strerror_tls.o 00:12:55.099 CC lib/util/string.o 00:12:55.099 CC lib/util/uuid.o 00:12:55.099 CC lib/util/fd_group.o 00:12:55.099 LIB libspdk_ioat.a 00:12:55.099 CC lib/util/xor.o 00:12:55.099 CC lib/util/zipf.o 00:12:55.357 LIB libspdk_util.a 00:12:55.357 LIB libspdk_trace_parser.a 00:12:55.614 CC lib/conf/conf.o 00:12:55.614 CC lib/env_dpdk/env.o 00:12:55.614 CC lib/env_dpdk/pci.o 00:12:55.614 CC lib/env_dpdk/memory.o 00:12:55.614 CC lib/env_dpdk/init.o 00:12:55.614 CC lib/env_dpdk/threads.o 00:12:55.614 CC lib/idxd/idxd.o 00:12:55.614 CC lib/rdma/common.o 00:12:55.614 CC lib/vmd/vmd.o 00:12:55.614 CC lib/json/json_parse.o 00:12:55.614 CC lib/vmd/led.o 00:12:55.614 CC lib/rdma/rdma_verbs.o 00:12:55.614 LIB libspdk_conf.a 00:12:55.614 CC lib/idxd/idxd_user.o 00:12:55.614 CC lib/json/json_util.o 00:12:55.614 CC lib/json/json_write.o 00:12:55.614 CC lib/env_dpdk/pci_ioat.o 00:12:55.614 CC lib/env_dpdk/pci_virtio.o 00:12:55.614 LIB libspdk_rdma.a 00:12:55.870 CC lib/env_dpdk/pci_vmd.o 00:12:55.870 CC lib/env_dpdk/pci_idxd.o 00:12:55.870 CC lib/env_dpdk/pci_event.o 00:12:55.870 CC lib/env_dpdk/sigbus_handler.o 00:12:55.870 LIB libspdk_idxd.a 00:12:55.870 CC lib/env_dpdk/pci_dpdk.o 00:12:55.870 CC lib/env_dpdk/pci_dpdk_2207.o 00:12:55.870 CC lib/env_dpdk/pci_dpdk_2211.o 00:12:55.870 LIB libspdk_vmd.a 00:12:55.870 LIB libspdk_json.a 00:12:56.128 CC lib/jsonrpc/jsonrpc_server.o 00:12:56.128 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:12:56.128 CC lib/jsonrpc/jsonrpc_client.o 00:12:56.128 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:12:56.128 LIB libspdk_jsonrpc.a 00:12:56.385 LIB libspdk_env_dpdk.a 00:12:56.385 CC lib/rpc/rpc.o 00:12:56.385 LIB libspdk_rpc.a 00:12:56.643 CC lib/trace/trace.o 00:12:56.643 CC lib/trace/trace_flags.o 00:12:56.643 CC lib/trace/trace_rpc.o 00:12:56.643 CC lib/notify/notify.o 00:12:56.643 CC lib/notify/notify_rpc.o 00:12:56.643 CC lib/sock/sock.o 00:12:56.643 CC lib/sock/sock_rpc.o 00:12:56.643 LIB libspdk_notify.a 00:12:56.902 LIB libspdk_trace.a 00:12:56.902 LIB libspdk_sock.a 00:12:56.902 CC lib/thread/iobuf.o 00:12:56.902 CC lib/thread/thread.o 00:12:57.160 CC lib/nvme/nvme_ctrlr_cmd.o 00:12:57.160 CC lib/nvme/nvme_fabric.o 00:12:57.160 CC lib/nvme/nvme_ctrlr.o 00:12:57.160 CC lib/nvme/nvme_ns_cmd.o 00:12:57.160 CC lib/nvme/nvme_ns.o 00:12:57.160 CC lib/nvme/nvme_pcie_common.o 00:12:57.160 CC lib/nvme/nvme_qpair.o 00:12:57.160 CC lib/nvme/nvme_pcie.o 00:12:57.160 CC lib/nvme/nvme.o 00:12:57.418 CC lib/nvme/nvme_quirks.o 00:12:57.418 CC lib/nvme/nvme_transport.o 00:12:57.418 CC lib/nvme/nvme_discovery.o 00:12:57.418 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:12:57.418 LIB libspdk_thread.a 00:12:57.418 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:12:57.677 CC lib/nvme/nvme_tcp.o 00:12:57.677 CC lib/accel/accel.o 00:12:57.677 CC lib/blob/blobstore.o 00:12:57.677 CC lib/init/json_config.o 00:12:57.677 CC lib/init/subsystem.o 00:12:57.677 CC lib/accel/accel_rpc.o 00:12:57.677 CC lib/init/subsystem_rpc.o 00:12:57.677 CC lib/blob/request.o 00:12:57.677 CC lib/accel/accel_sw.o 00:12:57.677 CC lib/init/rpc.o 00:12:57.677 CC lib/blob/zeroes.o 00:12:57.677 CC lib/nvme/nvme_opal.o 00:12:57.678 CC lib/blob/blob_bs_dev.o 00:12:57.936 CC lib/nvme/nvme_io_msg.o 00:12:57.936 LIB libspdk_init.a 00:12:57.936 CC lib/nvme/nvme_poll_group.o 00:12:57.936 CC lib/nvme/nvme_zns.o 00:12:57.936 CC lib/nvme/nvme_cuse.o 00:12:57.936 CC lib/nvme/nvme_rdma.o 00:12:57.936 LIB libspdk_accel.a 00:12:57.936 CC lib/event/app.o 00:12:57.936 CC lib/event/reactor.o 00:12:57.936 CC lib/bdev/bdev.o 00:12:58.194 CC lib/bdev/bdev_rpc.o 00:12:58.194 CC lib/event/log_rpc.o 00:12:58.194 CC lib/event/app_rpc.o 00:12:58.194 CC lib/event/scheduler_static.o 00:12:58.194 CC lib/bdev/bdev_zone.o 00:12:58.194 CC lib/bdev/part.o 00:12:58.194 CC lib/bdev/scsi_nvme.o 00:12:58.194 LIB libspdk_event.a 00:12:58.452 LIB libspdk_nvme.a 00:12:58.452 LIB libspdk_blob.a 00:12:58.711 CC lib/lvol/lvol.o 00:12:58.711 CC lib/blobfs/blobfs.o 00:12:58.711 CC lib/blobfs/tree.o 00:12:58.969 LIB libspdk_bdev.a 00:12:58.969 LIB libspdk_lvol.a 00:12:58.970 LIB libspdk_blobfs.a 00:12:59.229 CC lib/scsi/dev.o 00:12:59.229 CC lib/scsi/lun.o 00:12:59.229 CC lib/scsi/scsi_bdev.o 00:12:59.229 CC lib/scsi/port.o 00:12:59.229 CC lib/scsi/scsi.o 00:12:59.229 CC lib/scsi/task.o 00:12:59.229 CC lib/scsi/scsi_pr.o 00:12:59.229 CC lib/scsi/scsi_rpc.o 00:12:59.229 CC lib/nvmf/ctrlr.o 00:12:59.229 CC lib/nvmf/ctrlr_discovery.o 00:12:59.229 CC lib/nvmf/ctrlr_bdev.o 00:12:59.229 CC lib/nvmf/subsystem.o 00:12:59.229 CC lib/nvmf/nvmf.o 00:12:59.229 CC lib/nvmf/nvmf_rpc.o 00:12:59.229 CC lib/nvmf/transport.o 00:12:59.229 CC lib/nvmf/tcp.o 00:12:59.229 CC lib/nvmf/rdma.o 00:12:59.229 LIB libspdk_scsi.a 00:12:59.488 CC lib/iscsi/conn.o 00:12:59.488 CC lib/iscsi/init_grp.o 00:12:59.488 CC lib/iscsi/iscsi.o 00:12:59.488 CC lib/iscsi/param.o 00:12:59.488 CC lib/iscsi/md5.o 00:12:59.488 CC lib/iscsi/portal_grp.o 00:12:59.488 CC lib/iscsi/tgt_node.o 00:12:59.488 CC 
lib/iscsi/iscsi_subsystem.o 00:12:59.488 CC lib/iscsi/iscsi_rpc.o 00:12:59.488 CC lib/iscsi/task.o 00:13:00.053 LIB libspdk_nvmf.a 00:13:00.053 LIB libspdk_iscsi.a 00:13:00.310 CC module/env_dpdk/env_dpdk_rpc.o 00:13:00.310 CC module/sock/posix/posix.o 00:13:00.310 CC module/accel/error/accel_error.o 00:13:00.310 CC module/accel/error/accel_error_rpc.o 00:13:00.310 CC module/scheduler/dynamic/scheduler_dynamic.o 00:13:00.310 CC module/blob/bdev/blob_bdev.o 00:13:00.310 CC module/accel/ioat/accel_ioat.o 00:13:00.310 CC module/accel/ioat/accel_ioat_rpc.o 00:13:00.310 CC module/accel/iaa/accel_iaa.o 00:13:00.310 CC module/accel/dsa/accel_dsa.o 00:13:00.310 LIB libspdk_env_dpdk_rpc.a 00:13:00.310 CC module/accel/iaa/accel_iaa_rpc.o 00:13:00.310 CC module/accel/dsa/accel_dsa_rpc.o 00:13:00.568 LIB libspdk_accel_error.a 00:13:00.568 LIB libspdk_accel_ioat.a 00:13:00.568 LIB libspdk_blob_bdev.a 00:13:00.568 LIB libspdk_scheduler_dynamic.a 00:13:00.568 LIB libspdk_accel_iaa.a 00:13:00.568 LIB libspdk_accel_dsa.a 00:13:00.568 CC module/bdev/nvme/bdev_nvme.o 00:13:00.568 CC module/bdev/malloc/bdev_malloc.o 00:13:00.568 CC module/bdev/lvol/vbdev_lvol.o 00:13:00.568 CC module/bdev/delay/vbdev_delay.o 00:13:00.568 CC module/bdev/null/bdev_null.o 00:13:00.568 CC module/bdev/passthru/vbdev_passthru.o 00:13:00.568 CC module/bdev/error/vbdev_error.o 00:13:00.568 CC module/blobfs/bdev/blobfs_bdev.o 00:13:00.568 CC module/bdev/gpt/gpt.o 00:13:00.568 LIB libspdk_sock_posix.a 00:13:00.568 CC module/bdev/error/vbdev_error_rpc.o 00:13:00.568 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:13:00.568 CC module/bdev/gpt/vbdev_gpt.o 00:13:00.568 CC module/bdev/null/bdev_null_rpc.o 00:13:00.826 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:13:00.826 CC module/bdev/malloc/bdev_malloc_rpc.o 00:13:00.826 CC module/bdev/delay/vbdev_delay_rpc.o 00:13:00.826 LIB libspdk_bdev_error.a 00:13:00.826 LIB libspdk_blobfs_bdev.a 00:13:00.826 CC module/bdev/raid/bdev_raid.o 00:13:00.826 CC module/bdev/nvme/bdev_nvme_rpc.o 00:13:00.826 LIB libspdk_bdev_null.a 00:13:00.826 CC module/bdev/raid/bdev_raid_rpc.o 00:13:00.826 LIB libspdk_bdev_passthru.a 00:13:00.826 CC module/bdev/nvme/nvme_rpc.o 00:13:00.826 LIB libspdk_bdev_malloc.a 00:13:00.826 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:13:00.826 CC module/bdev/nvme/bdev_mdns_client.o 00:13:00.826 LIB libspdk_bdev_gpt.a 00:13:00.826 LIB libspdk_bdev_delay.a 00:13:00.826 CC module/bdev/raid/bdev_raid_sb.o 00:13:00.826 CC module/bdev/raid/raid0.o 00:13:00.826 CC module/bdev/split/vbdev_split.o 00:13:00.826 CC module/bdev/raid/raid1.o 00:13:00.826 CC module/bdev/split/vbdev_split_rpc.o 00:13:00.826 CC module/bdev/zone_block/vbdev_zone_block.o 00:13:00.826 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:13:00.826 CC module/bdev/aio/bdev_aio.o 00:13:00.826 LIB libspdk_bdev_lvol.a 00:13:00.826 CC module/bdev/aio/bdev_aio_rpc.o 00:13:00.826 CC module/bdev/raid/concat.o 00:13:01.087 LIB libspdk_bdev_split.a 00:13:01.087 LIB libspdk_bdev_aio.a 00:13:01.087 LIB libspdk_bdev_zone_block.a 00:13:01.087 LIB libspdk_bdev_raid.a 00:13:01.346 LIB libspdk_bdev_nvme.a 00:13:01.913 CC module/event/subsystems/vmd/vmd.o 00:13:01.913 CC module/event/subsystems/vmd/vmd_rpc.o 00:13:01.913 CC module/event/subsystems/sock/sock.o 00:13:01.913 CC module/event/subsystems/iobuf/iobuf.o 00:13:01.913 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:13:01.913 CC module/event/subsystems/scheduler/scheduler.o 00:13:01.913 LIB libspdk_event_vmd.a 00:13:01.913 LIB libspdk_event_sock.a 00:13:01.913 LIB 
libspdk_event_scheduler.a 00:13:01.913 LIB libspdk_event_iobuf.a 00:13:02.173 CC module/event/subsystems/accel/accel.o 00:13:02.173 LIB libspdk_event_accel.a 00:13:02.432 CC module/event/subsystems/bdev/bdev.o 00:13:02.432 LIB libspdk_event_bdev.a 00:13:02.432 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:13:02.432 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:13:02.691 CC module/event/subsystems/scsi/scsi.o 00:13:02.691 LIB libspdk_event_scsi.a 00:13:02.691 LIB libspdk_event_nvmf.a 00:13:02.951 CC module/event/subsystems/iscsi/iscsi.o 00:13:02.951 LIB libspdk_event_iscsi.a 00:13:02.951 CXX app/trace/trace.o 00:13:03.211 CC examples/ioat/perf/perf.o 00:13:03.211 CC examples/nvme/hello_world/hello_world.o 00:13:03.211 CC examples/accel/perf/accel_perf.o 00:13:03.211 CC test/app/bdev_svc/bdev_svc.o 00:13:03.211 CC test/blobfs/mkfs/mkfs.o 00:13:03.211 CC test/bdev/bdevio/bdevio.o 00:13:03.211 CC test/accel/dif/dif.o 00:13:03.211 CC examples/blob/hello_world/hello_blob.o 00:13:03.211 CC examples/bdev/hello_world/hello_bdev.o 00:13:03.211 LINK bdev_svc 00:13:03.211 LINK ioat_perf 00:13:03.211 LINK hello_world 00:13:03.211 LINK mkfs 00:13:03.211 LINK hello_bdev 00:13:03.211 LINK hello_blob 00:13:03.211 LINK dif 00:13:03.211 LINK bdevio 00:13:03.211 LINK accel_perf 00:13:03.470 LINK spdk_trace 00:13:04.039 CC app/trace_record/trace_record.o 00:13:04.039 CC examples/ioat/verify/verify.o 00:13:04.039 LINK verify 00:13:04.039 LINK spdk_trace_record 00:13:04.299 CC app/nvmf_tgt/nvmf_main.o 00:13:04.299 LINK nvmf_tgt 00:13:04.868 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:13:04.869 LINK nvme_fuzz 00:13:06.773 CC examples/nvme/reconnect/reconnect.o 00:13:07.033 LINK reconnect 00:13:07.602 TEST_HEADER include/spdk/config.h 00:13:07.602 CXX test/cpp_headers/accel.o 00:13:07.862 CXX test/cpp_headers/accel_module.o 00:13:07.862 CXX test/cpp_headers/assert.o 00:13:07.862 CXX test/cpp_headers/barrier.o 00:13:08.120 CXX test/cpp_headers/base64.o 00:13:08.120 CXX test/cpp_headers/bdev.o 00:13:08.120 CXX test/cpp_headers/bdev_module.o 00:13:08.378 CXX test/cpp_headers/bdev_zone.o 00:13:08.378 CXX test/cpp_headers/bit_array.o 00:13:08.637 CXX test/cpp_headers/bit_pool.o 00:13:08.637 CXX test/cpp_headers/blob.o 00:13:08.637 CXX test/cpp_headers/blob_bdev.o 00:13:08.896 CXX test/cpp_headers/blobfs.o 00:13:08.896 CXX test/cpp_headers/blobfs_bdev.o 00:13:09.156 CXX test/cpp_headers/conf.o 00:13:09.156 CXX test/cpp_headers/config.o 00:13:09.156 CXX test/cpp_headers/cpuset.o 00:13:09.156 CXX test/cpp_headers/crc16.o 00:13:09.415 CXX test/cpp_headers/crc32.o 00:13:09.416 CXX test/cpp_headers/crc64.o 00:13:09.713 CXX test/cpp_headers/dif.o 00:13:09.713 CXX test/cpp_headers/dma.o 00:13:09.713 CXX test/cpp_headers/endian.o 00:13:09.997 CXX test/cpp_headers/env.o 00:13:09.997 CXX test/cpp_headers/env_dpdk.o 00:13:09.997 CXX test/cpp_headers/event.o 00:13:10.257 CXX test/cpp_headers/fd.o 00:13:10.257 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:13:10.257 CXX test/cpp_headers/fd_group.o 00:13:10.257 CXX test/cpp_headers/file.o 00:13:10.515 CXX test/cpp_headers/ftl.o 00:13:10.515 CXX test/cpp_headers/gpt_spec.o 00:13:10.775 CXX test/cpp_headers/hexlify.o 00:13:10.775 CXX test/cpp_headers/histogram_data.o 00:13:10.775 CXX test/cpp_headers/idxd.o 00:13:10.775 CC examples/nvme/nvme_manage/nvme_manage.o 00:13:11.035 CXX test/cpp_headers/idxd_spec.o 00:13:11.035 LINK iscsi_fuzz 00:13:11.035 CXX test/cpp_headers/init.o 00:13:11.294 CXX test/cpp_headers/ioat.o 00:13:11.294 LINK nvme_manage 00:13:11.294 CXX test/cpp_headers/ioat_spec.o 
00:13:11.294 CXX test/cpp_headers/iscsi_spec.o 00:13:11.553 CXX test/cpp_headers/json.o 00:13:11.553 CXX test/cpp_headers/jsonrpc.o 00:13:11.553 CXX test/cpp_headers/likely.o 00:13:11.813 CXX test/cpp_headers/log.o 00:13:11.813 CXX test/cpp_headers/lvol.o 00:13:12.072 CXX test/cpp_headers/memory.o 00:13:12.072 CXX test/cpp_headers/mmio.o 00:13:12.072 CXX test/cpp_headers/nbd.o 00:13:12.072 CXX test/cpp_headers/notify.o 00:13:12.331 CXX test/cpp_headers/nvme.o 00:13:12.331 CXX test/cpp_headers/nvme_intel.o 00:13:12.591 CXX test/cpp_headers/nvme_ocssd.o 00:13:12.591 CXX test/cpp_headers/nvme_ocssd_spec.o 00:13:12.591 CXX test/cpp_headers/nvme_spec.o 00:13:12.850 CXX test/cpp_headers/nvme_zns.o 00:13:12.850 CXX test/cpp_headers/nvmf.o 00:13:13.109 CXX test/cpp_headers/nvmf_cmd.o 00:13:13.109 CXX test/cpp_headers/nvmf_fc_spec.o 00:13:13.109 CXX test/cpp_headers/nvmf_spec.o 00:13:13.369 CXX test/cpp_headers/nvmf_transport.o 00:13:13.369 CXX test/cpp_headers/opal.o 00:13:13.628 CXX test/cpp_headers/opal_spec.o 00:13:13.628 CXX test/cpp_headers/pci_ids.o 00:13:13.628 CXX test/cpp_headers/pipe.o 00:13:13.887 CXX test/cpp_headers/queue.o 00:13:13.887 CXX test/cpp_headers/reduce.o 00:13:13.887 CXX test/cpp_headers/rpc.o 00:13:14.146 CXX test/cpp_headers/scheduler.o 00:13:14.146 CXX test/cpp_headers/scsi.o 00:13:14.146 CXX test/cpp_headers/scsi_spec.o 00:13:14.406 CXX test/cpp_headers/sock.o 00:13:14.406 CXX test/cpp_headers/stdinc.o 00:13:14.406 CXX test/cpp_headers/string.o 00:13:14.665 CXX test/cpp_headers/thread.o 00:13:14.665 CXX test/cpp_headers/trace.o 00:13:14.665 CXX test/cpp_headers/trace_parser.o 00:13:14.925 CXX test/cpp_headers/tree.o 00:13:14.925 CXX test/cpp_headers/ublk.o 00:13:14.925 CXX test/cpp_headers/util.o 00:13:14.925 CXX test/cpp_headers/uuid.o 00:13:15.184 CXX test/cpp_headers/version.o 00:13:15.184 CXX test/cpp_headers/vfio_user_pci.o 00:13:15.184 CXX test/cpp_headers/vfio_user_spec.o 00:13:15.451 CXX test/cpp_headers/vhost.o 00:13:15.451 CXX test/cpp_headers/vmd.o 00:13:15.451 CC test/dma/test_dma/test_dma.o 00:13:15.451 CXX test/cpp_headers/xor.o 00:13:15.718 CXX test/cpp_headers/zipf.o 00:13:15.718 LINK test_dma 00:13:15.718 CC examples/nvme/arbitration/arbitration.o 00:13:15.718 CC examples/blob/cli/blobcli.o 00:13:15.976 LINK arbitration 00:13:15.976 LINK blobcli 00:13:18.514 CC app/iscsi_tgt/iscsi_tgt.o 00:13:18.514 LINK iscsi_tgt 00:13:18.514 CC test/app/histogram_perf/histogram_perf.o 00:13:18.514 CC examples/nvme/hotplug/hotplug.o 00:13:18.514 LINK histogram_perf 00:13:18.514 CC examples/sock/hello_world/hello_sock.o 00:13:18.514 LINK hotplug 00:13:18.514 CC examples/bdev/bdevperf/bdevperf.o 00:13:18.514 LINK hello_sock 00:13:18.774 CC examples/nvme/cmb_copy/cmb_copy.o 00:13:18.774 LINK cmb_copy 00:13:19.033 LINK bdevperf 00:13:19.033 CC test/app/jsoncat/jsoncat.o 00:13:19.292 LINK jsoncat 00:13:19.861 CC test/app/stub/stub.o 00:13:19.861 LINK stub 00:13:19.861 CC examples/vmd/lsvmd/lsvmd.o 00:13:20.120 LINK lsvmd 00:13:21.499 CC app/spdk_tgt/spdk_tgt.o 00:13:21.499 LINK spdk_tgt 00:13:22.436 CC test/env/mem_callbacks/mem_callbacks.o 00:13:22.436 CC examples/nvme/abort/abort.o 00:13:22.436 LINK abort 00:13:22.436 LINK mem_callbacks 00:13:22.696 CC test/env/vtophys/vtophys.o 00:13:22.696 LINK vtophys 00:13:22.955 CC examples/vmd/led/led.o 00:13:22.955 LINK led 00:13:23.214 CC test/event/event_perf/event_perf.o 00:13:23.473 LINK event_perf 00:13:23.473 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:13:23.473 LINK env_dpdk_post_init 00:13:24.851 CC 
app/spdk_lspci/spdk_lspci.o 00:13:24.851 LINK spdk_lspci 00:13:25.789 CC test/event/reactor/reactor.o 00:13:25.789 LINK reactor 00:13:26.048 CC test/env/memory/memory_ut.o 00:13:26.616 LINK memory_ut 00:13:26.616 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:13:26.616 LINK pmr_persistence 00:13:26.616 CC app/spdk_nvme_perf/perf.o 00:13:26.874 CC test/env/pci/pci_ut.o 00:13:27.133 LINK spdk_nvme_perf 00:13:27.133 LINK pci_ut 00:13:27.417 CC test/event/reactor_perf/reactor_perf.o 00:13:27.417 LINK reactor_perf 00:13:27.985 CC app/spdk_nvme_identify/identify.o 00:13:28.244 gmake[2]: Nothing to be done for 'all'. 00:13:28.244 CC examples/nvmf/nvmf/nvmf.o 00:13:28.244 LINK spdk_nvme_identify 00:13:28.504 LINK nvmf 00:13:29.883 CC app/spdk_nvme_discover/discovery_aer.o 00:13:29.883 LINK spdk_nvme_discover 00:13:30.451 CC test/nvme/aer/aer.o 00:13:30.452 CC test/rpc_client/rpc_client_test.o 00:13:30.452 LINK aer 00:13:30.452 LINK rpc_client_test 00:13:31.020 CC test/nvme/reset/reset.o 00:13:31.020 CC app/spdk_top/spdk_top.o 00:13:31.280 LINK reset 00:13:31.540 LINK spdk_top 00:13:31.799 CC test/nvme/sgl/sgl.o 00:13:31.799 LINK sgl 00:13:32.367 CC examples/util/zipf/zipf.o 00:13:32.627 LINK zipf 00:13:32.627 CC test/thread/poller_perf/poller_perf.o 00:13:32.887 LINK poller_perf 00:13:32.887 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:13:32.887 LINK histogram_ut 00:13:33.147 CC test/unit/lib/accel/accel.c/accel_ut.o 00:13:33.440 CC examples/thread/thread/thread_ex.o 00:13:33.440 LINK thread 00:13:34.007 CC examples/idxd/perf/perf.o 00:13:34.007 LINK accel_ut 00:13:34.266 LINK idxd_perf 00:13:34.834 CC test/thread/lock/spdk_lock.o 00:13:34.834 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:13:34.834 CC app/fio/nvme/fio_plugin.o 00:13:35.094 fio_plugin.c:1491:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:13:35.094 struct spdk_nvme_fdp_ruhs ruhs; 00:13:35.094 ^ 00:13:35.094 1 warning generated. 
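(An aside on the lone compiler diagnostic above: clang's -Wgnu-variable-sized-type-not-at-end fires because struct spdk_nvme_fdp_ruhs ends in a flexible array member, and embedding such a type as a non-final field of another struct is a GNU extension rather than standard C. A minimal sketch that reproduces the same class of warning; the type and field names below are illustrative stand-ins, not SPDK's:)

/* ruhs_like stands in for a type whose last member is a flexible array,
 * as struct spdk_nvme_fdp_ruhs is in the diagnostic above. */
struct ruhs_like {
        unsigned int count;
        unsigned int descs[];   /* flexible array member: the "variable sized type" */
};

struct wrapper {
        struct ruhs_like ruhs;  /* variable-sized type NOT at the end of the
                                 * enclosing struct -> clang warns under
                                 * -Wgnu-variable-sized-type-not-at-end */
        int trailing;
};

int
main(void)
{
        return 0;
}

(Compiling a translation unit like this with clang produces the same "field 'ruhs' with variable sized type ... is a GNU extension" warning seen in the log; the build proceeds because the warning is not promoted to an error.)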
00:13:35.094 LINK spdk_lock
00:13:35.094 LINK spdk_nvme
00:13:35.094 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:13:35.094 CC test/nvme/e2edp/nvme_dp.o
00:13:35.353 LINK nvme_dp
00:13:35.353 LINK blob_bdev_ut
00:13:35.613 CC app/fio/bdev/fio_plugin.o
00:13:35.872 CC test/unit/lib/blob/blob.c/blob_ut.o
00:13:35.872 LINK spdk_bdev
00:13:35.872 CC test/nvme/overhead/overhead.o
00:13:36.131 LINK overhead
00:13:37.070 LINK bdev_ut
00:13:37.329 CC test/nvme/err_injection/err_injection.o
00:13:37.588 LINK err_injection
00:13:38.967 CC test/unit/lib/bdev/part.c/part_ut.o
00:13:39.536 CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:13:39.536 LINK tree_ut
00:13:39.795 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:13:40.055 LINK part_ut
00:13:40.055 CC test/unit/lib/dma/dma.c/dma_ut.o
00:13:40.314 LINK dma_ut
00:13:40.314 LINK blobfs_async_ut
00:13:40.574 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:13:40.574 LINK scsi_nvme_ut
00:13:40.574 LINK blob_ut
00:13:40.833 CC test/nvme/startup/startup.o
00:13:40.834 LINK startup
00:13:41.093 CC test/unit/lib/event/app.c/app_ut.o
00:13:41.093 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:13:41.352 LINK app_ut
00:13:41.352 LINK gpt_ut
00:13:41.612 CC test/unit/lib/event/reactor.c/reactor_ut.o
00:13:41.612 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:13:41.871 LINK reactor_ut
00:13:42.131 LINK vbdev_lvol_ut
00:13:42.390 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:13:42.649 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:13:42.909 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:13:42.909 CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:13:42.909 LINK blobfs_bdev_ut
00:13:42.909 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:13:42.909 LINK blobfs_sync_ut
00:13:42.909 LINK ioat_ut
00:13:43.168 CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:13:43.168 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:13:43.428 LINK init_grp_ut
00:13:43.428 LINK conn_ut
00:13:43.687 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:13:43.687 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:13:43.687 LINK bdev_raid_ut
00:13:43.946 LINK bdev_zone_ut
00:13:43.946 CC test/nvme/reserve/reserve.o
00:13:43.946 CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:13:44.206 LINK reserve
00:13:44.206 LINK bdev_ut
00:13:44.775 LINK json_parse_ut
00:13:44.775 LINK iscsi_ut
00:13:45.034 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:13:45.034 LINK bdev_raid_sb_ut
00:13:45.293 CC test/nvme/simple_copy/simple_copy.o
00:13:45.293 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:13:45.293 LINK simple_copy
00:13:45.553 CC test/unit/lib/json/json_util.c/json_util_ut.o
00:13:45.553 LINK concat_ut
00:13:45.812 LINK json_util_ut
00:13:46.071 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:13:46.071 CC test/unit/lib/iscsi/param.c/param_ut.o
00:13:46.330 CC test/unit/lib/json/json_write.c/json_write_ut.o
00:13:46.330 LINK raid1_ut
00:13:46.330 LINK param_ut
00:13:46.330 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:13:46.330 LINK jsonrpc_server_ut
00:13:46.590 CC test/nvme/connect_stress/connect_stress.o
00:13:46.590 LINK connect_stress
00:13:46.590 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:13:46.850 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:13:46.850 CC test/nvme/boot_partition/boot_partition.o
00:13:46.850 LINK boot_partition
00:13:46.850 LINK portal_grp_ut
00:13:47.109 LINK json_write_ut
00:13:47.109 LINK vbdev_zone_block_ut
00:13:47.369 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:13:47.629 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:13:47.629 LINK tgt_node_ut
00:13:47.888 CC test/unit/lib/log/log.c/log_ut.o
00:13:47.888 CC test/nvme/compliance/nvme_compliance.o
00:13:47.888 LINK log_ut
00:13:47.888 CC test/nvme/fused_ordering/fused_ordering.o
00:13:48.148 CC test/nvme/doorbell_aers/doorbell_aers.o
00:13:48.148 LINK fused_ordering
00:13:48.148 CC test/nvme/fdp/fdp.o
00:13:48.148 LINK nvme_compliance
00:13:48.148 LINK doorbell_aers
00:13:48.148 LINK fdp
00:13:49.087 CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:13:49.656 LINK bdev_nvme_ut
00:13:49.915 CC test/unit/lib/notify/notify.c/notify_ut.o
00:13:49.915 LINK notify_ut
00:13:49.915 LINK lvol_ut
00:13:50.174 CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:13:50.174 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:13:50.743 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:13:50.743 LINK nvme_ut
00:13:51.336 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:13:51.336 CC test/unit/lib/scsi/dev.c/dev_ut.o
00:13:51.336 LINK dev_ut
00:13:51.607 LINK tcp_ut
00:13:51.607 CC test/unit/lib/sock/sock.c/sock_ut.o
00:13:51.866 CC test/unit/lib/scsi/lun.c/lun_ut.o
00:13:52.126 CC test/unit/lib/thread/thread.c/thread_ut.o
00:13:52.126 LINK ctrlr_ut
00:13:52.126 LINK lun_ut
00:13:52.126 LINK nvme_ctrlr_ut
00:13:52.386 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:13:52.386 LINK sock_ut
00:13:52.386 CC test/unit/lib/sock/posix.c/posix_ut.o
00:13:52.647 LINK posix_ut
00:13:52.647 CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:13:52.907 LINK scsi_ut
00:13:52.907 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:13:52.907 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:13:52.907 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:13:52.907 LINK nvme_ctrlr_cmd_ut
00:13:52.907 LINK thread_ut
00:13:53.166 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:13:53.166 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:13:53.424 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:13:53.424 LINK nvme_ctrlr_ocssd_cmd_ut
00:13:53.424 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:13:53.424 LINK scsi_bdev_ut
00:13:53.424 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:13:53.424 CC test/unit/lib/util/base64.c/base64_ut.o
00:13:53.424 LINK iobuf_ut
00:13:53.683 LINK base64_ut
00:13:53.683 LINK ctrlr_bdev_ut
00:13:53.683 LINK scsi_pr_ut
00:13:53.683 LINK subsystem_ut
00:13:53.942 CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:13:53.942 LINK ctrlr_discovery_ut
00:13:53.942 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:13:53.942 CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:13:53.942 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:13:53.942 LINK nvme_ns_ut
00:13:53.942 LINK pci_event_ut
00:13:54.202 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:13:54.202 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:13:54.202 LINK subsystem_ut
00:13:54.202 CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:13:54.202 LINK bit_array_ut
00:13:54.462 LINK cpuset_ut
00:13:54.462 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:13:54.462 CC test/unit/lib/util/crc16.c/crc16_ut.o
00:13:54.462 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:13:54.462 LINK crc16_ut
00:13:54.462 LINK nvmf_ut
00:13:54.722 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:13:54.722 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:13:54.722 CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:13:54.722 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
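The util unit tests compiling here and just below (crc16_ut, crc32_ieee_ut, crc32c_ut, crc64_ut) exercise checksum helpers. As a rough illustration of the kind of known-answer check such a test performs (this is not SPDK's actual test code; the file and function names are hypothetical), a bit-by-bit CRC-32C can be verified against the published check value for the ASCII string "123456789":

    /* crc32c_check.c - hypothetical sketch of a known-answer CRC-32C test;
     * not taken from SPDK's crc32c_ut. */
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bit-by-bit CRC-32C (Castagnoli): reflected form, polynomial 0x82F63B78,
     * initial value and final XOR of 0xFFFFFFFF. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= p[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* 0xE3069283 is the standard CRC-32C check value for "123456789". */
        assert(crc32c("123456789", 9) == 0xE3069283u);
        printf("crc32c known-answer check passed\n");
        return 0;
    }

Real implementations typically use a lookup table or the SSE4.2 crc32 instruction instead of this bitwise loop, but a unit test of this shape pins the implementation to the published check value regardless of which variant is compiled in.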
00:13:54.722 LINK crc32_ieee_ut
00:13:54.722 LINK rpc_ut
00:13:54.722 CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:13:54.981 LINK nvme_ns_ocssd_cmd_ut
00:13:54.981 LINK crc32c_ut
00:13:54.981 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:13:54.981 CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:13:54.981 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:13:54.981 CC test/unit/lib/util/crc64.c/crc64_ut.o
00:13:54.981 LINK crc64_ut
00:13:54.981 LINK nvme_ns_cmd_ut
00:13:55.241 LINK nvme_poll_group_ut
00:13:55.241 CC test/unit/lib/util/dif.c/dif_ut.o
00:13:55.241 LINK nvme_pcie_ut
00:13:55.241 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:13:55.241 LINK nvme_quirks_ut
00:13:55.241 LINK nvme_qpair_ut
00:13:55.500 LINK idxd_user_ut
00:13:55.500 CC test/unit/lib/util/iov.c/iov_ut.o
00:13:55.500 CC test/unit/lib/rdma/common.c/common_ut.o
00:13:55.500 CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:13:55.500 LINK iov_ut
00:13:55.758 LINK rdma_ut
00:13:55.758 LINK common_ut
00:13:55.758 CC test/unit/lib/util/math.c/math_ut.o
00:13:55.758 CC test/unit/lib/util/pipe.c/pipe_ut.o
00:13:55.758 LINK math_ut
00:13:55.758 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:13:55.758 CC test/unit/lib/util/string.c/string_ut.o
00:13:56.017 LINK idxd_ut
00:13:56.017 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:13:56.017 LINK pipe_ut
00:13:56.017 LINK string_ut
00:13:56.017 LINK transport_ut
00:13:56.017 LINK nvme_tcp_ut
00:13:56.017 CC test/unit/lib/util/xor.c/xor_ut.o
00:13:56.017 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:13:56.276 LINK nvme_transport_ut
00:13:56.276 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:13:56.276 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:13:56.276 LINK xor_ut
00:13:56.276 LINK nvme_io_msg_ut
00:13:56.536 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:13:56.536 LINK dif_ut
00:13:56.536 LINK nvme_opal_ut
00:13:56.796 LINK nvme_fabric_ut
00:13:56.796 LINK nvme_pcie_common_ut
00:13:57.365 LINK nvme_rdma_ut
00:13:58.303 06:06:06 -- spdk/autopackage.sh@44 -- $ gmake -j10 clean
00:13:58.562 gmake[1]: Nothing to be done for 'clean'.
00:13:58.821 ps: stdin: not a terminal
00:14:02.111 gmake[2]: Nothing to be done for 'clean'.
00:14:02.679 06:06:10 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:14:02.679 06:06:10 -- common/autotest_common.sh@718 -- $ xtrace_disable
00:14:02.679 06:06:10 -- common/autotest_common.sh@10 -- $ set +x
00:14:02.679 06:06:10 -- spdk/autopackage.sh@48 -- $ timing_finish
00:14:02.679 06:06:10 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:14:02.679 06:06:10 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
+ [[ -n 1257 ]]
+ sudo kill 1257
00:14:02.947 [Pipeline] }
00:14:02.967 [Pipeline] // timeout
00:14:02.972 [Pipeline] }
00:14:02.989 [Pipeline] // stage
00:14:02.994 [Pipeline] }
00:14:03.010 [Pipeline] // catchError
00:14:03.020 [Pipeline] stage
00:14:03.023 [Pipeline] { (Stop VM)
00:14:03.037 [Pipeline] sh
00:14:03.319 + vagrant halt
00:14:05.871 ==> default: Halting domain...
00:14:23.982 [Pipeline] sh
00:14:24.264 + vagrant destroy -f
00:14:26.800 ==> default: Removing domain...
00:14:26.814 [Pipeline] sh
00:14:27.098 + mv output /var/jenkins/workspace/freebsd-vg-autotest/output
00:14:27.110 [Pipeline] }
00:14:27.129 [Pipeline] // stage
00:14:27.134 [Pipeline] }
00:14:27.150 [Pipeline] // dir
00:14:27.156 [Pipeline] }
00:14:27.173 [Pipeline] // wrap
00:14:27.179 [Pipeline] }
00:14:27.194 [Pipeline] // catchError
00:14:27.203 [Pipeline] stage
00:14:27.205 [Pipeline] { (Epilogue)
00:14:27.219 [Pipeline] sh
00:14:27.500 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:14:27.512 [Pipeline] catchError
00:14:27.514 [Pipeline] {
00:14:27.529 [Pipeline] sh
00:14:27.811 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:14:27.811 Artifacts sizes are good
00:14:27.820 [Pipeline] }
00:14:27.835 [Pipeline] // catchError
00:14:27.845 [Pipeline] archiveArtifacts
00:14:27.851 Archiving artifacts
00:14:27.888 [Pipeline] cleanWs
00:14:27.899 [WS-CLEANUP] Deleting project workspace...
00:14:27.899 [WS-CLEANUP] Deferred wipeout is used...
00:14:27.904 [WS-CLEANUP] done
00:14:27.907 [Pipeline] }
00:14:27.923 [Pipeline] // stage
00:14:27.928 [Pipeline] }
00:14:27.941 [Pipeline] // node
00:14:27.947 [Pipeline] End of Pipeline
00:14:27.992 Finished: SUCCESS